GSA operates EPLS through funds obtained from 24 federal agencies in support of the Integrated Acquisition Environment (IAE), a bundle of services established to streamline the federal government acquisition process. Agencies are required to report excluded parties by entering information directly into the EPLS database within 5 working days after the exclusion becomes effective. When a business is excluded, the action extends to all its divisions and organizational units, as well as specifically named affiliates. Affiliates may include businesses with interlocking management or shared facilities and equipment; new businesses with the same ownership and employees as previously excluded businesses; and businesses linked by common interests among family members.

The Federal Acquisition Regulation (FAR) lists the information to be entered in EPLS, such as the individual’s or business’s name and address, a code signifying the cause of the exclusion, the length of the exclusion, the name of the agency taking the action, and the contractor identification number, if applicable. With regard to the latter, for firms the FAR requires entry of a DUNS number—a unique nine-digit identification number assigned by Dun & Bradstreet, Inc. If available and disclosure is authorized, excluding agencies should also enter an employer identification number (EIN) or other taxpayer identification number (TIN), or, if excluding an individual, a Social Security number (SSN). Department of Defense agencies may also enter a Commercial and Government Entity (CAGE) code, a unique identifier assigned by the department.
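To make these data requirements concrete, the sketch below models an exclusion record as a simple data structure covering the fields just described. It is purely illustrative: the field names are hypothetical and do not reflect GSA’s actual EPLS schema.

```python
# Hypothetical model of an EPLS exclusion record, limited to the fields the
# FAR requires or permits as described above. Names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExclusionRecord:
    name: str                          # individual's or business's name
    address: str                       # business or individual address
    cause_code: str                    # code signifying the cause of the exclusion
    exclusion_period: str              # length of the exclusion
    excluding_agency: str              # name of the agency taking the action
    duns: Optional[str] = None         # nine-digit D&B number; required for firms
    ein_tin_ssn: Optional[str] = None  # EIN/TIN, or SSN for an individual, if
                                       # available and disclosure is authorized
    cage_code: Optional[str] = None    # CAGE code; DOD agencies may enter this
```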
Before awarding contracts or making purchases from GSA’s Federal Supply Schedule, contracting officers and other agency officials are required to check EPLS to ensure that a prospective vendor is not an excluded party. Generally, excluded parties may complete their performance on preexisting contracts. However, agencies must check EPLS prior to making any modifications that add new work or extend the period of performance, unless a waiver is granted by the head of the agency.

Businesses and individuals that have been excluded for egregious offenses are continuing to receive federal contracts and other funds. We developed case studies on several of these excluded parties and found that they continued to receive contracts and other federal payments in part because agency officials failed to search EPLS or because system deficiencies prevented their searches from revealing that the entity was excluded. In other cases, searches did not reveal exclusions because the excluded businesses and individuals were fraudulently operating under different identities. We also identified one case in which the Army chose to continue doing business with an excluded party despite its debarment. Table 1 highlights 15 of the case studies we developed. More detailed information on 10 of these cases follows the table. An additional 10 cases are listed in appendix II.

Case 1: GSA debarred this company and its principals in May 2007 for conspiring to defraud the government by affixing false manufacturing labels on chemicals they were selling to GSA. In addition, investigators from the Environmental Protection Agency (EPA) and the Drug Enforcement Administration learned this company was selling an ozone-depleting chemical to a company that in turn sold the chemical to individuals for the illegal production of methamphetamines. Despite its debarment, the company has since received over $1 million in awards from four different federal agencies; the majority of these awards were made by USDA and GSA. USDA officials told us that they were exercising an option year on a previously existing contract with the company and that their internal procedures did not require them to conduct an EPLS search prior to awarding the company $700,000 associated with the option. However, the officials were mistaken: the FAR states that options will not be exercised with debarred parties unless the head of an agency determines that the agency should continue the contract. Furthermore, when we asked GSA officials why they were doing business with a company they had recently debarred, they told us that it was not the same company. Specifically, they told us that they had checked EPLS and found that the company they were currently doing business with had a different address than the company they originally debarred, even though both shared the same name. When we examined records associated with the debarment, however, we confirmed that it was in fact the same company: GSA’s debarring official had mistakenly entered the address of the company’s attorney into EPLS instead of its business address. After we notified GSA, it corrected the entry in May 2008. However, because of the incorrect address and the lack of a DUNS number, agencies that conducted EPLS searches related to this company prior to May 2008 would have been unable to determine that it was debarred. According to one of the company’s principals, they continued to accept federal funds during the debarment because agencies continued to place orders on existing contracts; the principals did not feel obligated to point out that the agencies were in error.

Case 2: In 2003 and 2004, the Navy debarred this company, its 20 subsidiaries, and several of its executives (including its comptroller, treasurer, and president/co-owner) in conjunction with a massive tax fraud scheme. Specifically, the company pleaded guilty in November 2002 to conspiring to defraud the Department of Defense through falsified cost claims and money laundering related to its business of providing cable television service to U.S. military installations. Prior to this plea, the company’s president/co-owner fled to New Zealand via Canada and Barbados under a Grenadian passport obtained in the name of a deceased former neighbor. He was apprehended by Australian police in 2002 while attempting to obtain a Canadian visa in Sydney and was extradited 4 years later. He was eventually convicted of tax evasion, false claims, and mail fraud, sentenced to 108 months of imprisonment, and ordered to pay a $4 million fine.

During 2006 and 2007, the Navy lifted the debarments of the parent company and 11 of the company’s subsidiaries because the company’s president and other executives agreed to remove themselves from office. However, the remaining 11 subsidiaries continued to be debarred, in part because the company was unable to provide the Navy with evidence that the former president and other executives had actually resigned from day-to-day operations. Despite the subsidiaries’ ongoing debarred status, the Navy awarded $230,000 to 2 of these 11 debarred subsidiaries during 2006 and 2007. About $225,000 of this total was awarded because the Navy searched EPLS using a variation of the company name that was not listed as debarred.
Although the parent company continues to be the sole provider of cable for numerous military bases throughout the world, the Navy remains concerned about doing business with the company, in part because of its continued relationship with the former president. Specifically, prior to his departure from office, the president gifted his 50 percent ownership interest in the company to his wife; she was never debarred but was previously suspended for 5 months beginning in October 2006. Currently, she is president and CEO and has assumed management of the corporate staff. As of March 5, 2007, the debarred former president was serving the first 6 months of his sentence under house arrest.

Case 4: GSA suspended this computer services company in August 2006 after a conviction for falsifying books and records used for required SEC filings. USDA awarded the company $120,000 in September 2006. Although USDA procurement staff searched for the correct company in EPLS, they left out a comma when spelling the name, and the suspension did not appear.

Case 5: In September 2006, GSA suspended this construction company and its president after the president was found to have used fictitious Social Security numbers to open multiple GSA auction accounts to bid on surplus property. These fraudulent accounts allowed him to continue to bid on property from GSA while his primary account was in default for nonpayment. Despite this suspension, Interior made seven awards to the company in 2007 totaling over $230,000. For five of these awards, Interior was unable to provide evidence that EPLS was checked prior to the award. The remaining two awards were both made within a month of the suspension. Because GSA had failed to enter the suspension information into EPLS in a timely manner, Interior was unaware of the company’s ineligibility. Specifically, GSA did not enter the company into EPLS until more than a month after its suspension, even though the FAR requires agencies to report excluded parties within 5 working days after the exclusion becomes effective.

Case 6: This cleaning supply manufacturer was convicted of illegally discharging chemicals into a city sewer system. GSA suspended the company in March 2007. Prior to its suspension, the company had been approved as a GSA Supply Schedule vendor through July 2011. Although agencies are required to check EPLS prior to making purchases through the Supply Schedule, VA officials assumed that the company was eligible based on its Supply Schedule listing and purchased $1,500 of cleaning products in August 2007.

Case 7: The Navy initially contracted with this engineering company in February 2006 to replace 500 “brittle fasteners” on steam pipes on the aircraft carrier U.S.S. John F. Kennedy. Subsequently, Navy personnel conducted ongoing inspections of the replacements to verify that they had been properly changed. The Navy suspended the company in April 2006 when it found that one of the company’s employees was swapping nonconforming parts for the correct fasteners he had recently installed, after those fasteners had already passed inspection. The employee used this scheme because he had underestimated the number of fasteners he needed to complete the replacement work. According to documents provided by Navy officials, if these pipes had ruptured as a result of faulty fasteners, those aboard the carrier could have suffered lethal burns.
Despite these actions, the Navy made three awards worth a total of $110,000 to the company within a month of the suspension because contracting officers did not check EPLS to verify the company’s eligibility. The Navy awarded the company an additional $4,000 when another contracting officer misspelled the company’s name in an EPLS search.

Case 9: Treasury suspended this administrative services company in March 2004 for inflating costs on invoices submitted to the IRS. Prior to this suspension, in September 2003, NASA issued a contract to the company for training logistics support services. In a memorandum describing this award decision, NASA made specific reference to ongoing litigation related to cost inflation on IRS invoices but noted that at that time “neither the IRS, nor the DOJ has initiated suspension or debarment actions.” Even though NASA had knowledge of the case, it failed to check EPLS for a change in contractor eligibility prior to making modifications to the company’s contract in 2006, as required by the FAR. Instead, NASA simply relied on its original 2003 EPLS check when increasing the contract’s value by $450,000 beyond the minimum contract value.

Case 11: The CEO of this electronics company was convicted in June 2004 of making fraudulent purchases with government purchase card information that he stole from Navy officials who were making purchases from his company. The Navy debarred the CEO and his company in October 2005. However, DLA’s automated purchasing system, which does not interface with EPLS, placed an order with the company during its debarment for $3,000 worth of electrical components. In addition, the CEO created a “new” company using a slightly altered business name and different DUNS numbers and CAGE codes—the three primary unique identifiers used to locate a firm within EPLS. He was then able to receive an additional $30,000 in awards from DLA during 2006 and 2007. Our investigation also revealed that this second company shares the same address, phone number, and bank account as the debarred company.

Case 12: This case involves a debarred individual who used a series of ownership changes to allow his durable medical equipment company to continue to receive reimbursements from Medicare. In April 2003, HHS debarred the owner for 5 years after he pleaded guilty to wire fraud and Medicare fraud related to a scheme in which he used his company to sell medically unnecessary incontinence kits to nursing homes. Because HHS did not debar the individual’s company, he transferred ownership of the company to his wife in an attempt to continue receiving Medicare reimbursements. HHS objected to this transfer and threatened to debar the entire company unless another owner could be found. The couple then sold the business to a neighbor. After 2 years, citing financial difficulties, the neighbor defaulted on her obligations and returned the business to the original owner’s wife. After the wife reassumed control of the company, she legally changed her last name back to her maiden name, even though she was still married to the original owner. She admitted to our investigators that she did so to avoid “difficulties” in conducting business using the same name as a convicted criminal. She also transferred the full assets of her husband’s former company to a preexisting durable medical equipment company that she also owned and changed the name under which the company would do business.
The couple told us, and the Medicare program confirmed, that the business continued to receive reimbursements from Medicare for the remainder of the husband’s debarment. The husband’s debarment terminated in April 2008, and he has returned to running the original company’s day-to-day operations.

Case 13: GSA debarred the owner of this aircraft adhesives company in November 2006 after he was convicted of wire fraud related to a scheme in which he conspired with his subcontractor to fraudulently change expiration dates on adhesives sold to the Navy. The adhesives he sold to the Navy were 5 years out of date. As part of the debarment, GSA entered into an administrative compliance agreement with the owner that allowed his company to continue to do business with the federal government. This agreement was based in part on the owner’s assertion that he had voluntarily built a “firewall” between himself and the day-to-day operations of his company. However, our investigation revealed that the owner misled GSA and was in reality still running the company through an intermediary, using anonymous e-mail accounts and untraceable prepaid cell phones. Specifically, the intermediary, who was supposedly in charge of daily operations, told us that he e-mailed all transactions and communications to the debarred owner for review. This information included contracts, government orders, and orders from suppliers. In addition, the intermediary told us that he provided the owner with daily updates on company operations using the prepaid cell phones. To prevent detection, the intermediary drove miles away from the company every day at lunch to place the calls. Using this scheme, the owner was able to continue to run the company, receiving $700,000 in improper payments since the administrative compliance agreement went into effect.

Case 15: The Army decided to pay this company millions of dollars even though it had debarred the company and its president for attempting to illegally sell nuclear bomb parts to North Korea. Although the Army had several options for terminating its contract with the company, it is not clear whether the Army considered these options because the officials we spoke with were not sure of the exact circumstances surrounding the decision. In March 2003, the U.S. Army Contracting Command for Europe awarded a German company a contract with two 1-year options to provide “civilian on the battlefield” actors to participate in training exercises. These actors were not required to have any specialized skills other than speaking some English. In July 2005, the Army debarred the company and its president based on the president’s 2004 attempt to illegally ship dual-use aluminum tubes, which can be used to develop nuclear bombs, to North Korea. German customs authorities had twice denied the president a license to ship the aluminum tubes to North Korea, once in 2002 and again in 2003, and specifically told him that the tubes were likely to be used for the “North Korean nuclear program.” Despite this warning, the president attempted to smuggle the aluminum tubes to southeast Asia aboard a French vessel and misled German authorities by telling them that the tubes had been returned to a vendor in the United Kingdom. Germany subsequently convicted the president under the German Federal Foreign Trade Act and the Federal Weapons of War Control Act.
In its decision to debar the company, Army officials stated that because the president “sold potential nuclear bomb making materials to a well-known enemy of the United States,” the United States had “a compelling interest to discontinue any business with this morally bankrupt individual” and that continuing to do business with the company would be “irresponsible.” The contractor notified the Command of the proposed debarment in May 2005, but the Command decided that the action did not prohibit it from continuing to do business with the company. Ultimately, the Army paid the company in excess of $4 million throughout fiscal year 2006.

One potential avenue for termination that the Army could have considered relates to a contractual provision stating that “contractors performing services in the Federal Republic of Germany shall comply with German law…. Compliance with this clause and German law is a material contract requirement. Noncompliance by the Contractor or Subcontractor at any tier shall be grounds for issuing a negative past performance evaluation and terminating this contract, task order, or delivery order for default.” Even though the company violated the German Federal Foreign Trade Act and the Federal Weapons of War Control Act, the Army Command officials we spoke with did not indicate that this option had been considered. Moreover, the Command officials told us that the Army was “legally obligated” to continue the contract based on the provision in the FAR that specifies that “agencies may continue contracts or subcontracts in existence at the time the contractor was debarred, suspended, or proposed for debarment unless the agency head directs otherwise.” Although this provision grants the Army the authority to continue the contract, it does not obligate the Army to do so. In fact, the FAR permits the federal government to terminate contracts for convenience and for default, depending on the circumstances. Although the Command officials we spoke with told us that both of these options had been considered, when we asked for more detailed information, they told us that they were not involved in the decision-making process and were not sure of the exact circumstances surrounding the decision. In addition, there was no contemporaneous documentation to support the decision. Thus, the Command continued to pay the company millions of dollars, even though the Army had determined that doing business with the company would be “irresponsible.”

Most of the improper awards and payments we identified can be attributed to ineffective management of the EPLS database or to control weaknesses at both excluding and procuring agencies. For example, our cases and analyses of EPLS data show that EPLS entries may lack DUNS numbers, that the database has insufficient search capabilities, and that a number of the listed points of contact for further information about exclusions are incorrect. Although we did not conduct a comprehensive review of each agency’s controls, our case studies also show that excluding agencies failed to enter information into EPLS in a timely manner and that procuring agencies failed to check EPLS prior to making awards, including purchases from the GSA Schedule. To illustrate the latter issue, we used our own purchase card to buy body armor worth over $3,000 off the Supply Schedule from a company that had been debarred for falsifying tests related to the safety of its products.
As described below, our cases and analysis of EPLS data demonstrate that no single agency is proactively monitoring the content or function of the database.

EPLS Contains Incomplete Information: As of July 2007, GSA updated EPLS to prevent excluding agencies from completing an entry without entering a DUNS number. This modification, which was made in response to an earlier GAO recommendation, was intended to enable agencies to determine with confidence that a prospective vendor was not currently excluded. However, during our initial analysis of the 437 firms entered into EPLS between June 29, 2007, and January 23, 2008, we found that 38—9 percent—did not have any information listed in the DUNS field. According to GSA, agencies may have been able to complete these entries without a DUNS number because the entries were modifications of existing records. For example, if an agency suspended a company prior to July 2007 and then updated that entry in September 2007 to reflect that the company had subsequently been debarred, the agency would not be required to enter a DUNS number. This gap means that only new exclusions entered after the July 2007 effective date require a DUNS number to complete an EPLS entry. Without this unique identification information, agencies are forced to rely on name and address matches, making it extremely difficult to definitively identify an excluded party.

EPLS Search Functions Are Inadequate: When agency staff query EPLS by name or address to verify vendor eligibility, there is no guarantee that a search will reveal a suspension or debarment action. For example, we identified agencies that conducted “exact name” EPLS searches but still awarded contracts to an excluded party because they did not use correct spelling or punctuation in their searches. Unlike other search engines, an exact name search in EPLS must literally be exact in terms of spelling and punctuation or an excluded party will not be revealed. For example, a party listed as “Company XYZ, Inc.” in EPLS would not be identified if an agency left out the comma in the name and instead conducted a search for “Company XYZ Inc.” (A simplified illustration of this pitfall follows the list of control weaknesses below.) Other agencies we identified provided proof that they conducted searches by DUNS numbers, but their searches similarly did not reveal any exclusions, even though the companies the agencies were looking for were listed in EPLS with DUNS numbers. We could not determine why these searches failed.

EPLS Agency Points of Contact Are Incorrect: The EPLS Web site lists points of contact for further information regarding specific exclusion actions. This directory covers 59 agencies and lists 78 different individuals. Overall, we were unable to contact suspension and debarment personnel at 15—about 25 percent—of the agencies with listed points of contact. For example, we initially found that 19 of the phone numbers listed were disconnected or otherwise nonfunctioning. In addition, we found that 6 points of contact were incorrect; in one instance, the individual listed had been retired for 5 years. These inaccuracies increase the likelihood that agency staff will be unable to confirm actions with the excluding agency.

We identified the following excluding and procuring agency control weaknesses:

Excluding Agencies Do Not Always Enter DUNS Numbers: As previously indicated, we found that 38 of the 437 EPLS entries agencies made between June 29, 2007, and January 23, 2008, lacked an entry in the DUNS field.
We also found that for 81 additional firms entered into EPLS during the same period, the excluding agency entered a DUNS number of “000000000” or some other nonidentifying information. Therefore, 119 firms in total—27 percent—lacked an identifiable DUNS number. Incorrect DUNS numbers prevent contracting officers and other agency officials from readily identifying debarred or suspended parties when making awards.

Agencies Did Not Enter Exclusions in a Timely Manner: The FAR mandates that agencies enter all required information regarding debarment and suspension actions into EPLS within 5 working days after the action becomes effective. However, our case examples identified several instances in which agencies failed to do so. For instance, VA made a purchase from a vendor while the vendor was in the midst of a 1-month suspension for a violation of the antifraud provisions of federal securities laws. Because GSA, the suspending agency, did not enter the action into EPLS until several days after the suspension had been lifted, VA had no mechanism to identify the suspension and thus proceeded with the purchase from the suspended vendor.

Contracting Officers Did Not Check EPLS: The FAR requires contracting officers to check that proposed vendors are not listed in EPLS. In six of our case studies, we found that procurement staff made no effort to query EPLS to determine vendor eligibility prior to awarding an initial contract or modifying an existing contract to extend the period of performance or increase the scope of work, resulting in 14 awards to ineligible parties.

Automated Purchasing Systems May Not Interface with EPLS: Some agencies use automated systems to process routine purchasing transactions. In this situation, agencies still have a responsibility to verify contractor eligibility before making a purchase. However, unless the automated system is able to interface directly with EPLS, it is possible for the system to unintentionally make purchases from excluded parties. For example, 90 percent of DLA’s annual purchases go through an automated system that does not interface with EPLS. We identified four instances where DLA contracted with and made payments to excluded parties as a result of using this system.

Excluded Parties Remain Listed on the GSA Schedule: Under the Federal Supply Schedule program, GSA establishes long-term governmentwide contracts with commercial firms to provide access to over 11 million commercial supplies and services that can be ordered directly from the contractors or through an online shopping and ordering system. GSA requires new vendors to demonstrate that they are responsible and to certify that they are currently eligible for federal contracts. On its Web site, GSA states that the Schedule is a “reliable and proven one-stop online resource” and “offers the most comprehensive selection of approved products and services from GSA contracts.” However, vendors are not removed from the Schedule if they become debarred or suspended. The FAR specifically prohibits agencies from making a Supply Schedule purchase from an excluded contractor. Nonetheless, these Schedule listings can result in agencies purchasing items from unscrupulous vendors. For example, in one of our cases, an agency incorrectly assumed that GSA was responsible for ensuring the ongoing eligibility of vendors listed on the Supply Schedule and thus did not check EPLS before it made purchases from a company that illegally dumped chemicals into city sewers.
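The “Company XYZ” example above shows how brittle exact-string lookups are. The sketch below is ours, not GSA’s implementation: it illustrates how an exact match misses a listed party over a single punctuation difference, and how normalizing names before comparison would tolerate such variations.

```python
# Minimal sketch (not GSA's implementation) of an exact-name lookup versus a
# normalized lookup against a list of excluded-party names.
import re

def normalize(name: str) -> str:
    """Lowercase, drop punctuation, and collapse runs of whitespace."""
    stripped = re.sub(r"[^\w\s]", "", name.lower())
    return re.sub(r"\s+", " ", stripped).strip()

# Illustrative EPLS-style entry (see the "Company XYZ" example above).
excluded_names = ["Company XYZ, Inc."]

def exact_match(query: str) -> bool:
    # Mirrors an "exact name" search: any spelling or punctuation
    # difference hides the exclusion.
    return query in excluded_names

def normalized_match(query: str) -> bool:
    targets = {normalize(n) for n in excluded_names}
    return normalize(query) in targets

print(exact_match("Company XYZ Inc."))       # False: the missing comma hides the listing
print(normalized_match("Company XYZ Inc."))  # True: punctuation no longer matters
```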
To verify that no warnings exist to alert agencies that they are making purchases from excluded parties, we used our own GAO purchase card to acquire body armor worth over $3,000 from a Supply Schedule company that had been debarred for falsifying tests related to the safety of its products. Nothing in the purchase process indicated that the company was ineligible to do business with the government, and the company did not inform us of its excluded status.

On November 18, 2008, we held a corrective action briefing for agencies that were the subjects of our case studies. Attendees at this meeting included representatives from the Army, the Navy, the Air Force, the Defense Logistics Agency, the Department of Energy, the Department of Veterans Affairs, the General Services Administration, and the National Aeronautics and Space Administration. At this briefing, we explained the types of cases we investigated and the overall control weaknesses we identified. In response, GSA officials noted that most of the issues we had identified could be solved through improved training, and the other agencies agreed. We also referred the businesses and individuals discussed in our case studies to the appropriate agency officials for further investigation.

EPLS system deficiencies and agency control weaknesses have allowed contractors that have been deemed insufficiently responsible to do business with the government and to receive federal funds during their period of ineligibility. These excluded parties will no doubt continue to benefit unless GSA strengthens its oversight and management of EPLS. More importantly, agencies can prevent improper awards in the future by strictly adhering to the requirement to check EPLS prior to making awards and by entering all information related to excluded parties in an accurate and timely fashion.

To improve the effectiveness of the suspension and debarment process, we recommend that the Administrator of General Services take the following five actions: (1) issue guidance to procurement officials on the requirement to check EPLS prior to awarding contracts and to suspension and debarment officials on the 5-day entry and contractor identification number requirements; (2) ensure that the EPLS database requires contractor identification numbers for all actions entered into the system; (3) strengthen EPLS search capabilities to include common search operators, such as AND, NOT, and OR; (4) take steps to ensure that the EPLS points of contact list is updated; and (5) place a warning on the Federal Supply Schedule Web site indicating that prospective purchasers need to check EPLS to determine whether vendors are excluded, and explore the feasibility of removing or identifying excluded entities that are listed on the GSA Schedule.

In written comments on a draft of this report, GSA concurred with all five recommendations and agreed to use the report’s findings to strengthen controls over the Excluded Parties List System. GSA’s comments are reprinted in appendix III. As part of its response, GSA outlined actions it plans to take or has taken that are designed to address the recommendations. However, most of the actions described do not achieve the intent of these recommendations. In several instances, GSA simply restated its current policies and procedures instead of agreeing to take steps to oversee the completeness of EPLS and ensure that exclusions are properly enforced.
Based on our investigation, if GSA is not more proactive in its management of the system, suspended and debarred companies will continue to improperly receive taxpayer dollars.

For example, in response to our recommendation to issue guidance to procurement officials on the requirement to check EPLS prior to awarding contracts and to suspension and debarment officials on the 5-day entry and contractor identification number requirements, GSA does not plan to take any new actions. Instead, GSA cited FAR requirements already in place and pointed to a two-paragraph section of the EPLS Frequently Asked Questions (FAQ) Web page that existed prior to our investigation. GSA considers the FAQ to be support for closing this recommendation. However, our investigation clearly demonstrates that, despite the existence of this FAQ, agencies are not always checking EPLS prior to awards or entering exclusions in a timely or complete fashion. Moreover, at our corrective action briefing, GSA officials noted, and the other agencies agreed, that most of these problems could be solved through improved training and guidance. If GSA and the other agencies continue to operate the EPLS system as they have, we believe suspended and debarred companies will continue to be able to do business with the government. Therefore, we do not consider the GSA FAQ to be sufficient support to close this recommendation.

In response to our recommendation that GSA ensure that the EPLS database requires contractor identification numbers for all actions entered into the system, GSA maintains that it made the entry of DUNS numbers in EPLS mandatory for organizations and contractors on June 29, 2007. GSA does not plan to take any additional actions and believes that this 2007 action closes the recommendation. However, our investigation clearly demonstrates that EPLS entries for firms lacked contractor identification numbers after June 29, 2007. Specifically, we found that 38 (9 percent) of the 437 firms entered into EPLS between June 29, 2007, and January 23, 2008, did not have any information listed in the DUNS field. We also found that for 81 additional firms entered into EPLS during the same period, the excluding agency entered a DUNS number of “000000000” or some other nonidentifying information. Therefore, 119 firms in total—27 percent—lacked an identifiable DUNS number. In addition to DUNS numbers, the FAR also states that excluding agencies should enter an employer identification number (EIN), other taxpayer identification number (TIN), or a Social Security number (SSN), if these numbers are available and disclosure is authorized. Department of Defense agencies may also enter a Commercial and Government Entity (CAGE) code. However, none of these identification numbers are mandatory in EPLS, and the data reliability assessment we conducted at the start of our work showed that they are rarely entered. Without unique identification information, agencies are forced to rely on name and address matches, making it extremely difficult to definitively identify an excluded party when making awards. Consequently, we continue to believe that GSA should take further steps to ensure that the EPLS database requires, at a minimum, contractor identification numbers for all actions entered into the system. We do not consider the recommendation to be closed.
In response to our recommendation to strengthen EPLS search capabilities to include common search operators, such as AND, NOT, and OR, GSA noted that EPLS now supports these operators and provided a link to the advanced search tips help site. We observed that EPLS search capabilities have improved since we concluded our investigation. However, there is no link to the advanced search tips site on the EPLS front page, so users may not be able to readily access this information. Specifically, users must first click on “search help,” which provides a list of basic tips, and then scroll down to find the advanced search tips link. Therefore, we consider this recommendation to be open.

In response to our recommendation to take steps to ensure that the EPLS points of contact list is updated, GSA explained that while it maintains responsibility for updating the list, it is the responsibility of each agency to notify GSA of any changes to its individual point of contact information. GSA also mentioned that each agency’s responsibility has been addressed at the Interagency Suspension and Debarment Committee and EPLS Advisory Group meetings. In addition, GSA stated that EPLS includes semiannual automated notifications to verify agency points of contact and that the EPLS help desk also provides support in identifying current information in response to public user reports of outdated point of contact information. As we noted in our report, the EPLS Web site has a directory, covering 59 agencies and listing 78 different individuals, for use when additional follow-up is needed. However, we were unable to contact suspension and debarment personnel at 15—about 25 percent—of the agencies with listed points of contact. For example, we initially found that 19 of the phone numbers listed were disconnected or otherwise nonfunctioning. In addition, we found that 6 points of contact were completely incorrect; in one instance, the individual listed had been retired for 5 years. As of February 11, 2009, the date of GSA’s agency comment letter, our follow-up work shows that the majority of these inaccuracies still existed on the EPLS agency contact list. Therefore, it appears that the steps GSA mentions in its comment letter have been ineffective. Although we recognize that agencies have a responsibility to provide GSA with up-to-date information, we think it is reasonable for GSA to proactively manage the completeness and accuracy of the list, especially since GSA knows, as a result of our investigation, that the list has significant errors. In short, we do not consider GSA’s actions to be sufficient to close the recommendation.

Finally, we recommended that GSA place a warning on the Federal Supply Schedule Web site indicating that prospective purchasers need to check EPLS to determine whether vendors are excluded and also explore the feasibility of removing or identifying excluded entities that are listed on the GSA Schedule. In response, GSA outlined proposed actions that it believes warrant closing the recommendation. These actions include (1) adding reminders to eCommerce systems to ensure that purchasers are aware of excluded parties prior to placing orders, (2) establishing and placing messages within the Web sites to remind purchasers to check EPLS, and (3) providing direct access links to the EPLS Web site within the GSA Advantage, eBuy, and eLibrary sites so that purchasers have easy access to the system. We support these planned improvements; however, they only address part of our recommendation.
With regard to the second part of our recommendation—exploring the feasibility of removing or identifying excluded entities—GSA reiterated the process for terminating a contractor’s Schedule contract without actually stating any actions it would take to address the vulnerability we found. During our investigation, we identified several excluded parties on the Schedule, including a body armor manufacturer that had been debarred for the egregious offense of falsifying tests related to the safety of its products. As shown by this finding, there is currently no way to alert prospective purchasers that a specific Schedule contractor is excluded. We continue to believe it is important for GSA to explore the feasibility of proactively removing or identifying excluded parties that are listed on the Schedule. Therefore, we consider the recommendation to be open.

As arranged with your office, we plan no further distribution until 5 days after the date of this report. At that time, we will send copies of this report to the Administrator of General Services and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. For further information about this report, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

To substantiate the allegation that businesses and individuals improperly received federal funds despite being excluded for egregious offenses, we first obtained a database from the General Services Administration (GSA) of all Excluded Parties List System (EPLS) records that were active between October 1, 2001, and January 23, 2008. This database contained over 125,000 records and included the following fields: unique record identifier, entity name, Social Security number (SSN), taxpayer identification number (TIN), entity classification, Commercial and Government Entity (CAGE) code, exclusion type, cause and treatment code, full address, Data Universal Numbering System (DUNS) number, debarring agency, date of action, date of termination, delete date, archive/current status, and description. We matched the 11,432 DUNS numbers available in EPLS against DUNS numbers appearing in the Federal Procurement Data System-Next Generation (FPDS-NG) for fiscal years 2006 and 2007. Because not all records within EPLS contain DUNS numbers, we also matched these databases by vendor address. We focused our efforts on identifying parties that (1) were excluded governmentwide for egregious offenses such as fraud, false statements, theft, and violations of selected federal statutes and (2) received new contracts in excess of $1,000 during the period of their exclusion. Our objective was not to determine, and we did not have data to determine, the total number of individuals and businesses in EPLS that received new federal awards during their exclusions or the total dollar value of improper awards.
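A simplified sketch of this two-stage match follows. It is illustrative only: the file paths and column names are hypothetical, and the real EPLS and FPDS-NG extracts are larger and messier. Records without a usable DUNS number fall back to a normalized vendor-address match, mirroring the approach described above.

```python
# Illustrative two-stage match of EPLS exclusions against FPDS-NG awards.
# File paths and column names are hypothetical, not the actual extracts.
import pandas as pd

epls = pd.read_csv("epls_extract.csv", dtype=str)
fpds = pd.read_csv("fpds_ng_fy2006_2007.csv", dtype=str)

def usable_duns(duns: pd.Series) -> pd.Series:
    """True where a DUNS is present, nine digits, and not a filler value
    such as the all-zero entries we observed in EPLS."""
    nine_digits = duns.str.fullmatch(r"\d{9}").fillna(False).astype(bool)
    return nine_digits & (duns != "000000000")

# Stage 1: match on DUNS number where both sides have a usable one.
duns_matches = epls[usable_duns(epls["duns"])].merge(
    fpds[usable_duns(fpds["duns"])], on="duns", suffixes=("_epls", "_fpds"))

# Stage 2: for EPLS records lacking a usable DUNS, match on a normalized
# vendor address instead.
def address_key(addr: pd.Series) -> pd.Series:
    return addr.str.upper().str.replace(r"[^\w ]", "", regex=True).str.strip()

epls_no_duns = epls[~usable_duns(epls["duns"])].assign(key=address_key(epls["address"]))
fpds_keyed = fpds.assign(key=address_key(fpds["vendor_address"]))
address_matches = epls_no_duns.merge(fpds_keyed, on="key", suffixes=("_epls", "_fpds"))

print(f"{len(duns_matches)} DUNS matches, {len(address_matches)} address-only matches")
```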
To develop case studies, we performed investigative work on a nonrepresentative selection of the contractors that received new awards in excess of $1,000 during their period of exclusion. The investigative work included obtaining and analyzing public records and criminal histories and conducting interviews. However, we did not conduct an exhaustive investigation of these parties’ business and financial transactions, nor could we determine the total dollar value of improper awards they received.

To identify the key causes of the improper awards identified in our case studies, we analyzed matches between EPLS and FPDS-NG, obtained and reviewed agency documentation related to exclusion actions, and obtained and evaluated agency justifications for awards made to excluded parties. We did not conduct a comprehensive review of each agency’s internal controls.

To assess the reliability of EPLS data provided by GSA, we (1) reviewed control totals provided by GSA, (2) matched a sample of records provided by GSA to records located at EPLS’s Web site to determine whether the data were exported correctly, (3) performed electronic testing of the required data elements for obvious errors in completeness, and (4) interviewed agency officials knowledgeable about the data. As a result of the electronic testing, we found missing and illogical entries in required data fields. In addition, EPLS information may have been incomplete for our purposes because of the loss of historical record information. We found several instances in which the action date of an existing record was changed, effectively deleting all evidence of the original record. Agency EPLS users can modify almost all information related to existing records, and should an agency need to amend or update an entity’s suspension or debarment record, EPLS does not archive the record that was altered. We were able to confirm this issue with GSA. We found the data to be insufficiently reliable for determining how many excluded parties received new federal awards during their period of exclusion, because of the number of missing entries in certain data fields and the lack of a historical archive that results from record modifications; however, the data were sufficient to identify case studies for further investigation.

We conducted our audit work and investigative work from December 2007 through November 2008. We conducted our audit work in accordance with U.S. generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. The first 15 cases, numbered 1 through 15, are listed in table 1.
To protect the government’s interests, any agency can exclude (i.e., debar or suspend) parties from receiving federal contracts or assistance for a range of offenses. Exclusions of companies or individuals from federal contracts or other funding are listed in the Excluded Parties List System (EPLS), a Web-based system maintained by GSA. Recent allegations indicate that excluded parties have been able to receive federal contracts. As a result, GAO was asked (1) to determine whether these allegations could be substantiated and (2) to identify the key causes of any improper awards and other payments detected. GAO investigated parties that were excluded for offenses such as fraud, theft, and violations of federal statutes and that received awards in excess of $1,000.

Businesses and individuals that have been excluded for egregious offenses ranging from national security violations to tax fraud are improperly receiving federal contracts and other funds. GAO developed cases on a number of these parties and found that they received funding for a number of reasons, including because agency officials failed to search EPLS or because their searches did not reveal the exclusions. GAO also identified businesses and individuals that were able to circumvent the terms of their exclusions by operating under different identities. GAO’s cases include the following: (1) The Army debarred a German company after its president attempted to ship nuclear bomb parts to North Korea. As part of the debarment, the Army stated that since the president “sold potential nuclear bomb making materials to a well-known enemy of the United States,” there was a “compelling interest to discontinue any business with this morally bankrupt individual.” However, the Army told GAO it was legally obligated to continue the contract and paid the company over $4 million in fiscal year 2006. In fact, the Army had several options for terminating the contract, but it is not clear whether these options were considered. (2) The Navy suspended a company after one of its employees sabotaged repairs on an aircraft carrier by using nonconforming parts to replace fasteners on steam pipes. If these pipes had ruptured as a result of faulty fasteners, those aboard the carrier could have suffered lethal burns. Less than a month later, the Navy improperly awarded the company three new contracts because the contracting officer did not check EPLS.

Most of the improper contracts and payments GAO identified can be attributed to ineffective management of the EPLS database or to control weaknesses at both excluding and procuring agencies. For example, GAO’s work shows that entries may contain incomplete information, the database has insufficient search capabilities, and the points of contact for information about exclusions are incorrect. GAO also found several agencies that did not enter exclusions and others that did not check EPLS prior to making awards. Finally, GAO found that excluded parties were still listed on GSA’s Federal Supply Schedule, which can result in agencies purchasing items from unscrupulous companies. To verify that no warnings exist to alert agencies that they are making purchases from excluded parties, GAO used its own purchase card to buy body armor worth over $3,000 from a company that had been debarred for falsifying tests related to the safety of its products.
The U.S. Department of Education’s OCR is a law enforcement agency. Its primary responsibility is to ensure that recipients of federal financial assistance do not discriminate—on the basis of race, color, national origin, sex, disability, or age—against students, faculty, or other individuals in educational programs and activities. OCR is responsible for enforcing the following federal civil rights laws as they relate to schools at all levels: title VI of the Civil Rights Act of 1964, which prohibits discrimination on the basis of race, color, or national origin; title IX of the Education Amendments of 1972, which prohibits discrimination on the basis of sex in education programs and activities; section 504 of the Rehabilitation Act of 1973, which prohibits discrimination on the basis of disability; the Age Discrimination Act of 1975, which prohibits discrimination on the basis of age; and title II of the Americans With Disabilities Act of 1990, which prohibits public entities from discriminating on the basis of disability.

The civil rights laws OCR enforces extend to a wide range of recipients of federal funds. These recipients include all state education and rehabilitation agencies as well as nearly every school district and postsecondary school; thousands of proprietary schools, libraries, museums, and correctional facilities; and other institutions that receive federal financial assistance from Education.

To ensure equal opportunity in the nation’s schools, OCR carries out its civil rights responsibilities through a variety of compliance activities. OCR’s principal activity is the resolution of discrimination complaints, and most of its staff resources are devoted to such activities as processing, conciliating, and investigating complaints. In an effort to ensure that recipients of federal financial assistance meet their civil rights compliance responsibilities, OCR also conducts compliance reviews, monitors corrective action plans, and provides technical assistance. Compliance reviews differ from complaint investigations in that they are initiated by OCR, they usually cover broader issues, and they affect significantly larger numbers of individuals. OCR selects review sites on the basis of information from various sources that indicates potential compliance problems, including survey data and information provided by complainants, interest groups, the media, and the general public.

In fiscal year 1995, OCR’s staff ceiling was 833 full-time-equivalent positions and its total funding level was $58.2 million. During fiscal year 1994, about 5,300 complaints were filed with OCR; of these, 27 percent were filed against postsecondary schools. Until fiscal year 1994, the number of compliance reviews that OCR was able to conduct was inversely related to the number of complaints received and the workload they engendered. Because OCR’s complaint workload increased from fiscal years 1988 to 1993, the number of compliance reviews OCR initiated fell from 247 in fiscal year 1988 to 138, 32, 41, 77, and 101 in the succeeding years. During fiscal year 1994, OCR started 153 compliance reviews, with about 25 percent directed at postsecondary schools. Of the 153 reviews, 62 percent involved race or national origin issues; 17 percent involved gender issues; 8 percent involved disability combined with other issues; 7 percent involved other issues; and 6 percent involved solely disability issues. In fiscal year 1995, OCR started about 100 compliance reviews.
Our review of the 13 identified cases was hampered by the absence of complete documentation in OCR’s official case files. OCR has policies in place delineating the documents that should be included in the official case files in the regional offices; it had no similar policies for the official case files in headquarters. Actions that took place in headquarters were not always documented and included in regional case files. According to OCR officials, records pertaining to headquarters activity for these 13 cases were maintained in a chronological filing system—rather than a case file system—that suited the needs of headquarters staff. The lack of documentation hindered our ability to determine the reasons for delays in completing complaint investigations and compliance reviews.

Generally, while the 13 cases were in the OCR regional offices, the official case files were relatively complete, with documents periodically updated to describe investigation and review activities and the results of these efforts. When an investigation or review reached the point at which OCR headquarters became actively involved, however, the official regional files were seldom updated with pertinent notations or documents. Furthermore, official case files were not developed or maintained in OCR headquarters. As a result, we could not trace the full chronology of events for these cases by examining case files. In addition, even when the official case files were updated with documents, we could not always determine what decisions were made or why extended delays occurred because the documents often did not include such information. Because of such gaps in knowledge, the full chronology of many of the cases could not be developed. (See app. II for a brief description of each of the 13 cases; see table II.1 for a summary of the 13 cases.)

Eleven of the 13 cases involved Asian-American men and women; one was a complaint by a white woman; and another was a complaint by a white man. In addition, several of the cases, although focusing primarily on Asian-Americans or Asian-Indians, also dealt with other minority groups. In analyzing how these 13 cases were investigated and resolved, we found that OCR generally followed its established policies and procedures. But OCR did not always meet timeliness standards, as discussed in detail in appendix II. As of September 1995, two complaint investigations remained open. Of the 13 cases, only 4 were closed within OCR’s time frames. The cases that took the most time to complete were admissions compliance reviews, which generally involve complex issues and take more resources to complete, or complaints that dealt with complicated or controversial issues, such as admissions or race-targeted financial aid.

Two admissions cases demonstrate the demands that individual cases can make on resources because of the volume of data that must be gathered and analyzed: (1) the compliance review of the University of California at Los Angeles (UCLA) undergraduate schools concerning discrimination against Asian-Americans and the affirmative action program, and (2) the complaint investigation of the University of California at Berkeley undergraduate programs concerning discrimination against white students. Both cases involved premier schools of the University of California system. The two schools enroll, between them, 67,000 students annually.
Both investigations entailed several site visits, comprehensive statistical analyses of data for tens of thousands of applicants, and extensive interviewing and reviews of applicant files. Both schools completely changed their admissions processes during the course of the investigations, necessitating additional extensive investigation. The same regional office that conducted both investigations also completed, during the same time, a compliance review involving admission to the UCLA graduate schools. To complete that review, the regional office investigated, in detail, 40 individual admissions programs; reviewed 2,000 applicant files; and interviewed more than 200 witnesses. The demands of class admissions cases such as these impose unique challenges.

During fiscal years 1988 to 1994, OCR’s overall workload, as well as that for complaints under title VI of the Civil Rights Act of 1964, increased. During this period, OCR resolved complaints and completed compliance reviews in less than 180 days on average. OCR does not have a standard definition of an “overage” case, but it uses 180 days as a benchmark for assessing timeliness. However, the average time to resolve complaints and complete compliance reviews concerning Asian-Americans in postsecondary schools generally was longer than the averages for cases concerning other minority groups. For complaint investigations, Asian-American cases took longer to complete, on average, than those for any other minority group. For compliance reviews, only cases involving class actions (cases affecting groups of students) and multiple title VI issues (one complaint alleging multiple issues, namely race and national origin) took more time, on average, to complete than Asian-American cases. The data indicated that this occurred partly because (1) Asian-Americans were involved in admissions cases more often than other minority groups and (2) admissions cases generally require more resources and time to complete than other types of cases. In addition, according to the data, OCR’s investigations and reviews involving Asian-Americans resulted in relatively more violation findings leading to remedial action or changes by postsecondary schools.

In providing the information and statistics concerning these complaint investigations and compliance reviews, OCR cautioned that the data do not represent the various factors that may affect case resolution. These factors include the volume of data that must be collected and the data analyses that must be conducted; the scope, complexity, and number of issues in a case; and the availability of information needed to resolve the issues. The statistical profile also does not reflect the extent to which any average may be unduly influenced by a single case of unusual duration.

For fiscal years 1988-94, OCR completed 1,511 complaint investigations in an average of 128 days each (see table III.1). The 114 cases involving Asian-Americans took an average of 175 days to complete. In contrast, the 931 cases involving African-Americans averaged 125 days to complete, and the 165 cases involving Hispanics averaged 137 days. The 106 cases involving minority whites (those from Eastern Europe, Southern Europe, and the Middle East) averaged 98 days to complete. During fiscal years 1988-94, 248 of the 1,511 complaint investigations were admissions cases; that is, the complaints involved allegations that people applying for admission to postsecondary schools were turned down for discriminatory reasons.
According to OCR officials and the statistics, the 248 admissions cases took longer to complete, an average of 174 days. The 40 admissions cases involving Asian-Americans took 297 days, on average, to complete. The 115 admissions cases involving African-Americans took 129 days, on average, to complete. The 31 admissions cases involving Hispanics took 276 days, on average, to complete. During this period, OCR took an average of 119 days to resolve 1,263 non-admissions complaints. The average time needed to resolve Asian-American non-admissions complaints was 108 days; this was quicker than the averages for complaints involving African-Americans, 125 days, and "others," 127 days. Hispanics' non-admissions complaints, however, averaged 105 days to resolve, and minority whites' complaints averaged 83 days, both less than the Asian-American average. The average time to complete complaint investigations involving Asian-Americans increased during fiscal year 1994, when OCR took an average of 304 days to complete 24 investigations. Of these, eight were admissions cases, which took an average of 602 days to complete. The average time to complete complaint investigations involving admissions issues was higher for all minority groups than for investigations that did not involve admissions issues.

We examined these data further to determine the extent to which the OCR investigations found violations and resulted in benefits to the complaining party or in changes by postsecondary schools to remedy violations. OCR data included four categories as benefiting the complainant or resulting in changes by the postsecondary schools: (1) remedial action agreed to by the complainant, the school, and OCR; (2) remedial action completed by the school; (3) complaint withdrawn by the complainant with changes made by the school; and (4) administrative closure by OCR after changes were made by the school. We found that of the total 1,511 cases, 214 (14 percent) resulted in findings supporting the complainants' allegations or in changes by the schools. For admissions cases, however, 58 of the 248 (23 percent) resulted in benefits or changes, while for non-admissions cases, 156 of the 1,263 (12 percent) did. We also examined these data according to minority groups; 22 of the 114 complaints (19 percent) filed by Asian-Americans resulted in benefits or changes (see table III.2). This was the highest percentage for any minority group. Furthermore, 16 of the 40 (40 percent) admissions cases involving Asian-Americans resulted in benefits to the complainant or changes made by the postsecondary school. This was also the highest percentage of any minority group.

In summary, during fiscal years 1988-94, OCR took more time, on average, to complete complaint investigations for Asian-Americans than for cases involving other minority groups. At the same time, Asian-Americans filed a higher percentage of complaints involving admissions issues than other minority groups; these complaints resulted in benefits to the complainant or changes by the postsecondary schools in a higher percentage of cases than for other minority groups.

During the first 9 months of fiscal year 1995—that is, from October 1, 1994, to June 30, 1995—OCR completed a total of 258 complaint investigations; the average time needed to resolve these cases was 121 days. Of these, 13 involved Asian-Americans and took an average of 302 days to complete. One case that took 1,776 days to complete skewed the average.
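Two rough computations, using the rounded averages reported above, illustrate how these figures fit together; because the reported averages are rounded, the results are approximate. First, the overall 128-day average for fiscal years 1988-94 is simply the weighted average of the admissions and non-admissions figures:

\[
\frac{248 \times 174 + 1{,}263 \times 119}{1{,}511} = \frac{193{,}449}{1{,}511} \approx 128 \text{ days.}
\]

Second, removing the single 1,776-day case from the fiscal year 1995 figures shows how strongly one outlier can pull an average:

\[
\frac{13 \times 302 - 1{,}776}{12} = \frac{2{,}150}{12} \approx 179 \text{ days,}
\]

that is, the other 12 Asian-American cases averaged roughly 179 days, far closer to the 121-day average for all groups.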
In contrast, the 154 complaints filed by African-Americans took an average of 111 days to complete, and the 37 complaints filed by Hispanics, 84 days. Of the 258 complaint investigations in the first 9 months of fiscal year 1995, 36 resulted in benefits to the complainant and averaged 264 days to complete. Seven of these were admissions cases; the other 29 were not. The 222 complaint investigations that did not result in benefits to the complainant took an average of 98 days to complete. Of the 13 Asian-American cases, 3 were admissions cases that resulted in benefits to the complainants. These took 73, 151, and 1,776 days to complete. The 10 other Asian-American cases that did not result in benefits to complainants took an average of 192 days to complete. See table III.3 for a complete summary, by minority group, of the complaints investigated from October 1, 1994, to June 30, 1995.

For fiscal years 1988-94, OCR completed 58 compliance reviews, averaging 174 days each. The four cases involving Asian-Americans took 195 days, on average, to complete. The 23 compliance reviews involving African-Americans took 120 days, on average, to complete. The 23 compliance reviews involving class actions, however, took an average of 223 days to complete; those involving multiple title VI issues, 213 days. (See table III.4.) Of the 58 compliance reviews completed, 39 involved admissions issues. For Asian-Americans, three of the four reviews were admissions cases. For African-Americans, 16 of 23 reviews were admissions cases, and 14 of the 23 class action compliance reviews were admissions cases. As with complaint investigations, the compliance reviews involving admissions issues generally took more time, on average, to complete than the reviews involving other issues. During fiscal years 1988-94, 67 percent of the compliance reviews completed involved admissions issues; therefore, the average time to complete these reviews significantly affected the average time to complete all compliance reviews.

We examined these data further to determine the extent to which the OCR compliance reviews found violations and resulted in remedial action to benefit affected minority groups or changes by the postsecondary schools to remedy violations. For compliance reviews, OCR had only two categories to track these results: (1) remedial action agreed to by the schools and OCR and (2) administrative closure, with changes made by the schools. As shown in table III.5, 28 of the 58 completed compliance reviews resulted in remedial action or changes made by the postsecondary schools after violations were found. (Of the 28, only 2 were administrative closures—1 Hispanic case and 1 class action case.) Of the 39 admissions reviews, over 56 percent resulted in remedial action or change; of the 19 non-admissions reviews, about 32 percent did. Of the four compliance reviews involving Asian-Americans, three resulted in remedial action or change; all three involved admissions issues. For Hispanics, both completed reviews, one of which was an admissions case, resulted in remedial action or change. More important, a high percentage of the reviews involving each minority group resulted in benefits to that group—especially when the focus of a review involved admissions issues.

During fiscal year 1994, OCR completed four compliance reviews. Three of these involved African-Americans and one was a class action case. None of the four involved Asian-Americans.
The average time to complete the four reviews was 178 days. The one review involving African-Americans that led to remedial action or change by the school took 438 days to complete. During the first 9 months of fiscal year 1995—that is, from October 1, 1994, to June 30, 1995—OCR completed 11 compliance reviews; all of these involved admissions issues, averaging 245 days each to complete. None focused on Asian-Americans; six involved African-Americans; three involved class actions; and two involved multiple title VI issues. Five of the reviews resulted in benefits to minority groups or changes by schools. These five reviews took an average of 257 days to complete.

OCR considers cases that are open for 180 days or more to be "overage," that is, to have taken too much time to complete. We compared overage data for both complaint investigations and compliance reviews as of May 21, 1993, when the current Assistant Secretary for Civil Rights assumed her position; as of September 30, 1994; and as of June 30, 1995. From May 1993 to September 1994, the number of pending complaint investigations over 180 days old declined from 167 to 122, or 27 percent. In addition, the number of investigations over 500 days old declined from 77 to 34, which significantly decreased the average age of these long-term cases (see table III.6). According to OCR data, by June 30, 1995, the number of overage complaint investigations had declined to 100. Of these, 26 were over 500 days old. Of the 167 overage complaints that were pending in May 1993, 15 remained pending as of June 30, 1995.

From May 1993 to September 1994, the number of overage compliance reviews increased from 10 to 18. We could not determine why this increase occurred, but it may have resulted from the increased number of compliance reviews that OCR initiated during the 1990s. Specifically, in 1990, OCR started 32 compliance reviews. In fiscal years 1991-94, the number of such reviews increased to 41, 77, 101, and 153, respectively. As shown in table III.7, as of September 30, 1994, of the 18 overage compliance reviews, 14 had been open for less than 600 days and 6 of these were less than 300 days old. As of June 30, 1995, the number of overage compliance reviews was 14, and 4 of these had been open for less than 300 days.

During fiscal years 1994 and 1995, OCR implemented several administrative changes to (1) improve its operations overall and (2) revise the planning and conduct of complaint investigations and compliance reviews as well as the documentation required in the official files. These changes included revising procedures to minimize the preparation of unnecessary documents during investigations and reviews, delegating more authority to the regional offices for decisions on most kinds of cases, and tracking and managing active cases to help ensure that they are completed in a timely and efficient manner. In its fiscal year 1994 annual report, which was sent to the Congress in April 1995, OCR stated that to further improve operations it had initiated or implemented several other changes under four broad categories: (1) setting priorities, (2) reengineering the approach to respond to individual discrimination complaints, (3) improving technology, and (4) initiating innovative approaches to deploy OCR staff to increase efficiency and effectiveness. It is too soon, however, to determine whether the changes implemented and planned will significantly improve the timeliness, documentation, and quality of OCR's operations over the long term.
According to OCR, by focusing attention on setting priorities, it will improve timeliness and maximize the impact of available resources on civil rights in schools. To ensure that it addresses the most acute problems of discrimination, OCR will consider as broad a range of information as practical in setting priorities. OCR also stated that it will devote more resources to helping schools—as well as students and parents—learn to solve the problem of securing equal access to quality education; it will also focus on systemic education reform, which enables communities throughout the nation to understand, commit to, and implement strategies that provide opportunities for all to learn. Finally, OCR officials said that OCR planned to have its revised strategic plan developed by October 1, 1995. Under this plan, OCR will move from a reactive system—almost exclusively responding to complaints—to a balanced enforcement approach that proactively targets resources for maximum impact. To implement this approach, beginning in fiscal year 1996, OCR will work to ensure that 40 percent of its resources are dedicated to proactive measures, including priority policy development, high-impact compliance reviews, and targeted technical assistance.

OCR has stated that it has fundamentally reengineered its approach to responding to individual complaints of discrimination. These changes move OCR from a required investigative approach to a flexible resolution approach. This approach is described in OCR's updated Case Resolution Manual (CRM) issued in November 1994. CRM expanded the reasons for closing complaints and reduced paperwork by no longer requiring an investigative plan, an investigative report, and a letter of findings (LOF) for each case. CRM introduced the concept of a case resolution letter to inform complainants of OCR's determinations and provided that LOFs be issued only in limited circumstances; that is, in cases in which (1) a violation is found and negotiation is unsuccessful, (2) a no-violation LOF would serve an important policy function, or (3) a no-violation LOF would have the value of setting a precedent. The revised procedures also require OCR to inform affected parties in complaint cases every 60 days of the status of the cases. All regional employees have received case resolution training based on the new approach. According to OCR officials, preliminary data show improvement in case resolution timeliness and, anecdotally, in customer satisfaction. Under the new approach, OCR expects to resolve more discrimination complaints with fewer staff.

Improved Technology Used

When OCR's mainframe-based case-tracking system proved inflexible for the new case resolution process, a team created a personal-computer-based system. Users and developers continue to work together to perfect the system and ensure that needed data are provided quickly and efficiently to line staff, managers, and external users. Two additional technology initiatives were started in fiscal year 1994: to network and provide electronic communication among all of OCR's regional offices and to provide on-line access to critical case-resolution resources through an OCR electronic library. As of September 1995, of OCR's 10 regional offices, 6 were on line and linked with OCR headquarters as part of the electronic network. OCR officials plan to have all regional offices on the network by the end of fiscal year 1996.
For the staff linked through the network, OCR policies, survey information, and case-processing data are available electronically. In addition, these OCR staff can communicate with each other electronically. Eventually, OCR officials said, the public will also have access, as appropriate, to the information on the network.

OCR has developed plans to redeploy staff to improve productivity. In this regard, OCR's goals are to deliver a stronger civil rights enforcement program; focus energy on internal and external customer service; reduce formal layers of review; and assign the maximum number of staff to program activities (as an element of this plan, OCR will have at least one-third of the headquarters staff assigned to case resolution activities).

In October 1993, employees in Region II (New York) began a pilot program to improve the region's operations and service to customers. Region II had long been an example of OCR's traditional hierarchical structure. Under the pilot, Region II reorganized its staff into teams to carry out OCR's assigned responsibilities. According to OCR, this new organizational structure takes full advantage of the teamwork approach and eliminates most levels of review. The traditional regional structure involved eight or more review levels. The new structure envisions teams handling most of the work of the office, with only a few select documents being forwarded to the regional director level of review. OCR stated that the new approach emphasizes service, support, teamwork, and collegiality, within the boundaries of focused leadership, and deemphasizes review and control approaches to management.

OCR reported that Region II had accomplished major changes through its new approach of using teams. OCR established criteria for measuring success in terms of efficiency, quality of work products, and improved morale. According to OCR, data collected on a pilot group and a control group showed major improvements in these areas. For example, the team approach reduced the average number of days to resolve a complaint from 169 days to 129 days, a 24 percent improvement. All offices started moving toward a team-based structure in September 1994. In June 1995, OCR Region VII (Kansas City) announced it had reorganized its staff into case resolution teams, similar to those in Region II, and thereby changed the way in which complaint investigations and compliance reviews are planned and conducted. OCR expects all regional offices and the headquarters office to reorganize similarly by January 1996.

With respect to the specific cases involving Asian-Americans we were asked to review, OCR's investigations of the 11 closed cases appear to be consistent with the policies and procedures in effect at the time, except for timeliness. However, because OCR's official case files did not always record activities that took place in headquarters, we relied in part on OCR officials' explanations of delays. OCR generally took longer to resolve these specific cases, as well as other cases involving Asian-Americans, than it took to resolve cases involving other minority groups. This can be explained by the relatively large number of time-consuming admissions cases, violations, and corrective actions associated with Asian-American cases. Recent administrative changes initiated by OCR appear to be at least partly responsible for improvements in OCR's timeliness in resolving cases.
However, the changes have not been in place long enough for us to assess their long-term impact on the timeliness, documentation, and quality of OCR's investigations and compliance reviews.

The Assistant Secretary for Civil Rights in the Department of Education provided written comments on a draft of this report (see app. IV). She stated that OCR's recordkeeping procedures required that case files be maintained in the regional offices and include documents related to the investigation or review. She added that these established procedures did not require that the regional files include documentation of all case activity at headquarters. According to the Assistant Secretary, records pertaining to headquarters activity for the 13 cases we reviewed were maintained in a chronological filing system—rather than a case file system—that suited the needs of OCR headquarters staff. She stated that these records describing headquarters activity on the 13 cases were available in the chronological filing system during our review. We found that the established OCR recordkeeping procedures regarding the regional offices were as described by the Assistant Secretary and that the 13 case files we reviewed were generally complete in describing case activities until OCR headquarters became involved. At headquarters, however, activities involving the cases, such as teleconferences and data analysis, were not captured in the chronological files. Moreover, while documents on individual cases may be filed chronologically, the documents do not usually explain the delays. As a result, we had to rely on oral statements by OCR headquarters staff for most of the information on the chronology of events while the cases were being worked on at OCR headquarters. When provided with documents relating to OCR headquarters activities, decisions, or guidance, we considered the information in our analysis.

The Assistant Secretary generally agreed with the section of the draft report that compared the timeliness and outcomes of cases involving Asian-Americans with those of cases involving other racial groups. She pointed out that a few individual cases that took a long time to resolve could unduly skew the results of our statistical analysis of case-processing times. She also asked us to qualify parts of our report to show that OCR cases involving Asian-Americans did not always take the most time to resolve or complete and to highlight that, generally, for Asian-American cases OCR found more violations, which led to remedial action by postsecondary schools and benefits to the complainants. We revised our report, as necessary, to reflect the Assistant Secretary's comments and concerns.

In her comments, the Assistant Secretary stated that OCR initiated the numerous administrative changes discussed in our report to improve overall operations generally as well as case processing specifically. She noted that OCR data show that since the administrative changes were undertaken, the number and percentage of cases for all levels of education pending over 180 days have decreased, not only those for postsecondary schools. She also provided statistical evidence covering fiscal years 1990-94 to show that as a result of the administrative changes, even though the total number of complaints received and compliance reviews started had both increased, OCR had resolved greater numbers of both, and in a more timely manner, than in the past.
Because our review focused only on complaint investigations and compliance reviews under title VI of the Civil Rights Act involving postsecondary schools, we did not revise our report to include these data on OCR's overall operations. The Assistant Secretary also provided technical comments on specific statements and facts included in our draft report, and where appropriate we used the information to clarify and update our report.

Unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from its issue date. At that time, we will send copies to appropriate congressional committees, the Secretary of Education, and other interested parties. We will make copies available to others on request. This report was prepared under the direction of Larry Horinko, Assistant Director, (202) 512-7001; Susan Poling, Assistant General Counsel, and Laurel Rabin, Communications Analyst, also contributed to the report.

For our overall timeliness examination, we analyzed computer files of all OCR complaint investigations and compliance reviews closed from October 1, 1987, through June 30, 1995, that focused on allegations of discrimination at postsecondary schools (colleges and universities) under title VI of the Civil Rights Act of 1964. We also studied OCR's Investigative Procedures Manual, which was in effect from June 1987 until November 1993. The manual describes the procedures OCR staff are expected to follow in an investigation, including time frames for completion and the documents and records to be produced. The manual covers most case-related activities but is not intended to cover all the circumstances that could arise in the investigation of a case. Specific sections were updated periodically, as necessary. The Investigative Procedures Manual was replaced on November 30, 1993, by the Complaint Resolution Manual, which changed many of the procedures and documents to be produced. We studied the Complaint Resolution Manual as well as OCR's updated Case Resolution Manual issued in November 1994. We also studied relevant policy documents concerning major court decisions as well as admissions and affirmative action issues in postsecondary schools.

Finally, we examined the official case files, compiled and maintained by OCR's regional offices, for 13 specific cases. We did this to determine the chronology of events while the cases were being processed, whether delays occurred during the investigations and reviews, and whether the decisions and resolutions of certain cases had a basis in policy and law. We did not substitute our judgment for OCR's. For these 13 cases, OCR headquarters officials said no official case files had been established in headquarters, so little documentation was available when the cases were sent to OCR headquarters for additional statistical analyses, legal review, or management review. As a result, we were unable to obtain or develop a complete chronology of events for some cases after they left the regional offices; instead, we had to rely on explanations by OCR headquarters officials as to what delays occurred and which issues were under review.

To determine the timeliness and outcomes of OCR's complaint investigations and compliance reviews for Asian-Americans as compared with other minority groups, we obtained data tapes and printed reports from OCR covering fiscal years 1988 through 1995.
These tapes summarized the data according to minority groups or other categories of cases, such as class action and multiple title VI cases. Our study included closed and pending cases for each fiscal year and, for the closed cases, the presence or absence of violations of nondiscrimination laws. We used this information to determine the cases that resulted in (1) benefits to complainants or minority groups or (2) changes by postsecondary schools to their affirmative action programs or to their policies and procedures to remedy violations.

OCR headquarters officials provided us with various manuals, policies, and procedures, which had been developed from May 1993 through June 1995, after the current Assistant Secretary for Civil Rights was appointed. She changed many administrative practices affecting how OCR carries out its complaint investigations and compliance reviews. Some of these policies and procedures have been implemented; others are still in the planning stages. To determine whether the administrative changes would improve OCR operations in conducting complaint investigations and compliance reviews, we studied the documents provided and considered the explanations of OCR officials. Our work was conducted from March 1994 to August 1995 in accordance with generally accepted government auditing standards.

This appendix includes brief descriptions and chronologies of the 13 cases that Representative Rohrabacher asked us to review and, to the extent that information was available, why OCR's investigations and reviews were delayed. The information presented is based on available documentation in OCR case files and comments and explanations made by OCR officials. The type of case, the date the complaint investigation or compliance review was opened, the date a letter of findings (LOF) was issued or the case was closed (or whether the case was still pending), and the total time to respond, in months, are given in table II.1.

In addition, this appendix provides information on specific issues: (1) the circumstances that caused OCR to revise its findings of discrimination 3 years after the original LOF was issued in regard to the University of California at Los Angeles (UCLA) graduate mathematics program (case no. 09-89-6004); (2) whether OCR followed established policies and procedures in reaching its no-violation decision regarding the University of California (UC) at San Diego case (case no. 09-92-2002); and (3) whether OCR's decision to administratively close the Santa Clara University School of Law case (case no. 09-93-2027) was consistent with established policy. In conducting the case file reviews, we focused our attention on whether OCR's decisions were based on law and policy, but we did not substitute our judgment for that of OCR. We also provide information on our review of the other cases that were administratively closed and whether OCR followed its policies and procedures with regard to time frames.

On October 19, 1989, Representative Rohrabacher, Representative Gingrich, and Mr. Duncan Hunter, Chairman of the Republican Research Committee, wrote to the Department of Justice about the admissions program at Boalt Hall, the law school of the University of California at Berkeley; Justice referred this letter to OCR on October 26, 1989. OCR provided its report on the case to the requesters on April 4, 1990, and informed them that OCR would conduct a compliance review based on the information collected.
According to an OCR regional official, this case involved complicated legal issues concerning a race-based waiting list, and preliminary documents raised serious questions about compliance. The OCR regional office conducted its review and submitted to OCR headquarters a draft investigative report and draft LOF in November 1990. The regional office case file did not document events from the November 1990 submission to headquarters to the signing of the voluntary compliance and settlement agreement on September 25, 1992. From November 1990 to 1992, headquarters had concerns about the statistical analyses, and there were numerous discussions about all aspects of the case. OCR officials stated that the region began settlement negotiations in January 1992. OCR officials also stated that during this time Boalt Hall was in transition with a newly appointed dean. As a result, 26 months elapsed from when the compliance review was initiated until the voluntary compliance and settlement agreement was signed. OCR's procedures at that time stated that an LOF should be issued within 90 calendar days from the date of the first site visit.

Since November 1990, Boalt Hall has (1) revised its admissions and waiting list procedures and (2) submitted required annual reports to OCR describing how these changes have been implemented. After receiving the third annual report in November 1994, OCR declared that Boalt Hall was in compliance and that OCR monitoring and activities would cease.

In January 1988, OCR regional staff began a compliance review of the admissions practices of all 84 departments with graduate programs at UCLA. UCLA was targeted because preliminary information indicated that although UCLA had a large number of Asian-American applicants, the overall admission rate for Asian-Americans was lower than the overall rate for whites in many programs and because the Department of Justice had received a number of inquiries concerning the University of California system. Each graduate department had its own admissions policy. After obtaining preliminary information and analyzing computerized data on all departments, OCR targeted 40 departments for in-depth file reviews based on statistical analyses of admission rates, grade point averages, and other possible indicators of discrimination. From the beginning, data collection was a problem because not all departments had retained 3 years of admissions data.

OCR headquarters officials were involved in the decisions on the scope and approach of the compliance review from the start. OCR officials stated that OCR had not previously undertaken an admissions review comparable in magnitude to the UCLA admissions review, and a number of approaches and means of resolution were explored during the review. Documents indicate that throughout this review, many differences had to be worked out between OCR headquarters and OCR regional staff. These differences included the targeting of departments, the comparison of Asian-American and white admissions, and whether violations were found during the investigation of the 40 admissions programs targeted for in-depth review. OCR's first site visit was in April 1989, more than a year after it informed UCLA that it would be initiating a compliance review. During that year, OCR set out the scope of the review, identified the information UCLA had available, and identified how admissions decisions were made for individual graduate programs.
OCR officials noted that the review was extensive, covering 84 graduate programs, not just the Mathematics Department eventually cited. In its LOF of October 1, 1990, OCR found UCLA in violation of title VI of the Civil Rights Act of 1964 because of its admissions practices for the graduate Mathematics Department. In particular, OCR found that the department had discriminated against five Asian-American applicants who, if provided equal treatment under admissions standards articulated by the department, should have been accepted. OCR deemed UCLA's three different explanations of admissions decisions, given over more than a year, to be a pretext for discrimination.

UCLA disagreed with OCR's findings. UCLA asserted that OCR (1) misunderstood the department's initial evaluation rating system, which was just a recommendation to the vice-chair, and (2) failed to interview the vice-chair, who actually made the admissions decisions but was on sabbatical when OCR first visited the Mathematics Department in 1989 and 1990. UCLA expanded the statistical analysis and produced statistics showing no difference in admission rates between whites and Asian-Americans when applicants were grouped by numerical ratings of "3.0 and above" and "below 3.0." OCR had limited its comparison to a group of whites who had been admitted and a group of Asian-Americans who had been denied admission. In its expanded group comparison, UCLA showed that there were 22 white applicants in the same rating range (that is, ratings of 2.4 and above) as the three OCR-identified Asian-Americans who were denied admission based on the use of the same criteria. UCLA maintained that three admitted whites in that group had substantially higher academic qualifications than the three rejected Asian-Americans OCR identified.

OCR based its violation LOF partially on the fact that the different explanations by UCLA officials regarding admissions decisions were a pretext for discrimination. Just days before the LOF was issued, OCR officials learned that the vice-chair who had actually made the admissions decisions had not been interviewed; UCLA's first and second explanations concerning admissions to the Mathematics Department program were provided by officials who knew little about the actual admissions criteria used. OCR interviewed the vice-chair before the LOF was issued but found that his explanations could not fully account for all admissions decisions. OCR issued the LOF without bringing its concerns to UCLA's attention for further explanation. Later investigation showed that OCR staff placed great importance on the numerical ratings developed by the Mathematics Department's Admissions and Support Committee. In fact, however, admissions committee members would rate candidates as "admit Ph.D." despite numerical ratings below the level required for admission to the department.

The regional office continued its negotiations with UCLA and conducted a post-LOF site visit, including examination of the admissions files, on February 27, 1991, 4 months after the LOF was issued. This review of the files was more comprehensive than any prior review. In particular, the review was expanded to consider unsuccessful white applicants and successful Asian-American applicants. OCR found that the admissions decisions were cumulative in nature, with various objective and subjective factors weighed against each other by the vice-chair.
OCR also found that overall undergraduate grade point average was of little or no consequence, although it was used in the ratings. The grade point average for math courses was pertinent, and grades received in particular math courses were very important. The applicant's "statement of purpose" was also important because the department rejected applicants who suggested that their ultimate career goals were outside math. In addition, applicants from less renowned schools were at a competitive disadvantage; they needed strong letters of recommendation from professors known to UCLA faculty.

The supplemental investigation showed that OCR had not fully understood the criteria it was given by UCLA officials in September 1990. For example, one of the criteria given was that an applicant's stated interest in applied mathematics would enhance the applicant's position. The October 1990 LOF stated that OCR's examination of files had not verified this criterion. During the supplemental examination, however, OCR discovered that the boost was not for all candidates interested in applied math but only for certain subareas, particularly for applicants in computational fluid dynamics and those already working in the defense industry. The supplemental examination also found that the department did not hold master's degree applicants to the same standard as Ph.D. applicants. The regional office found that, at the outset, it had received wrong information from university and Mathematics Department officials. In reexamining files and expanding the examination to those of lower-ranked Asian-Americans who had been admitted, OCR found that these applicants also benefited from the application of subjective admissions criteria. Further review showed only two possible examples of discrimination. Both involved students within the range of white applicants admitted and white applicants rejected. Both cases of possible discrimination were weak: one applicant had a substantially lower quantitative Graduate Record Examination (GRE) score than anyone admitted, and the other had a combination of low GRE scores, a degree from an unknown school, and a stated interest in obtaining a certified public accountant license, a career goal outside mathematics.

The regional office submitted a revised investigative report to headquarters on July 23, 1991, in which it concluded that UCLA's Mathematics Department was not in violation of title VI and recommended the withdrawal of the violation LOF. On December 26, 1991, the Deputy Assistant Secretary for Policy concurred and suggested revisions of the draft investigative report to the regional office. The regional office and headquarters spent the next 20 months exchanging drafts of the revised LOF. On August 8, 1993, OCR issued a revised LOF concerning the Mathematics Department. It stated that, because of new evidence, OCR had revised its original findings and concluded that no violation had occurred. However, OCR required the Mathematics Department to keep records of its admissions decisions for the 1994-95 academic year.

Under its required time frames, OCR should have issued its LOF within 90 days of the first site visit and initiated formal enforcement action within 180 days. However, OCR did not issue its LOF until 18 months after its first site visit in April 1989 and never initiated formal enforcement action.
This compliance review was initiated for UCLA's undergraduate schools in January 1988 because of the same factors taken into account in initiating the compliance review of UCLA's graduate programs (see the previous case). OCR headquarters was involved in this review from the start. During this review, OCR had continuing problems obtaining usable data from the university. For example, OCR originally requested 5 years of admissions data, but UCLA could only provide data for 2 years. The data tapes UCLA provided were not compatible with OCR's system. Although the statistical analyses division in OCR headquarters first became involved with the university's data in 1989, it could not complete its work until early 1993. According to OCR, data analysis was hindered because (1) UCLA originally sent hard copy, which proved insufficient, instead of computer tapes; (2) UCLA objected to providing certain data; and (3) the data could not be interpreted without obtaining the master files from UCLA and identifying and sorting the codes and variables. Because of the enormous number of admissions applications processed each year by UCLA, the data were extensive and time-consuming to analyze.

After the OCR regional office completed its site work in April 1989 and drafted its investigative report, UCLA changed its admissions policy but did not inform OCR immediately. OCR then reinterviewed university officials and prepared a revised draft investigative report. UCLA again changed its admissions policy in 1990. As a result, OCR had to request updated data from UCLA for 2 additional academic years. Because of the various factors affecting this case, the investigative plan for this review was not made final until January 1990—2 years after the review started. From January 1990 through late 1993, OCR undertook investigative work, statistical analyses, and legal analyses in both the region and headquarters. In November 1993, a draft investigative report on the UCLA School of Letters and Science was prepared, but it was never made final or sent. In February 1994, OCR sent a letter to UCLA requesting additional data, but UCLA did not provide the data within the time frames set out by OCR. OCR ultimately determined that additional data and analysis were not needed to reach a resolution of the case. In August 1994, the region sent a draft LOF to OCR headquarters for review.

In September 1995, OCR issued a no-violation LOF to close the case. OCR found that UCLA had not (1) established quotas or admissions limits for Asian-American applicants or (2) discriminated against Asian-American applicants. OCR also determined that UCLA's affirmative action plan complied with title VI. From the date the case was opened in January 1988 until it was closed in September 1995, 92 months elapsed, making this the lengthiest of the 13 cases that Representative Rohrabacher asked us to review.

A white woman alleged in May 1992 that the City University of New York (CUNY), York College discriminated against her on the basis of race because she was denied admission to the licensed practical nurse to registered nurse articulation program (referred to as the LAP program). The LAP program is part of the Collegiate Service and Technology Entry Program, a New York State program authorized by law to increase the enrollment and retention of economically disadvantaged or minority students in programs that lead to professional licensure and employment in scientific, technical, health, and health-related professions.
By law, eligibility is limited to New York State residents who meet those qualifications. Also, a potential applicant seeking enrollment in the LAP program must meet several requirements dealing with licensure, testing, nursing experience, and basic skills; the applicant must also be either from a designated minority group (African-American, Hispanic, Native American, or Alaskan native) or meet the economic eligibility criteria.

OCR's investigation, begun in June 1992, revealed that the complainant contacted the college in early May 1992 and requested information about the LAP program. The complainant later contacted the LAP program director and was informed of the admissions criteria. After talking to the complainant, the program director determined that she was not eligible economically or under the minority criterion. OCR's investigation showed that the complainant did not submit a written application. Title VI of the Civil Rights Act of 1964 allows for consideration of race in admissions policies and programs when race is not the sole criterion. Admissions programs in which economic disadvantage and race are two of the possible criteria for admission have been held valid under title VI. Accordingly, OCR found that CUNY was in compliance with federal law with respect to the issue. All work on this case was done by OCR Region II (New York) staff. Although the case was open from June 1992 until January 1993—about 7 months—it had been "tolled" from July 29, 1992, until October 8, 1992, while OCR waited for CUNY to provide detailed admissions data. That is, the case was kept open, but the time frames were suspended pending the delivery of the requested data. OCR met its time frames for this case in accordance with its Investigative Procedures Manual.

Representative Rohrabacher filed this complaint in December 1992, based primarily on a May 1991 article in a San Jose, California, newspaper. Representative Rohrabacher's complaint referred to a commentary written by the dean of Santa Clara's Law School and alleged that the admissions standards for the 1990 entering law school class were substantially different for different races. Representative Rohrabacher alleged that the law school appeared to have a track system of admissions that insulated some applicants, on the basis of race, from competition with other applicants. OCR acknowledged receiving the complaint letter on December 16, 1992, and asked Representative Rohrabacher to provide additional information about the alleged discrimination; OCR noted that the complaint would be closed in 45 days if additional information was not provided. None was provided, and OCR subsequently closed the case administratively, that is, without investigation, on February 19, 1993.

Before closing the case, OCR reviewed the news item Representative Rohrabacher had attached for facts to support his statements that (1) the admissions standards substantially differed for different races and (2) Santa Clara had, in effect, a track system that insulated some applicants from competing with others. OCR noted that the article reported grade point averages and Law School Admission Test (LSAT) scores showing that composite scores for two minority groups were lower than those for the class as a whole. OCR found that those statistics did not provide a sufficient basis for it to identify an issue of discrimination under the laws OCR enforces.
OCR had issued a policy interpretation explaining that affirmative action programs in admissions cannot have set-asides based on race or ethnicity. However, OCR also stated that race could be used as a "plus" factor in admissions processes and that nothing in the article gave evidence of a quota system, a track system, or a cap by group. OCR also followed its Investigative Procedures Manual section I.A.4(a), which listed the elements of a "complete complaint." A complete complaint includes (1) a description of the discrimination alleged to have occurred, (2) some indication of the factual bases for a complainant's belief that the discrimination has occurred, and (3) sufficient detail to enable OCR to identify the issues raised under the laws it enforces. OCR did not find the news item to contain sufficiently detailed information. According to OCR officials, OCR did not communicate with Representative Rohrabacher or his staff other than through these two letters and received no additional information concerning this complaint.

Representative Rohrabacher filed this complaint in October 1991 with OCR, partly on the basis of a San Diego newspaper article dealing with eight Filipino-American high school students from California who had problems gaining admission to UC San Diego. Representative Rohrabacher charged that it appeared that about 40 percent of the places in the freshman class were reserved for applicants of certain races, while applicants of other races, including Filipino-Americans, were excluded from competing for those places. He added that this seemed to be a quota based on race that illegally discriminated against Filipino-Americans and possibly applicants from other races.

OCR began its investigation in October 1991 and followed its standard investigative procedures, including the time frames found in its Investigative Procedures Manual, in acknowledging the letter, developing an investigative plan, conducting its investigation, and drafting its investigative report. On April 3, 1992, the draft investigative report was submitted to headquarters for review. Although headquarters review was not standard practice at that time, the cover note from the regional director indicated that the issues raised in the complaint involved OCR's fiscal year 1991 national enforcement strategy. In addition, admissions questions dealing with affirmative action are more sensitive than most other issues, according to the note.

The policy unit at headquarters prepared a memorandum on the investigative report and forwarded the case to the Deputy Assistant Secretary for Policy on July 17, 1992. The regional office chronological file indicates some conversations between headquarters and regional staff in August 1992, but there is no other record of actions on the case until April 1993. OCR headquarters staff stated that the case file was apparently "lost" in the Deputy Assistant Secretary's office for 10 months, from summer 1992 to April 1993. The OCR tracking system at that time assigned deadlines until cases reached the Assistant Secretary's or Deputy Assistant Secretary's office but did not track cases or assign deadlines in those offices. After the case resurfaced in April 1993, the policy unit again reviewed it and drafted another memorandum, but no further progress occurred until November 1993, when headquarters staff provided oral comments to the regional office on the draft investigative report during a conference call.
A no-violation LOF was issued within 3 months of that call, but that was almost 2 years after the investigative report was sent to headquarters from the regional office. From the time the case was first submitted to headquarters in April 1992 until the LOF was issued, more than 23 months had elapsed: about 3 months were attributable to the regional office and 20 months to headquarters. OCR's Investigative Procedures Manual at that time, however, stated that the LOF should be issued within 135 calendar days.

OCR's investigation found no evidence that the university's admissions system used for fall 1991 operated as a quota system, nor did it find that the university reserved 40 percent of its places for students of a particular race or national origin. OCR found that one aspect of the appeals process used in the admissions system in 1991 was inconsistent with OCR's policy interpretation because the appeals process was not narrowly tailored. However, the university had already modified this admissions appeals process before OCR completed its investigation. OCR also examined whether Filipino-American students were affected by this admissions appeals process. It found only one student who potentially was adversely affected. OCR determined that this student did not meet the minimum requirements for admission and that his chances of success at the university were so low that further review was not warranted.

The official file for this case included pertinent documentation from October 1991 until April 1992, when the regional office staff did their work. After the case was forwarded to headquarters, few documents were added to the file, and little information was included in the official case file to show the issues that headquarters staff were considering.

An Asian-Indian man alleged discrimination on the basis of national origin because the University of Texas had failed to give equal consideration to Asian-Indian applicants, as compared with the consideration given to African-American and Hispanic applicants, in admission to the School of Law. The complainant had a 3.5 college grade point average and an LSAT score in the 68th percentile and had worked as an intern in the district attorney's office in Harris County, Texas. The complainant filed his complaint after applying to the law school and being rejected for admission twice.

OCR Region VI staff initiated an investigation in November 1992 and obtained information from the complainant and the university during January 1993. OCR was advised of a pending class action suit against the university in February 1993. OCR determined that the class action suit involved the same issues as those in the charge filed with OCR by this complainant even though the complainant was not a party to the suit. Therefore, in accordance with its Investigative Procedures Manual section IV.B.2(b), OCR advised the complainant in May 1993 that its investigation was being tolled until the litigation was resolved. That is, the case would be kept open, but the time frames were suspended pending the outcome of litigation.

In November 1993, OCR revised its investigative procedures. Under the new procedures, complaints that involve issues in pending litigation are closed, and the complainant is informed that he or she may refile the complaint following termination of the court proceeding. In mid-January 1994, OCR sent a letter to the complainant informing him of the scheduled trial date and advising him that the case was being closed.
The complainant was also informed that he could refile his complaint within 60 days following the termination of the court proceeding if there was no decision on the merits or settlement of the complaint allegations. This accorded with the revised procedures found in the Case Resolution Manual, section I.H.5. The complainant did not refile his complaint.

A Chinese-American woman filed a complaint in May 1988 against UC Berkeley alleging that she had been discriminated against on the basis of national origin because she had been denied admission to the School of Optometry. OCR Region X (Seattle) worked on the case for about 10 months. In March 1989, it sent a letter to the complainant, advising her that on the basis of the evidence gathered during the investigation, OCR did not anticipate that it could substantiate the complainant's allegations of discrimination. This letter was not an LOF, and the complaint was not closed at this time. Instead, because of questions raised during the investigation regarding the School of Optometry's affirmative action program, headquarters directed Region X in July 1989 to investigate the affirmative action plan in the School of Optometry. Headquarters indicated that Region X could either issue a partial LOF on the individual complainant's facts or address all issues in a single LOF. Region X chose the latter option.

OCR performed a statistical analysis of 1988 admissions data, but OCR headquarters later decided to also review 1989 and 1990 admissions data. The region conducted an additional site investigation and submitted a draft investigative report and LOF to headquarters on October 9, 1991. Headquarters conducted additional statistical analyses, held several conference calls with the regional office, and reviewed applicant files that it had obtained from the region. On January 6, 1994, headquarters returned the case to the regional office with comments, and on February 17, 1994, the final LOF was issued.

OCR exceeded its established time frames for this case. The OCR standard in effect at the time the case was initiated was that an LOF be completed within 105 calendar days; this investigation took about 69 months to complete. OCR officials explained that much of the case-processing time was associated with extensive statistical analyses of the affirmative action issue and the issue of possible discrimination against Asian-Americans as a class, with data covering a 3-year period.

A white male veteran alleged in July 1992 that the University of Hawaii at Manoa had discriminated against him on the basis of race by denying him admission to its law school. The complainant alleged that places were set aside for particular minorities and that the minorities admitted to the law school had lower qualifications than the nonminorities rejected. The complainant objected to the university's preadmissions program, which accepts into a 1-year program 12 students from among disadvantaged applicants or members of ethnic groups underrepresented in the Hawaii Bar. The complainant further claimed that his "unique veteran experiences" should be considered in offsetting his relatively low academic standing and application test scores. In the course of initiating its investigation on August 13, 1992, OCR learned that the complainant had filed suit in U.S. District Court in Hawaii on July 7, 1992. An OCR representative informed the complainant that OCR's procedure was to defer its investigation until litigation concerning the same allegations was resolved.
OCR tolled the case from August 25, 1992, until February 18, 1993. In January 1993, the court dismissed the case because the plaintiff (that is, the complainant) failed to show that his rejection was the result of the preadmissions program. The court found that the plaintiff simply did not meet the university's law school admissions criteria. His grade point average and LSAT score were below the median and far below those of other accepted applicants. No one, including those admitted under the preadmissions program, had an LSAT score as low as the plaintiff's. Furthermore, he was from a noncompetitive school. Two months later, on March 11, 1993, OCR administratively closed the case.

Under OCR's Investigative Procedures Manual, a case should be closed if OCR (1) obtains information indicating that the issue raised has been resolved in a manner consistent with title VI of the Civil Rights Act and (2) determines that there are no remaining issues appropriate for investigation. Section IV.A.2(d) of the manual states that cases in which the same issues involving the same complainant have been subject to a decision by a federal court may be closed. OCR actually closed the case under section IV.A.2(g), which states that if OCR obtains information indicating that the issues raised by the complaint have been resolved, OCR should determine if there are current issues appropriate for investigation; if not, the case should be closed. OCR determined that the issues raised in the OCR complaint had been resolved in accordance with title VI standards and that there were no outstanding issues in the complaint that had not been addressed. OCR officials indicated that the case was closed because (1) the judge determined that, because of his low LSAT scores and poor academic record, the complainant lacked standing to challenge the preadmissions program and (2) this was an individual complaint. Although OCR could have continued the class issue of whether the preadmissions program violated title VI, it was not required to do so. The complainant had not made any specific allegation on behalf of individuals other than himself. OCR did not reach any conclusion regarding whether any admissions program was legal or illegal. OCR officials stated that the allegations the complainant presented were insufficient by themselves to raise a class issue or to show that a discriminatory practice existed. OCR officials stated there were no unresolved issues appropriate for investigation.

A Chinese-American woman applied to the University of California at Davis' medical school and was denied admission even though she had a 3.94 grade point average, had participated in many extracurricular activities, and had received several awards. She alleged that the medical school discriminated against her because she was Asian-American. OCR's regional office investigated the allegations from November 1991 to April 1992, drafted an investigative report, and forwarded it to OCR headquarters for review. From April 1992 through November 1992, additional statistical data on admissions to the medical school were requested and analyzed at OCR headquarters. From November 1992 until May 1993, there was no apparent activity in the case. During summer 1993, another draft investigative report was prepared.
In November 1993, during a telephone conference call between OCR regional staff and OCR headquarters officials, the final issues of this case were worked out; shortly afterward, a draft LOF was prepared and submitted to OCR headquarters for review in January 1994. The LOF was issued on March 21, 1994. OCR headquarters officials explained that part of the delay in closing this case occurred because the Deputy Assistant Secretary was concerned about the affirmative action plan at the university; he wanted to make sure that the plan had not influenced the university’s decision to reject the complainant. The case file included complete documentation and explanations of case activity from when the complaint was filed until November 1992. However, the official case file, which is kept in the regional office, included no other documents until the no-violation LOF was issued in March 1994. OCR exceeded the established time frames for this case. The standard in effect at the time the case was initiated was that an LOF be completed within 135 calendar days; this investigation took about 28 months to complete. OCR officials noted that much of the length of this case was attributable to the complexities and sensitivity of the affirmative action issues and the extensive statistical analysis that was conducted.

A journalist filed complaints during 1989 with OCR about UC Berkeley, Harvard, and UCLA; each was a separate OCR case. OCR was already investigating admissions programs at Harvard and UCLA. In the Berkeley case, the complainant charged that too many underrepresented minorities, Asian-Indians, and Filipinos were being admitted to UC Berkeley and too few qualified Asian-Americans and whites were being admitted. He criticized the university’s affirmative action program. He also alleged that underrepresented minority students were being segregated into the UCLA and UC Berkeley campuses and away from the other UC campuses. Originally, the investigation initiated in May 1989 was to cover the academic years beginning in 1987, 1988, and 1989. As time went by, however, additional years were added to the investigation because the university changed its admissions policies and OCR’s preliminary findings were no longer current. According to OCR, obtaining usable data from the university was also a problem throughout the investigation. Over time, OCR conducted 10 site visits. In addition to the on-site work done by the OCR regional staff, the OCR headquarters surveys and statistical support branch, beginning in August 1991, analyzed university data on several occasions and issued two reports summarizing its work. The case file showed no activity on the case from August 1993 until July 1994. In July 1994, OCR requested more data from the university. In October 1994, OCR wrote a follow-up letter to the university again requesting data. As of September 1995, this case was still open. OCR officials told us that substantive changes occurred in the admissions policy in 1990, 1991, 1992, and 1994. OCR conducted additional on-site interviews to obtain clarification of the admissions changes taking place.

An Asian-Indian man filed two complaints with OCR after being denied admission to the University of Wisconsin at Madison’s Law School in 1991 and 1992. He alleged that the university had discriminated against him and other Asian-American applicants for its Legal Education Opportunity Program (LEOP) because other minority groups were automatically eligible whereas Asian-Americans were not.
LEOP offered special admissions and need-contingent, race-targeted financial aid. The case file for the first complaint included data that were obtained during OCR’s investigation from April 1991 to March 1992. This investigation was still in progress when the second complaint was filed in July 1992. The case file for the second complaint included data obtained during OCR’s investigation from July 1992 to October 1992. No documents appeared in the case file from October 1992 until August 1994. On August 11, 1994, OCR issued a closeout letter to the complainant, which broke down the two complaints into three issues: (1) complainant was denied admission to law school in February 1991 because of race, national origin, and retaliation; (2) complainant was denied admission to law school in February 1992 because of race, national origin, and retaliation; and (3) LEOP denied Asian-Americans automatic consideration for financial aid and admission. In the letter, OCR stated that it had found insufficient evidence to support the first two individual allegations but that it would make a separate determination on the third allegation, which was a class issue. OCR officials said that these cases were delayed because they dealt with race-targeted financial aid issues, which OCR was in the process of reexamining. OCR headquarters officials explained that the OCR regional office was directed to hold its LOF until the policy statement was issued. The directive was communicated orally, so no documents were included in the case files, an official said. Although OCR conducted part of the investigation in 1991 and 1992, OCR waited until the policy statement on race-targeted financial aid became effective in May 1994 to finalize its investigation. The LOF on the first two allegations was issued less than 3 months after the policy guidance took effect. The class issue has taken more time. OCR decided that additional facts were needed to determine if LEOP complied with title VI in light of the new guidance. In March 1995, OCR requested more data from the university on the class issue, and in July and August 1995, the university submitted additional data. As of September 1995, the case was still open.

An Asian-American man filed a complaint on behalf of his son, alleging that the Massachusetts Institute of Technology (MIT) discriminated against Asian-Americans by admitting less qualified applicants from other races and nationalities. The complainant cited a newspaper article that reported how five poor Hispanic students from Texas had been accepted by MIT and provided details of their high school grades and Scholastic Aptitude Test (SAT) scores. The complainant also contended that Asian-Americans as a class were treated differently in the admissions process and believed that MIT had set a quota on the number of Asian-Americans that would be accepted. Beginning in June 1993, OCR investigated the complaint by reviewing pertinent documents and records and interviewing various involved parties. OCR found no violations and issued its LOF on April 22, 1994. OCR exceeded the established time frames for this case. The standard in effect at the time the case was initiated was that an LOF be completed within 135 calendar days of when a complete complaint was filed; this investigation took about 18 months to complete.
The case file did not include information to explain (1) the delay between when the complaint was filed and when the investigation began and (2) the reasons it took so long to complete the investigation and issue the LOF. OCR officials told us that some of the delay in initiating the investigation was attributable to the region’s efforts to coordinate the investigation of this case with other admissions cases that had been filed in Region I (Boston) and Region II (New York). Officials also told us that the investigation needed to be carefully planned to avoid the extraordinary consumption of resources that a similar investigation at Harvard University had entailed.

[Table III.3: Title VI Complaint Investigations Resolved, by Minority Group. The table reported the number of cases and the average resolution time (in days) for each minority group; the table data could not be recovered from the source. Table notes: "Those from Eastern Europe, Southern Europe, and the Middle East." "Multiple title VI cases are those that include more than one title VI issue, that is, one case may include allegations about both race and national origin discrimination."]

The following are GAO’s comments on the Department of Education’s letter dated September 26, 1995.

1. Our point in this section was that OCR did not have a complete official file for every case that included documentation on all phases of a complaint investigation or compliance review, including actions and decisions by OCR headquarters officials. Education, in its comments, said that records pertaining to OCR headquarters activity in the 13 cases were maintained in a chronological filing system, rather than a case file system, that suited the needs of headquarters staff. At headquarters, however, activities involving these cases, like teleconferences and data analysis, are not captured in the chronological files. Moreover, while documents on individual cases may be filed chronologically, the documents did not usually explain delays. As a result, we had to rely on oral statements by OCR headquarters staff for much of the information on the chronology of events while the cases were worked on in OCR headquarters.

2. We acknowledged that for some cases, documents prepared by OCR headquarters were sometimes included in the regional office files. As we reported, however, often the actions, decisions, and deliberations that occurred in headquarters that led to the issuance of a letter of findings or other documents reflecting OCR’s official position on an issue were not included in the case files made available to us. Furthermore, reasons for delays of investigations and reviews were seldom documented at OCR headquarters; therefore, we had to rely largely on oral statements by headquarters officials for this information.

3. We agree that individual cases that took a long time to resolve would skew OCR’s average time for completing complaint investigations and compliance reviews. We also acknowledge that in reporting information and statistics on OCR’s timeliness in resolving its cases, we do not fully discuss all the factors that may affect the resolution of each case; for example, the legal complexities of a precedent-setting case or the great amount of analysis necessary in an admissions case. (See pp. 6 and 9.)

4. A paragraph discussing these data was added. (See p. 7.)

5. The caption on page 8 was revised.

6. Two sentences were added on page 9 to include additional information.

7. Our review dealt only with complaint investigations and compliance reviews in postsecondary schools that involved issues concerning title VI of the Civil Rights Act of 1964.
There was no need for revisions.

8. The report was revised. (See p. 2.)

9. No revision was needed.

10. The report was revised to include Education’s comment. (See p. 2.)

11. The report was revised to include updated information. (See p. 3.)

12. The report was revised to include correct percentages. (See p. 4.)

13. The report was revised to include updated information. (See p. 4.)

14. The report was revised to include the correct definition. (See p. 6.)

15. The report was revised to reflect Education’s comment. (See p. 11.)

16. The sentence was deleted because of updated information. (See p. 11.)

17. The report was revised because of updated information. (See p. 12.)

18. The report was revised to include additional information. (See p. 13.)

19. The report was revised to include updated information. (See p. 13.)

20. The report was revised. (See p. 14.)

21. The report was revised to include additional information. (See p. 24.)

22. The report was revised to include additional information. (See p. 24.)

23. The report was revised to include additional information. (See p. 28.)

24. No revision was needed.

25. No revision was needed.

26. The report was revised. (See p. 40.)
Pursuant to a congressional request, GAO reviewed the Department of Education's Office for Civil Rights' (OCR) handling of discrimination cases involving Asian-Americans who applied to or were enrolled in postsecondary schools, focusing on: (1) whether OCR followed established policies and procedures in conducting complaint investigations; (2) a comparison of the timeliness and outcomes of complaint investigations and compliance reviews for fiscal years 1988 through 1995 involving Asian-Americans and other minority groups; and (3) whether recent OCR administrative changes improved its performance and resolution of complaint investigations and compliance reviews. GAO found that: (1) OCR generally followed its established policies and procedures in the 13 cases reviewed, but 7 of the 11 resolved cases exceeded OCR time frames for resolution and OCR official case files did not adequately reflect headquarters actions; (2) although its staffing levels and other resources remained stable while discrimination complaints increased, OCR averaged less than 180 days in resolving most of its general workload; (3) OCR usually took more time on average to resolve Asian-American cases than other minority cases; (4) the longer resolution periods for Asian-American cases were partially due to the high percentage of cases involving admission issues and the greater number of violations per case, which took longer and more resources to resolve; (5) the resolution of Asian-American cases resulted in more corrective actions and changes made by postsecondary schools; (6) OCR has initiated administrative changes to improve its performance and resolution of complaints and compliance reviews, which include setting priorities, revising procedures to respond more flexibly to complaints and reduce unnecessary documentation, increasing its use of personal computers and networks to track cases, and redeploying staff to improve productivity; and (7) although OCR has reduced its case backlog, it is too soon to tell whether these administrative changes will significantly improve OCR's handling of complaint investigations and compliance reviews over the long term.
Each weekday, 11.3 million passengers in 35 metropolitan areas and 22 states use some form of rail transit (commuter, heavy, or light rail). Commuter rail systems typically operate on railroad tracks and provide regional service between a central city and adjacent suburbs. Commuter rail systems are traditionally associated with older industrial cities, such as Boston, New York, Philadelphia, and Chicago. Heavy rail systems—subway systems like New York City’s transit system and Washington, D.C.’s Metro—typically operate on fixed rail lines within a metropolitan area and have the capacity for a heavy volume of traffic. Amtrak operates the nation’s primary intercity passenger rail service over a 22,000-mile network, primarily over freight railroad tracks. Amtrak serves more than 500 stations (240 of which are staffed) in 46 states and the District of Columbia, and it carried more than 25 million passengers during fiscal year 2005.

Certain characteristics of domestic and foreign passenger rail systems make them inherently vulnerable to terrorist attacks and therefore difficult to secure. By design, passenger rail systems are open, have multiple access points, are hubs serving multiple carriers, and, in some cases, have no barriers so that they can move large numbers of people quickly. In contrast, the U.S. commercial aviation system is housed in closed and controlled locations with few entry points. The openness of passenger rail systems can leave them vulnerable because operator personnel cannot completely monitor or control who enters or leaves the systems. In addition, other characteristics of some passenger rail systems—high ridership, expensive infrastructure, economic importance, and location (large metropolitan areas or tourist destinations)—also make them attractive targets for terrorists because of the potential for mass casualties and economic damage and disruption. Moreover, some of these same characteristics make passenger rail systems difficult to secure. For example, the numbers of riders that pass through a subway system—especially during peak hours—may make the sustained use of some security measures, such as metal detectors, difficult because they could result in long lines that disrupt scheduled service. In addition, multiple access points along extended routes could make the cost of securing each location prohibitive. Balancing the potential economic impact of security enhancements with the benefits of such measures is a difficult challenge.

Securing the nation’s passenger rail systems is a shared responsibility requiring coordinated action on the part of federal, state, and local governments; the private sector; and rail passengers who ride these systems. Since the September 11th attacks, the role of federal agencies in securing the nation’s transportation systems, including passenger rail, has continued to evolve. Prior to September 11th, FTA and FRA, within DOT, were the primary federal entities involved in passenger rail security matters. In response to the attacks of September 11th, Congress passed the Aviation and Transportation Security Act (ATSA), which created TSA within DOT and defined its primary responsibility as ensuring the security of all modes of transportation, although its provisions focus primarily on aviation security. The act also gives TSA regulatory authority for security over all transportation modes. With the passage of the Homeland Security Act of 2002, TSA was transferred, along with over 20 other agencies, to the Department of Homeland Security.
The Intelligence Reform and Terrorism Prevention Act of 2004 requires the Secretary of Homeland Security, working jointly with the Secretary of Transportation, to develop a National Strategy for Transportation Security and transportation modal security plans. TSA issued the National Strategy for Transportation Security in 2005. In addition, the DHS National Infrastructure Protection Plan (NIPP) required the development of a Transportation Sector Specific Plan (TSSP). In accordance with the NIPP, a December 2006 Executive Order required the Secretary of Homeland Security to develop a TSSP by December 31, 2006, and supporting plans for each mode of surface transportation not later than 90 days after completion of the TSSP. According to the NIPP, sector-specific plans should, among other things, define the goals and objectives to secure the sector, assess the risks facing the sector, identify the critical assets and infrastructure and develop programs to protect them, and develop security partnerships with industry stakeholders within the sector. As of February 2007, TSA had not yet issued the TSSP or the supporting plans for each surface transportation mode.

Within DHS, OGT, formerly the Office for Domestic Preparedness (ODP), has become the federal source for security funding of passenger rail systems. OGT is the principal component of DHS responsible for preparing the United States against acts of terrorism and has primary responsibility within the executive branch for assisting and supporting DHS, in coordination with other directorates and entities outside of the department, in conducting risk analysis and risk management activities of state and local governments. In carrying out its mission, OGT provides training, funds for the purchase of equipment, support for the planning and execution of exercises, technical assistance, and other support to assist states, local jurisdictions, and the private sector to prevent, prepare for, and respond to acts of terrorism. OGT created and is administering two grant programs focused specifically on transportation security: the Transit Security Grant Program and the Intercity Passenger Rail Security Grant Program. These programs provide financial assistance to address security preparedness and enhancements for passenger rail and transit systems. During fiscal year 2006, OGT provided $110 million to passenger rail transit agencies through the Transit Security Grant Program and about $7 million to Amtrak through the Intercity Passenger Rail Security Grant Program. During fiscal year 2007, OGT plans to distribute $156 million for rail and bus security grants and $8 million to Amtrak.

While TSA is the lead federal agency for ensuring the security of all transportation modes, FTA conducts safety and security activities, including training, research, technical assistance, and demonstration projects. In addition, FTA promotes safety and security through its grant-making authority. FRA has regulatory authority for rail safety over commuter rail operators and Amtrak, and employs over 400 rail inspectors who periodically monitor the implementation of safety and security plans at these systems. State and local governments, passenger rail operators, and private industry are also important stakeholders in the nation’s rail security efforts. State and local governments may own or operate a significant portion of the passenger rail system.
Passenger rail operators, which can be public or private entities, are responsible for administering and managing passenger rail activities and services. Passenger rail operators can directly operate the service provided or contract for all or part of the total service. Although all levels of government are involved in passenger rail security, the primary responsibility for securing passenger rail systems rests with passenger rail operators.

Risk management is a tool for informing policy makers’ decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty. In recent years, the President, through Homeland Security Presidential Directives (HSPD), and Congress, through the Intelligence Reform and Terrorism Prevention Act of 2004, have called for federal agencies with homeland security responsibilities to apply risk-based principles to inform their decision making regarding allocating limited resources and prioritizing security activities. The 9/11 Commission recommended that the U.S. government identify and evaluate the transportation assets that need to be protected, set risk-based priorities for defending them, select the most practical and cost-effective ways of doing so, and then develop a plan, budget, and funding to implement the effort. Further, the Secretary of DHS has made risk-based decision-making a cornerstone of departmental policy. We have previously reported that a risk management approach can help to prioritize and focus the programs designed to combat terrorism. Risk management, as applied in the homeland security context, can help federal decision-makers determine where and how to invest limited resources within and among the various modes of transportation.

The Homeland Security Act of 2002 also directed the department’s Directorate of Information Analysis and Infrastructure Protection to use risk management principles in coordinating the nation’s critical infrastructure protection efforts. This includes integrating relevant information, analysis, and vulnerability assessments to identify priorities for protective and support measures by the department, other federal agencies, state and local government agencies and authorities, the private sector, and other entities. Homeland Security Presidential Directive 7 and the Intelligence Reform and Terrorism Prevention Act of 2004 further define and establish critical infrastructure protection responsibilities for DHS and those federal agencies given responsibility for particular industry sectors, such as transportation. In June 2006, DHS issued the NIPP, which named TSA as the primary federal agency responsible for coordinating critical infrastructure protection efforts within the transportation sector. In fulfilling its responsibilities under the NIPP, TSA must conduct and facilitate risk assessments in order to identify, prioritize, and coordinate the protection of critical transportation systems infrastructure, as well as develop risk-based priorities for the transportation sector.

To provide guidance to agency decision makers, we have created a risk management framework, which is intended to be a starting point for applying risk-based principles. Our risk management framework entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives.
DHS’s NIPP describes a risk management process that closely mirrors our risk management framework. Setting strategic goals, objectives, and constraints is a key first step in applying risk management principles and helps to ensure that management decisions are focused on achieving a purpose. These decisions should take place in the context of an agency’s strategic plan that includes goals and objectives that are clear and concise. These goals and objectives should identify resource issues and external factors that may affect their achievement. Further, the goals and objectives of an agency should link to a department’s overall strategic plan. The ability to achieve strategic goals depends, in part, on how well an agency manages risk. The agency’s strategic plan should address risk-related issues that are central to the agency’s overall mission.

Risk assessment, an important element of a risk-based approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application often involves assessing three key elements—threat, vulnerability, and criticality or consequence. A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. A criticality or consequence assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy, as a basis for identifying which structures or processes are relatively more important to protect from attack. Information from these three assessments contributes to an overall risk assessment that characterizes risks on a scale such as high, medium, or low and provides input for evaluating alternatives and management prioritization of security initiatives. The risk assessment element may be the largest departure from standard management steps in the overall risk management cycle and can be important in informing the remaining steps of the cycle.

DHS has made progress in assessing the risks facing the U.S. passenger rail system, but has not issued a plan based on those risk assessments for securing the entire transportation sector or supporting plans for each mode of transportation, including passenger rail. The DHS OGT developed and implemented a risk assessment methodology to help passenger rail operators better respond to terrorist attacks and prioritize security measures. Passenger rail operators must have completed a risk assessment to be eligible for financial assistance through the fiscal year 2007 OGT Transit Security Grant Program, which includes funding for passenger rail. To receive grant funding, rail operators are also required to have a security and emergency preparedness plan that identifies how the operator intends to respond to security gaps identified by risk assessments. As of February 2007, OGT had completed or planned to conduct risk assessments of most passenger rail operators.
According to rail operators, OGT’s risk assessment process enabled them to prioritize investments based on risk and allowed them to target and allocate resources towards security measures that will have the greatest impact on reducing risk across their rail systems. Further, we reported in September 2005 that TSA had not completed a comprehensive risk assessment of the entire passenger rail system. TSA had begun to assess risks to the passenger rail system, including completing an overall threat assessment for both mass transit and passenger and freight rail modes. TSA also conducted criticality assessments of nearly 700 passenger rail stations and had begun conducting assessments for other passenger rail assets such as bridges and tunnels. TSA reported that it planned to rely on asset criticality rankings to prioritize which assets it would focus on in conducting vulnerability assessments to determine which passenger rail assets are vulnerable to attack. For assets that are deemed to be less critical, TSA has developed a software tool that it has made available to passenger rail and other transportation operators for them to use on a voluntary basis to assess the vulnerability of their assets. We reported that, until all three assessments of passenger rail systems—threat, criticality, and vulnerability—had been completed, and until TSA determined how to use the results of these assessments to analyze and characterize the level of risk (high, medium, or low), it would be difficult to prioritize passenger rail assets and guide investment decisions about protecting them.

More recently, in January 2007, TSA reported taking additional actions to assess the risks facing the U.S. passenger rail system. For example, TSA reported that its surface transportation security inspectors are working with rail transit agencies to update risk assessments that FTA and FRA conducted after September 11, and that it is also conducting additional security assessments of rail transit agencies. TSA also expected that the 50 largest rail transit agencies would complete security self-assessments in early 2007. According to TSA, the agency is using the results of these assessments to set priorities and identify baseline security standards for the passenger rail industry. For example, the agency recently reported that it has identified underground and underwater rail infrastructure and high-density passenger rail stations as the critical assets most at risk. According to TSA, the agency prioritized a list of the underwater rail tunnels deemed to be at highest risk, and plans to conduct assessments of high-risk rail tunnels.

We also reported in September 2005 that DHS was developing, but had not yet completed, a framework intended to help TSA, OGT, and other federal agencies work with their stakeholders to assess risk. This framework is intended to help the private sector and state and local governments develop a consistent approach to analyzing risk and vulnerability across infrastructure types and across entire economic sectors, develop consistent terminology, and foster consistent results. The framework is also intended to enable a federal-level assessment of risk in general, and comparisons among risks, for purposes of resource allocation and response planning. DHS reported that this framework will provide overarching guidance to sector-specific agencies on how various risk assessment methodologies may be used to analyze, normalize, and prioritize risk within and among sectors.
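GAO’s framework and the NIPP describe risk assessment in qualitative terms and do not prescribe a formula for combining the three assessment elements. As a purely hypothetical illustration of the characterization step described above, the sketch below applies one common convention, scoring risk as the product of threat, vulnerability, and consequence ratings and bucketing the result into the high, medium, or low scale; the assets, ratings, and cut points are invented for the example and are not drawn from any DHS, TSA, or GAO assessment.

```python
# Hypothetical sketch only: neither GAO nor DHS prescribes this formula.
# One common convention scores risk as threat x vulnerability x consequence,
# each rated here on a 1-5 scale, and then characterizes the product as
# high, medium, or low.

from dataclasses import dataclass


@dataclass
class RailAsset:
    name: str
    threat: int         # likelihood an adversary targets the asset (1-5)
    vulnerability: int  # weakness exploitable by an identified threat (1-5)
    consequence: int    # criticality/impact if the asset is attacked (1-5)

    @property
    def risk_score(self) -> int:
        return self.threat * self.vulnerability * self.consequence


def characterize(score: int) -> str:
    # Cut points are arbitrary for illustration; the maximum score is 125.
    if score >= 60:
        return "high"
    if score >= 25:
        return "medium"
    return "low"


# Invented assets loosely echoing the categories named in this testimony.
assets = [
    RailAsset("underwater rail tunnel", threat=4, vulnerability=4, consequence=5),
    RailAsset("high-density passenger station", threat=4, vulnerability=3, consequence=4),
    RailAsset("suburban surface station", threat=2, vulnerability=3, consequence=2),
]

# Rank assets so the highest-risk ones surface first for investment decisions.
for asset in sorted(assets, key=lambda a: a.risk_score, reverse=True):
    print(f"{asset.name}: score={asset.risk_score} ({characterize(asset.risk_score)})")
```

In practice, DHS’s framework is intended to normalize such scores across asset types and economic sectors so that results are comparable for resource allocation, a step this sketch omits.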
We plan to assess DHS and DOT’s progress in enhancing their risk assessment efforts during our follow-on review of passenger rail security. Finalizing a methodology for assessing risk to passenger rail and other transportation modes and conducting risk assessments to determine the areas of greatest need are key steps required in developing a strategy for securing the overall transportation sector and each mode of transportation individually. However, TSA has not issued the required TSSP and supporting plans for securing each mode of transportation. According to TSA, the TSSP and supporting modal plans are in draft, but must be reviewed by DHS and the White House Homeland Security Council before they can be finalized. Until TSA issues the TSSP and modal plans, the agency lacks a clearly communicated strategy with goals and objectives for securing the overall transportation sector, including passenger rail.

In addition to ongoing initiatives to enhance passenger rail security conducted by FTA and FRA before and after September 11, 2001, TSA issued security directives to passenger rail operators after the March 2004 terrorist attacks on the rail system in Madrid. However, federal and rail industry stakeholders have questioned the extent to which these directives were based on industry best practices and expressed confusion about how TSA would monitor compliance with the directives. Since the completion of our work on passenger rail security, TSA has reported taking additional actions to strengthen the security of the passenger rail system. For example, TSA tested rail security technologies, developed training tools for rail workers, and issued a proposed rule in December 2006 regarding passenger and freight rail security, among other efforts. TSA has also taken steps to better coordinate with DOT regarding rail security roles and responsibilities and has worked to develop more effective partnerships with industry stakeholders. The memorandum of understanding between DHS and DOT was updated to include specific agreements between TSA and FTA in September 2005 and between TSA and FRA in September 2006 to delineate security-related roles and responsibilities, among other things, for passenger rail and mass transit. In addition, TSA established an Office of Transportation Sector Network Management and offices for each mode of transportation to develop security policies and partnerships with industry stakeholders, including passenger rail and other surface modes.

Prior to the creation of TSA in November 2001, FTA and FRA, within DOT, were primarily responsible for the security of passenger rail systems. These agencies undertook a number of initiatives to enhance the security of passenger rail systems after the September 11th attacks that are still in place today. Specifically, FTA launched a transit security initiative in 2002 that included security readiness assessments, technical assistance, grants for emergency response drills, and training. FTA also instituted the Transit Watch campaign in 2003—a nationwide safety and security awareness program designed to encourage the participation of transit passengers and employees in maintaining a safe transit environment. The program provides information and instructions to transit passengers and employees so that they know what to do and whom to contact in the event of an emergency in a transit setting.
FTA plans to continue this initiative, in partnership with TSA and OGT, and offer additional security awareness materials that address unattended bags and emergency evacuation procedures for transit agencies. In addition, in November 2003, FTA issued its Top 20 Security Program Action Items for Transit Agencies, which recommended measures for passenger rail operators to incorporate into their security programs to improve both security and emergency preparedness. FTA has also used research and development funds to develop guidance for security design strategies to reduce the vulnerability of transit systems to acts of terrorism. Further, in November 2004, FTA provided rail operators with security considerations for transportation infrastructure. This guidance provides recommendations intended to help operators deter and minimize attacks against their facilities, riders, and employees by incorporating security features into the design of rail infrastructure.

FRA has also taken a number of actions to enhance passenger rail security since September 11, 2001. For example, it has assisted commuter railroads in developing security plans, reviewed Amtrak’s security plans, and helped fund FTA security readiness assessments for commuter railroads. In the wake of the Madrid terrorist bombings in March 2004, nearly 200 FRA inspectors, in cooperation with TSA, conducted inspections of each of the 18 commuter railroads and Amtrak to determine what additional security measures had been put into place to prevent a similar occurrence in the United States. FRA also conducted research and development projects related to passenger rail security. These projects included rail infrastructure security and trespasser monitoring systems, as well as passenger screening and manifest projects, including explosives detection. Although FTA and FRA have played a supporting role in transportation security matters since the creation of TSA, they remain important partners in the federal government’s efforts to strengthen rail security, given their role in funding and regulating the safety of passenger rail systems. Moreover, as TSA moves ahead with its passenger rail security initiatives, FTA and FRA are continuing their passenger rail security efforts.

In May 2004, TSA issued security directives to the passenger rail industry to establish standard security measures for all passenger rail operators, including Amtrak. However, as we previously reported, it was unclear how TSA developed the requirements in the directives, how TSA planned to monitor and ensure compliance, how rail operators were to implement the measures, and which entities were responsible for their implementation. According to TSA, the directives were based upon FTA and American Public Transportation Association best practices for rail security. Specifically, TSA stated that it consulted a list of the top 20 actions FTA identified that rail operators can take to strengthen security. While some of the directives’ requirements correlate to information contained in the FTA guidance, the source for many of the requirements is unclear. Amtrak and FRA officials also raised concerns about some of the directives. For example, FRA officials stated that current FRA safety regulations requiring that engineer compartment doors be kept unlocked to facilitate emergency escapes conflict with the TSA security directive requirement that doors equipped with locking mechanisms be kept locked.
Other passenger rail operators we spoke with during our review stated that TSA did not adequately consult with the rail industry prior to developing and issuing these directives. In January 2007, TSA stated that it recognizes the need to closely partner with the passenger rail industry to develop security standards and directives. As we reported in September 2005, rail operators are required to allow TSA and DHS to perform inspections, evaluations, or tests based on execution of the directives at any time or location. However, we reported that some passenger rail operators have expressed confusion and concern about the role of TSA’s inspectors and the potential that TSA inspections could be duplicative of other federal and state rail inspections, such as FRA inspections. Since we issued our report, TSA officials reported that the agency has hired 100 surface transportation inspectors, whose stated mission is to, among other duties, monitor and enforce compliance with TSA’s rail security directives. Further, in September 2006, FRA’s and TSA’s roles and responsibilities for compliance inspections were outlined in an annex to the existing memorandum of understanding between DHS and DOT. The annex provides that when an FRA inspector observes a security issue during an inspection, this information will be provided to TSA. Similarly, if a TSA inspector observes a safety issue, this information will be provided to FRA. According to TSA, since the initial deployment of surface inspectors, these inspectors have developed relationships with security officials in passenger rail and transit systems, coordinated access to operations centers, participated in emergency exercises, and provided assistance in enhancing security. We will continue to assess TSA’s efforts to enforce compliance with rail security requirements, including those in the December 2006 proposed rule on rail security, during our follow-on review of passenger rail security.

TSA Has Reported Taking Additional Actions to Strengthen Passenger Rail Security, Improve Coordination with DOT, and Develop Industry Partnerships

In January 2007, TSA identified additional actions it had taken to strengthen passenger rail security. We have not verified or evaluated these actions. These actions include:

National explosive canine detection teams: Since late 2005, TSA reported that it has trained and deployed 53 canine teams to 13 mass transit systems to help detect explosives in the passenger rail system and serve as a deterrent to potential terrorists.

Visible Intermodal Prevention and Response Teams: This program is intended to provide teams of law enforcement, canines, and inspection personnel to mass transit and passenger rail systems to deter and detect potential terrorist actions. Since the program’s inception in December 2005, TSA reported conducting more than 25 exercises at mass transit and passenger rail systems throughout the nation.

Mass Transit and Passenger Rail Security Information Sharing Network: According to TSA, the agency initiated this program in August 2005 to develop information sharing and dissemination processes regarding passenger rail and mass transit security across the federal government, state and local governments, and rail operators.
National Transit Resource Center: TSA officials stated that they are working with FTA and DHS OGT to develop this center, which will provide transit agencies nationwide with pertinent information related to transit security, including recent suspicious activities, promising security practices, new security technologies, and other information.

National Security Awareness Training Program for Railroad Employees: TSA officials stated that the agency has contracted to develop and distribute computer-based training for passenger rail, rail transit, and freight rail employees. The training will include information on identifying security threats, observing and reporting suspicious activities and objects, mitigating security incidents, and other related information. According to TSA, the training will be distributed to all passenger and freight rail systems.

Transit Terrorist Tool and Tactics: This training course is funded through the Transit Security Grant Program and teaches transit employees how to prevent and respond to a chemical, biological, radiological, nuclear, or explosive attack. According to TSA, this course was offered for the first time during the fall of 2006.

National Tunnel Security Initiative: This DHS and DOT initiative aims to identify and assess risks to underwater tunnels, prioritize security funding to the most critical areas, and develop technologies to better secure underwater tunnels. According to TSA, this initiative has identified a list of 29 critical underwater rail transit tunnels.

DHS and TSA have also sought to enhance passenger rail security by conducting research on technologies related to screening passengers and checked baggage in the passenger rail environment. For example, TSA conducted a Transit and Rail Inspection Pilot, a $1.5 million effort to test the feasibility of using existing and emerging technologies to screen passengers, carry-on items, checked baggage, cargo, and parcels for explosives. According to TSA, the agency completed this pilot in July 2004. TSA officials told us that based upon preliminary analyses, the screening technologies and processes tested would be very difficult to implement on heavily used passenger rail systems because these systems carry high volumes of passengers and have multiple points of entry. However, TSA officials added that the screening processes used in the pilot may be useful on certain long-distance intercity train routes, which make fewer stops. Further, TSA officials stated that screening could be used either randomly or for all passengers during certain high-risk events or in areas where a particular terrorist threat is known to exist. For example, screening technology similar to that used in the pilot was used by TSA to screen certain passengers and belongings in Boston and New York rail stations during the 2004 Democratic and Republican national conventions. According to TSA, the agency is also researching and developing other passenger rail security technologies, including closed circuit television (CCTV) systems that can detect suspicious behavior, mobile passenger screening checkpoints to be used at rail stations, bomb-resistant trash cans, and explosive detection equipment for use in the rail environment. Finally, TSA recently reported that the DHS Science and Technology (S&T) Directorate conducted a rail security pilot, which tested the effectiveness of explosive detection technologies in partnership with the Port Authority of New York and New Jersey.
In December 2006, TSA issued a proposed rule regarding passenger and freight rail security requirements. TSA’s proposed rule would require that passenger and freight rail operators, certain facilities that ship or receive hazardous materials by rail, and rail transit systems take the following actions:

Designate a rail security coordinator to be available to TSA on a 24-hour, 7-day-a-week basis to serve as the primary contact for the receipt of intelligence and other security-related information.

Immediately report incidents, potential threats, and security concerns to TSA.

Allow TSA and DHS officials to enter their rail systems to conduct inspections, tests, and other duties.

Provide TSA, upon request and within one hour of receiving the request, with the location and shipping information of rail cars that contain a specific category and quantity of hazardous materials.

Provide for a secure chain of custody and control of rail cars containing a specified quantity and type of hazardous material.

The period for public comment on the proposed rule is scheduled to close in February 2007. TSA plans to review these comments and issue a final rule in the future.

With multiple DHS and DOT stakeholders involved in securing the U.S. passenger rail system and inherent relationships between security and safety, the need to improve coordination between the two agencies has been a consistent theme in our prior work in this area. In response to a previous recommendation we made, DHS and DOT signed a memorandum of understanding (MOU) to develop procedures by which the two departments could improve their cooperation and coordination for promoting the safe, secure, and efficient movement of people and goods throughout the transportation system. The MOU defines broad areas of responsibility for each department. For example, it states that DHS, in consultation with DOT and affected stakeholders, will identify, prioritize, and coordinate the protection of critical infrastructure. The MOU between DHS and DOT represents an overall framework for cooperation that is to be supplemented by additional signed agreements, or annexes, between the departments. These annexes are to delineate the specific security-related roles, responsibilities, resources, and commitments for mass transit, rail, research and development, and other matters. TSA signed annexes to the MOU with FRA in September 2006 and FTA in September 2005 describing the roles and responsibilities of each agency regarding passenger rail security. These annexes also describe how TSA and these DOT agencies will coordinate security-related efforts, avoid duplicating efforts, and improve coordination and communication with industry stakeholders.

In addition to the federal government, public and private rail operators share responsibility for securing passenger rail systems. As such, the need for TSA and other federal agencies to develop partnerships and coordinate their efforts with these operators is critical. To better coordinate and develop partnerships with industry stakeholders, TSA has established an Office of Transportation Sector Network Management (TSNM), which includes offices for each mode of transportation, such as mass transit (which includes passenger rail), highways, including commercial vehicles, and pipelines.
According to TSA, the TSNM Mass Transit Division coordinates federal security activities in the mass transit and passenger rail modes and works to develop partnerships with passenger rail operators, federal agencies, and industry associations. TSA also reports that it is working with industry partners to develop baseline security standards for passenger rail and other surface modes. We will continue to assess TSA’s efforts in strengthening federal and private sector partnerships during our follow-on work on passenger rail security.

U.S. passenger rail operators have taken numerous actions to secure their rail systems since the terrorist attacks of September 11, 2001, in the United States, and the March 11, 2004, attacks in Madrid. These actions included both improvements to system operations and capital enhancements to a system’s facilities, such as tracks, buildings, and train cars. All of the U.S. passenger rail operators we contacted have implemented some types of security measures—such as increased numbers and visibility of security personnel and customer awareness programs—that were generally consistent with those we observed in select countries in Europe and Asia. We also identified three rail security practices—covert testing, random screening of passengers and their baggage, and centralized research and testing—utilized by foreign operators or their governments that were not utilized, at the time of our review, by domestic rail operators or the U.S. government.

Both U.S. and foreign passenger rail operators we contacted have implemented similar improvements to enhance the security of their systems. A summary of these efforts follows.

Customer awareness: Customer awareness programs we observed used signage and announcements to encourage riders to alert train staff if they observed suspicious packages, persons, or behavior. Of the 32 domestic rail operators we interviewed, 30 had implemented a customer awareness program or made enhancements to an existing program. Foreign rail operators we visited also attempted to enhance customer awareness. For example, 11 of the 13 operators we interviewed had implemented a customer awareness program.

Increased number and visibility of security personnel: Of the 32 U.S. rail operators we interviewed, 23 had increased the number of security personnel they utilized since September 11th to provide security throughout their systems, or had taken steps to increase the visibility of their security personnel. Several U.S. and foreign rail operators we spoke with had instituted policies such as requiring their security staff, wearing brightly colored vests, to patrol trains or stations more frequently so that they were more visible to customers and potential terrorists or criminals. Operators believed that these policies made it easier for customers to contact security personnel in the event of an emergency, or if they spotted a suspicious item or person. At foreign sites we visited, 10 of the 13 operators had increased the number of their security officers throughout their systems in recent years because of the perceived increase in risk of a terrorist attack.

Increased use of canine teams: Of the 32 U.S. passenger rail operators we contacted, 21 were using canines to patrol their facilities or trains. Often, these units are used to detect the presence of explosives, and may be called in when a suspicious package is detected. In foreign countries we visited, passenger rail operators’ use of canines varied.
In some Asian countries, canines were not culturally accepted by the public and thus were not used for rail security purposes. As in the United States, and in contrast to Asia, most European passenger rail operators used canines for explosive detection or as deterrents.

Employee training: All of the domestic and foreign rail operators we interviewed had provided some type of security training to their staff, either through in-house personnel or an external provider. In many cases, this training consisted of ways to identify suspicious items and persons and how to respond to events once they occur. For example, the London Underground and the British Transport Police developed the “HOT” method for their employees to use to identify suspicious items in the rail system. In the HOT method, employees are trained to look for packages or items that are Hidden, Obviously suspicious, and not Typical of the environment.

Passenger and baggage screening practices: Some domestic and foreign rail operators have trained employees to recognize suspicious behavior as a means of screening passengers. Eight U.S. passenger rail operators we contacted were utilizing some form of behavioral screening. Abroad, we found that 4 of the 13 operators we interviewed had implemented forms of behavioral screening. All of the domestic and foreign rail operators we contacted have ruled out, for daily use in heavy traffic, an airport-style screening system in which each passenger and the passenger’s baggage are screened by a magnetometer or X-ray machine; operators cited cost, staffing, and customer convenience factors, among other reasons.

Upgrading technology: Many rail operators we interviewed had embarked on programs designed to upgrade their existing security technology. For example, we found that 29 of the 32 U.S. operators had implemented a form of CCTV to monitor their stations, yards, or trains. While these cameras cannot be monitored closely at all times, because of the large number of staff that would be required, many rail operators felt that the cameras acted as a deterrent, assisted security personnel in determining how to respond to incidents that had already occurred, and could be monitored if an operator had received information that an incident might occur at a certain time or place in their system. Abroad, all 13 of the foreign rail operators we visited had CCTV systems in place. In addition, 18 of the 32 U.S. rail operators we interviewed had installed new emergency phones or enhanced the visibility of the intercom systems they already had. As in the United States, a few foreign operators had implemented chemical or biological detection devices at rail stations, but their use was not widespread. Two of the 13 foreign operators we interviewed had implemented these sensors, and both were doing so on an experimental basis. In addition, police officers from the British Transport Police—responsible for policing the rail system in the United Kingdom—were equipped with pagers to detect chemical, biological, or radiological elements in the air, allowing them to respond quickly in case of a terrorist attack using one of these methods.

Access control: Tightening access control procedures at key facilities or rights-of-way is another way many rail operators have attempted to enhance security. A majority of domestic and selected foreign passenger rail operators had invested in enhanced systems to control unauthorized access at employee facilities and stations. Specifically, 23 of the 32 U.S.
operators had installed a form of access control at key facilities and stations. All 13 foreign operators had implemented some form of access control to their critical facilities or rights-of-way.

Rail system design and configuration: In an effort to reduce vulnerabilities to terrorist attack and increase security, passenger rail operators in the United States and abroad have been incorporating, or are now beginning to incorporate, security features into the design of new and existing rail infrastructure, primarily rail stations. Foreign rail operators had taken steps to remove traditional trash bins from their systems. Of the 13 operators we visited, 8 had either removed their trash bins entirely or replaced them with blast-resistant cans or transparent receptacles. Many foreign rail operators are also incorporating aspects of security into the design of their rail infrastructure. Of the 13 operators we visited, 11 had attempted to design new facilities with security in mind and had retrofitted older facilities to incorporate security-related modifications. For example, one foreign operator we visited was retrofitting its train cars with windows that passengers could open in the event of a chemical attack. In addition, the London Underground incorporates security into the design of all its new stations as well as when existing stations are modified. We observed several security features in the design of Underground stations, such as vending machines that have no holes in which someone could hide a bomb and that have sloped tops to reduce the likelihood that a bomb can be placed on top of the machine. In addition, stations are designed to provide staff with clear lines of sight to all areas of the station, such as underneath benches or ticket machines, and station designers try to eliminate or restrict access to any recessed areas where a bomb could be hidden. Figure 1 shows a diagram of several security measures that we observed in passenger rail stations both in the United States and abroad, including K-9 patrol units.

In our past work, we found that Amtrak faces security challenges unique to intercity passenger rail systems. First, Amtrak operates over thousands of miles, often far from large population centers. This makes its route system more difficult to patrol and monitor than one contained in a particular metropolitan region, and it causes delays in responding to incidents when they occur in remote areas. Also, outside the Northeast Corridor, Amtrak operates almost exclusively on tracks and in stations owned by freight rail companies. This means that Amtrak often cannot make security improvements to others’ rights-of-way or station facilities and that it is reliant on the staff of other organizations to patrol their facilities and respond to incidents that may occur. Furthermore, because Amtrak serves over 500 stations, only half of which are staffed, screening even a small portion of the passengers and baggage boarding Amtrak trains is difficult. Finally, Amtrak’s financial condition has never been strong—Amtrak has been on the edge of bankruptcy several times. We reported in September 2005 that Amtrak had taken some actions to enhance security throughout its intercity passenger rail system.
For example, Amtrak initiated a passenger awareness campaign; began enforcing restrictions on carry-on luggage that limit passengers to two carry-on bags, not exceeding 50 pounds; began requiring passengers to show identification after boarding trains; increased the number of canine units patrolling its system looking for explosives or narcotics; and assigned some of its police to ride trains in the Northeast Corridor. Also, Amtrak instituted a policy of randomly inspecting checked baggage on its trains. Amtrak was also making improvements to the emergency exits in certain tunnels to make evacuating trains easier in the event of a crash or terrorist attack. More recently, in January 2007, FRA reported that a systematic review of Amtrak's security policies and programs had been completed. According to FRA, the agency is currently working with Amtrak to implement the recommendations of this review.

While many of the security practices we observed in foreign rail systems are similar to those U.S. passenger rail operators are implementing, we identified three foreign practices that, as of September 2005, were not in use among the U.S. passenger rail operators we contacted, nor were they performed by the U.S. government. These practices are as follows.

Covert testing: Two of the 13 foreign rail systems we visited used covert testing to keep employees alert to their security responsibilities. Covert testing involves security staff staging unannounced events, such as planting suspicious packages or setting off alarms, to test the response of railroad staff. In one European system, this covert testing involves security staff placing suspicious items throughout the system to see how long it takes operating staff to respond to the item. Similarly, one Asian rail operator's security staff will randomly break security seals on fire extinguishers and open alarmed emergency doors to see how long it takes staff to respond. TSA conducts covert testing of passenger and baggage screening in aviation but has not conducted such testing in the rail environment.

Random screening: Of the 13 foreign operators we interviewed, 2 had some form of random screening of passengers and their baggage in place. Prior to the July 2005 London bombings, no passenger rail operators in the United States were practicing random passenger or baggage screening. However, during the Democratic National Convention in 2004, the Massachusetts Bay Transportation Authority had instituted a system of random screening of passengers.

National government clearinghouse on technologies and best practices: According to passenger rail operators in five countries we visited, their national governments had centralized the process for performing research and development of passenger rail security technologies and maintained a clearinghouse of technologies and security best practices for passenger rail operators. We reported in September 2005 that no U.S. federal agency had compiled or disseminated information on research and development and other best practices for U.S. rail operators.

Implementing covert testing, random screening, or a government-sponsored clearinghouse for technologies and best practices in the United States could pose political, legal, fiscal, and cultural challenges because of the differences between the United States and these foreign nations.
Many foreign nations have dealt with terrorist attacks on their public transportation systems for decades, whereas rail in the United States has not been specifically targeted by terrorists. According to foreign rail operators, these experiences have resulted in greater acceptance of certain security practices, such as random searches, which the U.S. public may view as a violation of their civil liberties or which may discourage them from using public transportation. The impact of security measures on passengers is an important consideration for domestic rail operators, since most passengers could choose another means of transportation, such as a personal automobile. As such, security measures that limit accessibility, cause delays, increase fares, or otherwise cause inconvenience could push people away from rail and into their cars. In contrast, the citizens of the European and Asian countries we visited are more dependent on public transportation than most U.S. residents and therefore may be more willing to accept intrusive security measures. Nevertheless, in order to identify innovative security measures that could help further mitigate terrorism risks to rail assets—especially as part of a broader risk management approach discussed earlier—it is important to consider the feasibility, costs, and benefits of implementing the three rail security practices we identified in foreign countries. Officials from DHS, DOT, passenger rail industry associations, and rail systems we interviewed told us that operators would benefit from such an evaluation.

Since our report on passenger rail security was issued, TSA has reported taking steps to coordinate with foreign passenger rail operators and governments to identify security best practices. For example, TSA reported working with British rail security officials to identify best practices for detecting and handling suspicious packages in rail systems. In addition, in January 2007, a TSA official stated that the agency was developing a clearinghouse of transportation security technologies, but a completion date for this effort was not available.

In conclusion, Mr. Chairman, the 2005 London rail bombings and the 2006 rail attacks in Mumbai, India, highlight the inherent vulnerability of passenger rail and other surface transportation systems to terrorist attack. Moreover, securing rail and other surface transportation systems is a daunting task, requiring that the federal government develop clear strategies, including goals and objectives, for strengthening the security of these systems, based on an assessment of the risks they face. Since our September 2005 report, DHS components have taken steps to assess the risks to the passenger rail system, such as working with rail operators to update prior risk assessments and facilitating rail operator security self-assessments. According to TSA, the agency plans to use these assessment results to set priorities for securing the rail assets deemed most at risk, such as underground and underwater rail infrastructure and high-density passenger rail stations. A comprehensive assessment of the risks facing the transportation sector and each mode, including passenger rail, will be a key component of the TSSP and supporting plans for each mode of transportation.
Until TSA issues these plans, however, the agency lacks a clearly communicated strategy with goals and objectives for securing the overall transportation sector and each mode of transportation, including passenger rail. TSA has also taken steps to improve coordination with federal, state, and local governments and has reported taking steps to strengthen partnerships with passenger rail industry stakeholders to enhance the security of the passenger rail system. As TSA moves forward to issue the TSSP and supporting plans for each mode of transportation, it will be important that the agency articulate its strategy for securing rail and other modes to those government agencies and industry stakeholders that share the responsibility for securing these systems. We will continue to assess DHS's and DOT's efforts to secure the U.S. passenger rail system during follow-on work to be initiated later this year.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have at this time. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404. Individuals making key contributions to this testimony include John Hansen, Assistant Director; Chris Currie; and Tom Lombardi.

Passenger Rail Security: Federal Strategy and Enhanced Coordination Needed to Prioritize and Guide Security Efforts. GAO-07-442T. Washington, D.C.: February 6, 2007.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-07-225T. Washington, D.C.: January 18, 2007.
Passenger Rail Security: Evaluating Foreign Security Practices and Risk Can Help Guide Security Efforts. GAO-06-557T. Washington, D.C.: March 29, 2006.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-06-181T. Washington, D.C.: October 20, 2005.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-05-851. Washington, D.C.: September 9, 2005.
Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005.
Rail Security: Some Actions Taken to Enhance Passenger and Freight Rail Security, but Significant Challenges Remain. GAO-04-598T. Washington, D.C.: March 23, 2004.
Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: September 9, 2003.
Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003.
Rail Safety and Security: Some Actions Already Taken to Enhance Rail Security, but Risk-based Plan Needed. GAO-03-435. Washington, D.C.: April 30, 2003.
Transportation Security: Post-September 11th Initiatives and Long-term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.
Mass Transit: Federal Action Could Help Transit Agencies Address Security Challenges. GAO-03-263. Washington, D.C.: December 13, 2002.
Mass Transit: Challenges in Securing Transit Systems. GAO-02-1075T. Washington, D.C.: September 18, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The 2005 London subway bombings and the 2006 rail attacks in Mumbai, India, highlighted the vulnerability of passenger rail and other surface transportation systems to terrorist attack and demonstrated the need for greater focus on securing these systems. This testimony is based primarily on GAO's September 2005 passenger rail security report and selected program updates obtained in January 2007. Specifically, it addresses (1) the extent to which the Department of Homeland Security (DHS) has assessed the risks facing the U.S. passenger rail system and developed a strategy based on risk assessments for securing all modes of transportation, including passenger rail; (2) the actions that the Transportation Security Administration (TSA) and other federal agencies have taken to enhance the security of the U.S. passenger rail system, improve federal coordination, and develop industry partnerships; and (3) the security practices that domestic and selected foreign passenger rail operators have implemented to enhance security.

The DHS Office of Grants and Training and TSA have begun to assess the risks facing the passenger rail system. However, we reported in September 2005 that TSA had not completed a comprehensive risk assessment of passenger rail. We found that, until TSA does so, the agency may be limited in its ability to prioritize passenger rail assets and help guide security investments. We also reported that DHS had begun, but not yet completed, a framework to help agencies and the private sector develop a consistent approach for analyzing and comparing risks among and across critical sectors. Since that time, TSA has reported taking additional steps to assess the risks to the passenger rail system. However, TSA has not yet issued the required Transportation Sector Specific Plan and supporting plans for passenger rail and other surface transportation modes, based on risk assessments. Until TSA does so, the agency lacks a clearly communicated strategy with goals and objectives for securing the transportation sector, including passenger rail.

After September 11, the Department of Transportation (DOT) initiated efforts to strengthen passenger rail security. TSA has also taken actions to strengthen rail security, including issuing security directives, testing security technologies, and issuing a proposed rule for passenger and freight rail security, among other efforts. However, federal and rail industry stakeholders have questioned the extent to which TSA's directives were based on industry best practices. TSA has also taken steps to strengthen coordination with DOT and develop partnerships with industry stakeholders. DHS and DOT have updated their memorandum of understanding to clarify their respective security roles and responsibilities for passenger rail. TSA also established an Office of Transportation Sector Network Management and offices for each transportation mode to develop security policies and work to strengthen industry partnerships for passenger rail and other surface modes.

U.S. and foreign passenger rail operators GAO visited have also taken actions to secure their rail systems. Most had implemented customer security awareness programs, increased security personnel, increased the use of canines to detect explosives, and enhanced employee training programs. GAO also observed security practices among foreign passenger rail systems that are not currently used by U.S. rail operators or by the U.S. government, which could be considered for use in the United States.
For example, some foreign rail operators randomly screen passengers or use covert testing to help keep employees alert to security threats. While introducing these security practices in the United States may pose political, legal, fiscal, and cultural challenges, they warrant further examination. TSA has also reported taking steps to identify foreign best practices for rail security and working to develop a clearinghouse of security technologies.
Private sector companies receive billions of dollars annually in federal government contracts for goods and services. Data from GSA show that federal contracts valued at $25,000 or more totaled almost $176 billion in fiscal year 1994. Approximately 22 percent of the labor force, or 26 million workers, is employed by companies with federal contracts and subcontracts, according to fiscal year 1993 estimates of the Department of Labor's Office of Federal Contract Compliance Programs (OFCCP).

Federal law and an executive order place greater responsibilities on federal contractors than on other employers in some areas of workplace activity. For example, federal contractors must comply with Executive Order 11246, which requires a contractor to develop an affirmative action program detailing the steps that the contractor will take and has already taken to ensure equal employment opportunity for all workers, regardless of race, color, religion, sex, or national origin. In addition, the Service Contract Act and the Davis-Bacon Act require the payment of area-prevailing wages and benefits on federal contracts in the service and construction industries, respectively. Recently, the administration issued an executive order that would bar federal contractors from receiving contracts if they hire permanent replacements for striking workers and another that would bar contractors that hire illegal immigrants. Additionally, under the Contract Work Hours and Safety Standards Act, Labor may debar contractors in the construction industry for "repeated willful or grossly negligent" violations of safety and health standards issued under the Occupational Safety and Health Act.

Under federal procurement regulations, agencies may deny an award of a contract or debar or suspend a contractor for a variety of reasons, including failure to comply with safety and health standards. Before awarding a contract, an agency must make a positive finding that the bidder is responsible as defined in federal procurement regulations. Also, federal agencies can debar or suspend companies for any "cause of so serious or compelling a nature that it affects the present responsibility of a Government contractor or subcontractor." Debarred companies are not allowed to receive federal contracts (or other forms of federal financial assistance, such as grants and loans) for a period of time, generally not to exceed 3 years. Suspended companies are temporarily disqualified from receiving federal contracts or other forms of federal financial assistance. In determining whether a federal contractor is responsible, agency awarding and debarring officials could consider compliance with safety and health standards.

To help foster consistency among agency regulations concerning debarment and suspension, Executive Order 12549, issued in February 1986, established the Interagency Committee on Debarment and Suspension, which consists of agency representatives designated by the Office of Management and Budget (OMB). This committee meets monthly and gives agency representatives (primarily debarring officials) the opportunity to share information about companies that they are either trying to debar or suspend or trying to bring into compliance with various laws and regulations in order to avoid having to take an adverse contracting action. At its monthly meetings, the committee also helps interpret regulations on debarment or suspension issued by OMB.
When more than one agency has an interest in a particular federal contractor, the Interagency Committee coordinates the assignment of lead agency responsibility for any actions taken against that contractor.

GSA maintains the Federal Procurement Data System (FPDS), which tracks firms awarded contracts of $25,000 or more in federal funding for products and services. For fiscal year 1994, FPDS tracked information on 179,977 contracts totaling almost $176 billion. Although it is difficult to estimate the number of federal contractors, GSA reports that there may be as many as 60,000, because FPDS contains that many unique corporate identification codes. FPDS contains a variety of information, including the contractor's name and location, the awarding agency, the principal place of contract performance, and contract dollar amounts awarded. FPDS does not contain information on contractors' safety and health practices.

Most private sector firms—regardless of whether they are federal contractors—must comply with safety and health standards issued under the Occupational Safety and Health Act. The act was meant "to assure safe and healthful working conditions for working men and women." The Secretary of Labor established OSHA in 1970 to carry out a number of responsibilities under the act, including developing and enforcing safety and health standards, educating workers and employers about workplace hazards, and establishing responsibilities and rights for both employers and employees for the achievement of better safety and health conditions. Even though OSHA has been in existence for 25 years, work-related illness and injury remain a substantial problem. A total of 6,588 workplace fatalities—on average, 18 fatalities a day—were reported to the Bureau of Labor Statistics in 1994, a 4-percent increase over 1993. In addition, a total of 6.8 million injuries and illnesses were reported in 1994.

OSHA cites employers for violations of standards covering a variety of threats to workplace safety and health. Safety standards include those designed to protect workers against falls from stairs or scaffolds (walking-working surfaces), against injuries due to inadequate machine guarding (machine guarding), and against electrical hazards (electrical). Some standards (for example, excavations, underground construction, and steel erection) protect against construction-related injuries. Health standards protect against exposure to toxic substances such as lead, asbestos, and bloodborne pathogens (referring to occupational exposures to blood). There are also more generic informational standards relating to the recording and reporting of occupational injuries and illnesses and to informing employees about chemical hazards in the workplace. OSHA may also cite employers for hazards not covered by any standard under Section 5(a)(1) of the Occupational Safety and Health Act, referred to as the General Duty Clause. This clause requires that employers furnish employees a place of work "free from recognized hazards." OSHA has relied on the General Duty Clause, for example, to regulate employee exposure to tuberculosis in the health care industry. OSHA has also relied on the General Duty Clause to penalize companies for ergonomic hazards such as cumulative trauma disorders, including lower back pain, carpal tunnel syndrome, and tendinitis. OSHA characterizes violations as other-than-serious, serious, willful, or repeat, with civil penalties in specified increasing amounts for these various types of violations.
In addition, OSHA designates violations as unclassified when companies make significant concessions to OSHA, perhaps to avoid losing coverage under state workers' compensation programs or to minimize the adverse publicity attached to violations as originally classified. Additional penalties can be assessed either when a company fails to abate a hazard or under OSHA's "egregious" policy. Failure to abate or correct a prior violation may bring an additional civil penalty for each day that the violation continues beyond the prescribed abatement date. Under OSHA's "egregious" policy, an employer is cited for each instance of a particular violation—or for each worker exposed to a hazard. Since it was initiated in 1986, this policy has resulted in penalties for some inspections running into the millions of dollars. Although inspections in which a company is cited in this fashion are not common, the number of these inspections more than doubled, from 8 in fiscal year 1994 to 17 in fiscal year 1995.

OSHA is authorized to conduct workplace inspections to determine whether employers are complying with safety and health standards and to issue citations and assess penalties when an employer is not in compliance. The proposed penalty reflects an OSHA compliance officer's judgment of the nature and severity of violations. However, these proposed penalties are often reduced. OSHA justifies such reductions as a means of getting employers to abate workplace problems quickly by avoiding the contesting of citations. If employers contest citations or proposed penalties, they do not have to abate the cited hazard until the case is resolved, thereby leaving workers unprotected. If cited for violations during an inspection, an employer has 15 working days to either (1) accept the citation, abate the hazards, and pay the penalties; (2) have an informal conference with local OSHA officials and negotiate an informal settlement agreement; or (3) formally contest the citation before the Occupational Safety and Health Review Commission (OSHRC). After reviewing a contested citation, OSHRC may affirm, vacate, or modify OSHA's citations and proposed penalties. Once the inspection is closed (either because the employer accepted the citation or because a contested citation was resolved), the penalty is referred to as the actual penalty.

OSHA targets a portion of its inspection resources toward facilities that may be more hazardous to employees. OSHA has recently taken steps to revise its inspection targeting priorities, under which employers in a certain industry are currently treated alike regardless of their individual safety and health performance. By integrating worksite-specific information, including excessive rates of workplace injury and illness and a record of serious and repeat violations, into its targeting procedures, OSHA hopes to enhance the effectiveness of its enforcement system.

OSHA maintains a database that tracks all OSHA inspections. The Integrated Management Information System (IMIS) database includes over 2 million inspections from 1972 to 1995, with 72,950 closed inspections in 1994 alone in which the employer was cited for at least one violation. IMIS includes such information as whether the inspections were performed by OSHA or a state-operated program, penalty amounts (proposed and actual), the type of violation (for example, serious, willful, or repeat), the standards violated, whether fatalities or injuries occurred, and abatement information.
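To make the structure of such inspection records concrete, the following is a minimal sketch, in Python, of an IMIS-style record containing the fields just described. The field names, types, and sample values are illustrative assumptions for purposes of this discussion, not OSHA's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class ViolationType(Enum):
    OTHER_THAN_SERIOUS = "other-than-serious"
    SERIOUS = "serious"
    WILLFUL = "willful"
    REPEAT = "repeat"
    UNCLASSIFIED = "unclassified"

@dataclass
class InspectionRecord:
    """Illustrative IMIS-style inspection record (hypothetical field names)."""
    inspection_id: str                    # hypothetical identifier
    state_program: bool                   # performed by a state-operated program rather than OSHA
    proposed_penalty: float               # penalty as initially assessed
    actual_penalty: float                 # penalty once the inspection is closed
    violation_types: list[ViolationType]  # e.g., serious, willful, repeat
    standards_violated: list[str]         # e.g., machine guarding, electrical
    fatalities: int
    injuries: int
    abated: bool                          # employer-reported abatement of cited hazards

# Example: a closed inspection with a "significant" proposed penalty,
# which this report defines as $15,000 or more
record = InspectionRecord(
    inspection_id="1994-000001",
    state_program=False,
    proposed_penalty=70_000.0,            # roughly the report's average proposed penalty
    actual_penalty=32_000.0,              # roughly the report's average actual penalty
    violation_types=[ViolationType.SERIOUS, ViolationType.WILLFUL],
    standards_violated=["machine guarding", "electrical"],
    fatalities=0,
    injuries=1,
    abated=True,
)

print(record.proposed_penalty >= 15_000)  # True: a significant-penalty inspection
```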
In addition, IMIS includes some data on the worksite inspected, including the type of industry it is engaged in and the number of workers employed. This database does not contain information about whether violators receive federal contracts.

Federal contracts have been awarded to employers that have violated occupational safety and health regulations. Restricting our analysis to only those fiscal year 1994 inspections in which the company was assessed a significant proposed penalty of $15,000 or more, we found that 261 federal contractors had violated the Occupational Safety and Health Act. Because some of the 261 federal contractors owned more than one worksite, we identified a total of 345 inspections, representing 16 percent of all inspections closed in fiscal year 1994 in which a significant proposed penalty was assessed for OSHA violations (see fig. I.1). Key characteristics of these violators, their federal contracts, and the specific standards violated appear in appendixes II and III.

These federal contractors received $38 billion in contracts in fiscal year 1994. Altogether, about 22 percent of the $176 billion in fiscal year 1994 contracts went to these 261 federal contractors (see fig. 1). The size of these federal contracts differed greatly. Over one-third of the 261 federal contractors assessed significant proposed penalties for OSHA violations received less than $1 million each. Nearly 5 percent received more than $500 million each in federal contracts in fiscal year 1994. These 12 companies were General Electric Co. ($8.7 billion); Lockheed-Martin Corp. ($7 billion); Westinghouse Electric Corp. ($4.6 billion); United Technologies Corp. ($2.8 billion); General Motors Corp. ($2.4 billion); The Boeing Co. ($1.3 billion); Textron, Inc. ($1.2 billion); American Telephone and Telegraph (AT&T) ($874 million); Fulcrum II Limited Partnership ($798 million); Dyncorp ($673 million); Exxon Corp. ($532 million); and Tenneco Packaging, Inc. ($505 million).

Three-fourths of the $38 billion in contracts awarded in fiscal year 1994 to these federal contractors assessed significant proposed penalties for OSHA violations came from the Department of Defense. Within the Department of Defense, the Air Force and the Navy awarded by far the most contract dollars to violators ($11.8 billion and $9.6 billion, respectively). In addition to the Department of Defense, large amounts of contract dollars were awarded to violators by the Department of Energy ($5.8 billion) and the National Aeronautics and Space Administration ($1.2 billion). Other agencies that awarded more than $100 million in contracts to violators include the Department of Agriculture ($382 million), the Department of Transportation ($365 million), GSA ($274 million), the Department of Justice ($242 million), and the Tennessee Valley Authority ($113 million). (See fig. 2.)

Over one-half of the 345 worksites (56 percent) penalized for safety and health violations were engaged in manufacturing. An examination of the violators' Standard Industrial Classification (SIC) codes shows that many of these worksites manufactured paper, food, or primary and fabricated metals. Although manufacturing is the industry in which most violators were engaged, a significant percentage of worksites (18 percent) were engaged in construction, and this is likely an underestimate because of the difficulties we experienced verifying that worksites inspected in that industry were part of the same company as the federal contractor. (See fig. 3.)
(Difficulties we encountered verifying construction worksites are explained in app. I.) Many (68 percent) of the worksites where the violations occurred were relatively small, employing 500 or fewer workers. Just over 15 percent of the worksites were very small, employing 25 or fewer workers. (See fig. 4.) Although few worksites employed large numbers of workers, the federal contractors that own these worksites often employ large numbers of workers and have numerous worksites throughout the country. Examples include Boise Cascade Corp.; General Motors Corp.; Georgia-Pacific Corp.; International Paper Co.; Sears Roebuck & Co.; and the United Parcel Service Amer., Inc. (UPS). Some of these federal contractors do billions of dollars in annual sales and employ hundreds of thousands of workers. For example, UPS employs 285,000 workers altogether, although most of the 24 worksites inspected employed fewer than 1,000 workers. One UPS worksite, located in Twin Mountain, New Hampshire, employed only 40 workers.

We were unable to determine whether a company's contract activity occurred at the same worksite where the company was cited for safety and health violations. Data on the place of contract performance were not specific enough to enable us to confirm whether the locations were the same as those where the OSHA inspections were conducted. It would have been difficult to get companies to confirm whether they conducted federal contract work at the particular worksite where the violations occurred. This information might not be readily available, or it might be considered confidential or proprietary. Finally, because the nature of some contract work is so dispersed, with contract activity of some form occurring across multiple worksites, it can be difficult for even the company itself to verify exactly what activities at various worksites were supported by federal contracts. However, it is possible, particularly given the size of some federal contractors, that at least some violations occurred at worksites other than those with contract activity. (See app. I.)

The number and nature of the violations for which these 261 federal contractors were cited, the fatalities and injuries associated with violations found in the 345 inspections, and the high penalties assessed suggest that workers were at substantial risk of injury or illness in some workplaces of these contractors. Nevertheless, some of these contractors also operate worksites identified as exemplary with respect to safety and health practices. In addition, the worksites associated with significant proposed penalties represent a small percentage of the total worksites of some contractors that are large companies.

Most of the 345 inspections involved at least one violation that was serious (88 percent), posing a risk of death or serious physical harm to workers, or willful (69 percent), in which the employer intentionally and knowingly committed a violation (see fig. 5). Included among these inspections were three in which the contractor was cited under OSHA's "egregious" policy—situations in which OSHA imposes larger total fines by citing the company for every instance of the same violation or for each worker exposed to a hazard. Federal contractors were cited for repeat violations in 29 inspections (8 percent). A repeat violation occurs when the company is cited in the current inspection for a substantially similar violation within 3 years of the final order or abatement date of the previous citation.
In only one inspection was a federal contractor assessed additional penalties for failing to abate a hazard; that is, the company failed to correct the same violation for which it was cited in a prior inspection. However, these relatively low rates of citations for repeat violations and of penalties for failing to abate hazards may reflect OSHA's limited resources for returning to worksites it has inspected in the past. Only about 1 percent of all fiscal year 1994 inspections were follow-up or monitoring inspections. In addition, OSHA does not currently penalize employers for failing to provide proof that the company has abated the hazard. As a result, OSHA has only the employer's statement that abatement has taken place unless a follow-up or monitoring inspection is performed.

Examples of federal contractors cited for serious, willful, or repeat violations or assessed additional penalties under OSHA's "egregious" policy or for failing to abate hazards follow: Bath Iron Works Corp. and Boise Cascade Corp. were the only contractors assessed penalties under OSHA's "egregious" policy. These two contractors were also cited for a number of serious, willful, and repeat violations. Bath Iron Works Corp., a shipbuilding and repair company, was cited for violations of shipyard standards as well as standards for walking-working surfaces, electrical work, and recording and reporting at its worksite in Bath, Maine. Boise Cascade Corp., a manufacturer of wood and paper products, was cited under OSHA's "egregious" policy for violations in two inspections at its paper mill in Rumford, Maine. The company violated special industry standards for paper mills in one of these inspections as well as standards for machinery and machine guarding, electrical work, and recording and reporting. International Paper Co., in one of six inspections in which it was assessed a significant proposed penalty, was cited in 1991 for 37 repeat violations at a paper mill in Moss Point, Mississippi. Among the repeat violations, International Paper was cited for failing to protect its workers from burns caused by inadequately insulated steam pipes. The company had been cited in 1988 for similar violations. The Gunver Manufacturing Co. in Manchester, Connecticut, was assessed additional penalties for failing to abate a machine-guarding hazard, among other hazards. The first inspection took place in 1992; in two follow-up inspections in 1993 and 1994, OSHA penalized Gunver for failing to abate the hazards cited in the first inspection.

At worksites of 50 federal contractors, 35 fatalities and 85 injuries occurred. Fifty-five of the 85 injuries were serious enough for the worker to be hospitalized. The accidents varied depending upon the nature of the work. For example: Acme Steel Co. was cited for hazardous materials violations after one worker died and another was hospitalized from exposure to blast furnace gas caused by an equipment failure at a steel mill in Chicago. Rhone Poulenc Basic Chemical, at an industrial chemicals worksite in Martinez, California, was cited for violations of state standards requiring protections against accidental discharge of liquid from above-ground storage tanks and for failing to provide adequate extinguishing equipment. One worker died and another was hospitalized with chemical burns when they mistakenly extracted a valve, releasing 80,000 gallons of acid sludge from a storage tank.
Clean Harbors of Kingston, Inc., was cited when a worker was asphyxiated and died after coworkers were unable to retrieve him from a tank containing chemical sludge when his air supply ran low. This refuse collection and disposal facility in Providence, Rhode Island, was cited for violating the General Duty Clause because of inadequate rescue capability, inadequate ventilation, and failure to sample the air in the confined space. (Details of all inspections that involved fatalities and injuries are provided in app. IV.)

Most of the violations (72 percent) were of general industry standards, including failure to protect workers from electrical hazards (11 percent) and from injuries due to inadequate machine guarding (10 percent). (See fig. 6.) Examples of federal contractors that violated electrical and machine-guarding standards include the following: A Dunlop Tire Corp. worksite in Huntsville, Alabama, was cited for inadequate machine guarding after a worker who placed fabric on a rotating cylinder got caught in the machinery and died from asphyxia after being wound up inside the fabric. At its Evansville, Indiana, worksite where refrigerators are made, the Whirlpool Corp. was cited for inadequate machine guarding after a worker's hand and forearm were caught while he was manually feeding coil through a mechanical power press; the hand and forearm had to be amputated. Exide Electronics Corp., at a worksite in Raleigh, North Carolina, where transformers are produced, was cited for violating electrical standards when one worker was hospitalized due to electric shock while cleaning consoles with liquid cleaners; the consoles had not been disconnected from the power supply.

Violations of construction industry standards represented 8 percent of all violations, although this is likely an underestimate because of difficulties we experienced verifying the ownership of worksites engaged in construction (see app. I). Seven percent of all violations related to inadequate recording or reporting of occupational illnesses and injuries, and 6 percent involved the Hazard Communication Standard. Only 2 percent of all violations involved the General Duty Clause, relied on by OSHA when more specific standards are not applicable.

These 261 federal contractors were assessed a total of $24.1 million in proposed penalties and $10.9 million in actual penalties. These penalties represent about one-fourth of the proposed and actual penalties, respectively, for all inspections closed during fiscal year 1994 in which the company was assessed a significant proposed penalty. Although most (76 percent) of the 345 inspections had a proposed penalty between $15,000 and $50,000, the federal contractor was assessed an especially high proposed penalty of $100,000 or more in 8 percent of these inspections (see fig. 7). The 26 inspections in which the federal contractor was assessed a proposed penalty of $100,000 or more in a single inspection are identified in appendixes II and III. The average proposed penalty for all 345 inspections was about $70,000; the average actual penalty for these inspections was about $32,000. The actual penalties for many (63 percent) of the 345 inspections were less than $15,000. In fact, the penalties in many of the 345 inspections were reduced by between 40 and 80 percent (see fig. 8). Proposed penalties were reduced to nothing in six inspections, involving Amoco Gas Co.; Boston University; C.H. Heist Corp.; Dynalectric; Fletcher Pacific Construction; and Frito-Lay, Inc.
(one of its three inspections). In contrast, the actual penalty for Morrison-Knudsen Corp., cited for violations committed on a bridge demolition project in New York City, was higher than the proposed penalty. The company agreed to pay a higher penalty in a settlement agreement in which its violations were changed to unclassified.

Thirty-nine of the 261 federal contractors were assessed a significant proposed penalty more than once in fiscal year 1994 for violations that occurred at different worksites owned by or associated with the same corporate parent company. Appendix V lists all contractors that were assessed significant proposed penalties in more than one inspection closed in fiscal year 1994. These companies can be large, with multiple worksites across the country, and they sometimes have diversified operations. Examples of these large companies are Boise Cascade Corp.; General Motors Corp.; Georgia-Pacific Corp.; International Paper Co.; Sears Roebuck & Co.; and UPS.

General Motors Corp. was assessed significant proposed penalties for safety and health violations in five different inspections in fiscal year 1994. In four of these inspections, conducted at worksites in Ohio and Oklahoma that manufacture motor vehicles, General Motors was cited for violations of standards for hazardous materials, personal protective equipment, electrical work, and machine guarding, among others. General Motors also owns Delco Electronics. A Delco facility in Oak Creek, Wisconsin, that manufactures semiconductors and related devices was cited for lockout/tagout violations—referring to inadequate servicing and maintenance procedures that could lead to a worker injury through the unexpected start-up of machinery. Being assessed significant proposed penalties in multiple inspections could be explained, in part, by the size of the parent company: General Motors Corp. employs 711,000 workers, has $138 billion in annual sales, and is organized into more than 50 different divisions.

Sears Roebuck & Co. was assessed significant proposed penalties for safety and health violations at four different worksites. Three of the four were automotive repair shops in Ohio, New York, and Massachusetts; the other was a general merchandise store in Iowa. The Sears automotive repair shops were cited for violations of the General Duty Clause as well as standards for occupational noise exposure and hazard communication. The merchandise store was cited for violations of standards for materials handling and storage. Like General Motors Corp., Sears Roebuck & Co. is a large company, employing 249,000 workers with annual sales of $50.8 billion. In addition to its retail operations and its automotive repair division, Sears has other divisions and subsidiaries, including a savings bank.

With a total of 24 inspections in which it was assessed a significant proposed penalty, UPS had more significant-penalty inspections closed in fiscal year 1994 than any other contractor in our review. These 24 inspections occurred at facilities providing courier services, both by truck and by air, across 10 different states. In most of these inspections, UPS was cited for failing to fully comply with a corporatewide settlement agreement to improve its emergency response to hazardous conditions created when packages are damaged in transit. Because of OSHA's concern that UPS had failed to fully implement the corporatewide settlement agreement, the two parties reached a supplemental settlement agreement.
UPS is also a large company, with 285,000 workers and annual sales of $17.7 billion.

A review of prior-year inspection records of these federal contractors with significant proposed penalties showed a number of additional inspections, including some that also resulted in significant proposed penalties. Because of omitted corporate identification numbers, we were able to retrieve prior inspection information for only about one-half of the worksites at which significant proposed penalties had been assessed for violations in fiscal year 1994. Nevertheless, we found 221 prior inspections from 1987 through 1993. Nine percent of these worksites had been assessed a proposed penalty of $15,000 or more in these prior inspections. It is possible that there are additional significant-penalty inspections among our 261 federal contractors that we could not retrieve because of missing corporate identification codes. However, OSHA has taken actions to improve its collection of these codes for worksites inspected. A corporate identification code would make it easier for OSHA or a contracting agency to determine whether a company has a history of OSHA violations and whether violations have been committed across multiple facilities or worksites owned by the same federal contractor.

Although federal contractors were assessed significant proposed penalties because of safety and health violations at some worksites, some of these same contractors operated other worksites with exemplary safety and health practices. These are worksites that have been extensively evaluated and found qualified to participate in OSHA's Voluntary Protection Program (VPP). VPP worksites qualify on the basis of OSHA's review of their application to be a VPP participant and of site visits in which OSHA determines whether the company maintains a comprehensive safety and health program. These companies are rewarded for their demonstrated commitment to safety and health by having the worksite excluded from OSHA's inspection lists. OSHA told us that at least three federal contractors we identified as violators operated worksites (although not the worksites assessed significant proposed penalties for violations) that were selected for the VPP program. In addition, for some of the federal contractors we identified, the safety and health violations may reflect a localized worksite compliance problem rather than a systemic corporatewide compliance problem. For example, large companies like General Electric Co., Westinghouse Electric Corp., United Technologies Corp., AT&T, and Exxon Corp. had only one worksite that we identified because significant proposed penalties had been assessed. These companies own many other worksites where there may not be a safety and health compliance problem or, given OSHA's limited enforcement resources, where there may not have been recent inspections, in which case no information exists to determine whether there is a compliance problem.

To improve federal contractor compliance, one option is to develop policies and procedures regarding the exchange of information between OSHA and contracting agencies to increase the likelihood that a company's safety and health record will be considered in contracting decisions. This option is similar to our recommendation in an earlier report that agencies develop an information-sharing approach to facilitate the identification of federal contractors who violate laws that protect workers' rights to bargain collectively.
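To illustrate the kind of matching such an information exchange would require, the following is a minimal sketch, in Python, of flagging significant-penalty inspections that involve federal contractors by joining inspection records to contract awards on a shared corporate identification code. All record layouts, identifiers, and dollar values here are hypothetical; only the $15,000 threshold reflects this report's definition of a significant proposed penalty.

```python
# A "significant" proposed penalty, as defined in this report
SIGNIFICANT_PENALTY = 15_000

# Hypothetical inspection records, each carrying a corporate identification code
inspections = [
    {"corp_id": "A123", "worksite": "Rumford, ME", "proposed_penalty": 120_000},
    {"corp_id": "B456", "worksite": "Bath, ME", "proposed_penalty": 9_500},
    {"corp_id": "C789", "worksite": "Chicago, IL", "proposed_penalty": 45_000},
]

# Hypothetical contract-award records from a procurement database such as FPDS
contracts = [
    {"corp_id": "A123", "agency": "Navy", "award_dollars": 250_000},
    {"corp_id": "B456", "agency": "GSA", "award_dollars": 80_000},
]

# Corporate codes appearing in the contract data identify federal contractors
contractor_ids = {c["corp_id"] for c in contracts}

# Flag significant-penalty inspections of federal contractors: the kind of
# short list that could be shared with awarding and debarring officials
flagged = [
    i for i in inspections
    if i["corp_id"] in contractor_ids and i["proposed_penalty"] >= SIGNIFICANT_PENALTY
]

for hit in flagged:
    print(f"Contractor {hit['corp_id']}: significant-penalty inspection at {hit['worksite']}")
# Prints only the A123 inspection: B456's penalty is below the threshold,
# and C789 does not appear in the contract data.
```

As the sketch suggests, the usefulness of any such exchange depends on both databases carrying the same corporate identification code, which is why the omitted codes discussed above limited our own retrieval of prior inspections.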
A second option is for OSHA to consider giving inspection priority to those high-hazard workplaces operated by companies with federal contracts.

Before awarding a contract, an agency must make a positive finding that the bidder is responsible as defined in federal procurement regulations. Although this determination primarily focuses on prior contract performance and the financial integrity of a prospective contractor, the agency must also make an affirmative determination that the company is qualified to receive contract awards under applicable laws and regulations, which could include the Occupational Safety and Health Act. Similarly, federal agencies can debar or suspend companies for any "cause of so serious or compelling a nature that it affects the present responsibility of a Government contractor or subcontractor." Even though federal agencies may deny the awarding of contracts or debar contractors for many different reasons, it appears this authority is rarely exercised for safety and health violations. Aside from agencies' inherent interest in finding or keeping the contractor that is either the lowest bidder or has a history of providing goods and services to the agency, awarding and debarring officials rarely exercise this authority in part because they lack information as to which contractors are OSHA violators. GSA officials, including members of the Interagency Committee on Debarment and Suspension, which monitors the implementation of debarment and suspension procedures, told us that agency awarding and debarring staff do not routinely receive information about contractors who have violated OSHA regulations. GSA officials also said that safety and health information is not routinely collected by agency contract officers when they conduct their pre-award survey to determine whether a prospective contractor is responsible.

Members of the Interagency Committee told us that the prospect of being debarred or suspended can provide an impetus for a contractor to undertake remedial measures to improve workplace safety and health conditions. Agency debarment and suspension staff could work with contractors, perhaps with technical support provided by OSHA, to help bring a contractor into compliance, thereby avoiding disruption to the contracting arrangement. GSA officials and Interagency Committee members stressed the importance of maintaining agency discretion in contracting decisions and urged that debarment or suspension for safety and health violations not be mandated.

Although our analysis did not include companies receiving other forms of federal financial assistance, such as grants and loans, GSA officials and Interagency Committee members said that safety and health violations should also be considered in debarment or suspension decisions for these companies, since these forms of assistance amount to large sums of federal dollars. Federal assistance in the form of grants alone accounted for $225 billion in fiscal year 1995. State and local governments, through which federal grants are distributed, may contract with companies to carry out a wide range of work, including welfare and health care services as well as highway, airport, mass transit, and sewage treatment plant construction. GSA officials and Interagency Committee members said that workers employed by these companies should also be protected from workplace safety and health hazards.
However, as is the case with direct federal contracts, agency officials often lack information as to which companies receiving these other forms of federal financial assistance also have OSHA violations.

Under the Contract Work Hours and Safety Standards Act (CWHSSA), OSHA also has authority to debar companies specifically for safety and health violations. However, OSHA has not exercised this authority in the past, and it appears unlikely to do so more often in the future. Although agency officials said they consider debarment when particularly serious violations are committed by a company they can identify as a federal contractor, they prefer to rely on remedies available under the Occupational Safety and Health Act because litigation costs are lower and they can obtain quicker abatement of the hazard.

Information can be made available to increase the likelihood that agency officials will make decisions regarding contracts and other forms of federal financial assistance that might improve contractor compliance with OSHA regulations. However, policies and procedures regarding the exchange of information between OSHA and contracting agencies need to be developed. In developing these policies and procedures, a number of issues would need to be resolved. These include the following:

Identifying the inspection information regarding violations that OSHA could provide that would facilitate action by agency awarding and debarring officials. Given the large number of federal contractors violating OSHA regulations, there is a danger that excessive or irrelevant information would be generated and transmitted, resulting in a potential administrative burden on both OSHA and awarding and debarring officials within the agencies. OSHA could avoid this problem by developing criteria identifying those federal contractors with exceptionally poor safety and health records and transmitting information only on those companies to awarding and debarring officials. OSHA and the contracting agencies would also have to decide the type and level of detail of the information that should be provided regarding these violators and the nature of their violations.

Developing the logistics of how OSHA, GSA, the Interagency Committee, and agency awarding and debarring officials could share information. It would need to be determined whether violation information should be provided immediately after any inspection of a contractor indicating exceptionally poor safety and health practices or at regular intervals for all companies whose inspections over a certain period meet these criteria. OSHA might choose to work with GSA to determine which of its violators are federal contractors, or it might leave this determination to the Interagency Committee or to awarding and debarring officials within the agencies. OSHA might also provide information on violators directly to the individual agencies with which the violators contract. Another alternative would be to have either GSA or the Interagency Committee, depending on their relative levels of resources, act as a clearinghouse of safety and health compliance information for awarding and debarring officials at all the agencies. As a clearinghouse of compliance information, GSA or the Interagency Committee would need to develop a strategy for disseminating this information about companies to the appropriate contract awarding and debarring officials.
If safety and health violations are also going to be considered in debarment or suspension decisions for companies receiving other forms of federal financial assistance (for example, grants and loans), this dissemination strategy would need to include those agency officials who manage these other assistance programs. Finally, regular communication between OSHA and agency debarring officials regarding violations by federal contractors might be facilitated if OSHA had a representative participate in the monthly meetings of the Interagency Committee.

Enabling contracting agencies to interpret and use this information effectively. OSHA and agency contract officers could explore how agencies might use the awarding of federal contracts as a vehicle to encourage companies to take more affirmative steps (for example, developing a worksite safety and health program or participating in voluntary compliance efforts) to improve workplace safety and health. GSA officials and Interagency Committee members stressed the importance of agency discretion in contracting decisions and urged that debarment or suspension for safety and health violations not be mandated. While preserving this discretion, agencies could work with OSHA to develop guidance on how to interpret the safety and health records of federal contractors to determine whether a contracting action is warranted and, if so, what type of action. Such guidance, for example, could help agency debarring officials identify those instances where it might be more appropriate to work with a contractor to facilitate compliance instead of debarring or suspending that contractor. Such situations might vary across agencies and contract types. In addition, OSHA and the contracting agencies might want to determine the kind of technical support, if any, OSHA could provide to help agencies in their efforts to bring a contractor with a poor safety and health record into compliance.

Helping contracting agencies determine how closely tied to federal contract dollars the worksite with violations must be to warrant taking an adverse contract action. Sometimes a safety and health problem might be localized or confined to a specific worksite. In such cases, taking a contract action against the federal contractor might be appropriate only if that particular worksite receives contract dollars. On the other hand, a systemic corporatewide compliance problem may be indicated if there are violations across many worksites owned by or associated with the same federal contractor. In such cases, a contracting action against the company as a whole may be appropriate. However, if the operations of a large company are very diverse, compliance efforts for a safety and health problem in one part of the company might have little relevance to other parts of the company, where safety problems, if there are any, might be very different.

OSHA might improve contractors' safety and health compliance by giving inspection priority to those high-hazard workplaces operated by companies receiving federal contracts. For example, a company might be more willing to abate hazards and pay penalties quickly if it is made aware that contracting actions could be taken against it. OSHA has recently launched an initiative to improve its inspection targeting system so that, instead of treating all employers in a certain industry alike, OSHA will focus its resources on specific worksites where employers ignore safety and health regulations and put their employees at risk.
The rationale is to increase the likelihood that OSHA's limited resources will be spent inspecting worksites more likely to have hazards. Following the principle of placing greater responsibility on federal contractors for compliance with laws and regulations, OSHA could consider adding the presence of contract dollars to its criteria for targeting inspections. If a company's worksite, for example, were already identified by OSHA's targeting system because it met hazard-related criteria, OSHA might want to make sure to inspect that worksite if the company also received federal contracts. In considering whether to do so, OSHA would have to address several issues:

The appropriateness, from a policy standpoint, of including federal contract status among the criteria it considers in prioritizing inspections.

The amount of emphasis to give to this criterion and how to combine it with others. (OSHA might want to consider contract status only after the worksite already met OSHA's hazard-related criteria because of, for example, a high number of injuries or illnesses or a history of violations.)

How closely tied to federal contract dollars the worksite must be to warrant an inspection because it is a federal contractor. (For example, is it necessary that federal dollars be awarded to the worksite itself, or only that the company that owns the worksite receive federal contract dollars?)

The federal government awarded $38 billion in federal contracts during fiscal year 1994 to at least 261 corporate parent companies that owned worksites where there were safety and health hazards. Although OSHA was unaware of these companies' contractor status, it identified their compliance problems through its ongoing enforcement efforts and maintains information regarding the nature of the violations, the fatalities and injuries associated with the violations, and the penalties assessed. Many federal agencies across the government already have the authority to debar or suspend federal contractors for the violation of safety and health regulations. The prospect of debarment or suspension can also provide an impetus for a contractor to undertake remedial measures to improve workplace conditions. Agencies could use the awarding of federal contracts as a vehicle to encourage companies to take more affirmative steps (for example, developing a worksite safety and health program or participating in voluntary compliance efforts like Maine 200) to improve workplace safety and health. Given the complexity of federal procurement regulations and processes and individual agencies' familiarity with the specific companies and contracts involved, the agencies are probably in a better position than OSHA to make each contracting decision. However, agency awarding and debarring officials have not taken actions against contractors for safety and health violations, at least partly because they did not have the information to determine which federal contractors had violated safety and health regulations, even when contractors had been assessed high penalties for willful or repeat violations or had been cited under OSHA's "egregious" policy. The considerable number of federal contractors with OSHA violations, even in the single year we examined, suggests that policies and procedures should be developed to facilitate the exchange of information between OSHA and agency awarding and debarring officials to help improve federal contractor compliance.
Also, contractors might be more attentive to their safety and health practices if OSHA were to give inspection priority to those high-hazard workplaces operated by federal contractors.

We recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to

develop and implement policies and procedures, in consultation with GSA and the Interagency Committee on Debarment and Suspension, for how the safety and health records of federal contractors could be shared to better inform agency awarding and debarring officials in their decisions regarding contracts in order to improve workplace safety and health;

develop policies and procedures regarding whether and how OSHA will consider a company's status as a federal contractor in setting priorities for inspecting worksites; and

assess the appropriateness of extending these policies and procedures to cover companies receiving other forms of federal financial assistance, such as grants and loans.

We obtained comments on a draft of this report from the Department of Labor, GSA, and the Interagency Committee on Debarment and Suspension. Labor noted that our findings reflected positively on OSHA's enforcement efforts because the companies we identified as receiving federal contracts were already being cited for violations at some worksites under OSHA's existing compliance program. Labor stated that federal contractors, like other employers, have a responsibility to provide employees with a safe and healthful workplace. Labor also agreed that the exchange of information between OSHA and GSA could make additional compliance strategies available to OSHA at the worksites of federal contractors and could be consistent with OSHA's effort to reinvent its enforcement policies and procedures. However, Labor officials suggested that our recommendation regarding the exchange of information on inspections and contracts be directed to GSA because they believe that GSA is in a better position to affect agency contracting actions.

Labor officials expressed greater concern about our recommendation to use federal contractor status as one criterion in OSHA's prioritizing of inspection resources. They said that the report does not provide evidence that federal contractors have a worse compliance record than other employers. They added that because OSHA's inspection targeting program, consistent with the administration's National Performance Review (NPR), is intended to focus OSHA's limited enforcement resources on worksites where the greatest safety and health hazards exist, introducing the criterion of whether a company receives federal contracts could divert resources toward worksites with less serious hazards.

Although coordination among all parties is necessary, we directed our recommendations to Labor because we believe that OSHA is the appropriate starting point for the initiation and development of any information exchange on federal contracts and OSHA inspections. OSHA is the primary federal agency responsible for workplace safety and health, and it maintains detailed information on the inspections conducted throughout the nation, including the nature and severity of the violations detected. In contrast, although GSA maintains information on federal contracts, the contracting function itself is diffused among many individual agencies and departments.
Therefore, our recommendations recognize GSA as instrumental in facilitating the sharing of information between OSHA, which maintains the safety and health compliance information, and agency awarding and debarring officials, who can use this information in their contracting decisions. Regarding Labor's concerns about OSHA's allocation of its inspection resources, we acknowledge that including federal contractor status as an additional criterion in OSHA's prioritization of inspections raises several issues, including its appropriateness from a policy standpoint and how such a criterion would be operationalized. However, we view the use of federal contractor status as a criterion to be implemented in addition to, and not in lieu of, other criteria identifying high-hazard workplaces. We also recognize that Labor, upon conclusion of its review, may determine that federal contractor status should play only a minor role in OSHA's prioritization of resources. In addition, given our requesters' interests and the formidable data limitations facing such an analysis, we did not seek to assess federal contractors' overall compliance record as compared with that of other employers. Instead, we sought to determine whether companies receiving federal contracts had also been assessed significant proposed penalties for safety and health violations. Our finding that 16 percent of all the significant-penalty inspections closed in fiscal year 1994 involved federal contractors suggests that the inclusion of contractor status as a priority criterion could enhance OSHA's ability to ensure safe and healthful working conditions for U.S. workers.

Officials from GSA and members of the Interagency Committee on Debarment and Suspension also generally agreed with the report's findings and concurred that information on OSHA inspections of firms receiving federal contracts would be useful to agency awarding and debarring officials in their decisions. Members of the Interagency Committee also suggested that having an OSHA representative participate in the monthly meetings of the Interagency Committee would be very useful to the entire information-sharing process. Although GSA officials and Interagency Committee members believe that the recommendation regarding the exchange of information has merit, they said that the report appears to confuse the roles that OSHA, GSA, the Interagency Committee, and agency awarding and debarring officials would play in its implementation. These officials believe that the report places too much responsibility for the safety and health compliance of federal contractors on GSA and the Interagency Committee. On such matters, they believe that only OSHA has sufficient expertise to implement a health and safety compliance program. They stated that officials involved in awarding contracts or debarring contractors have little technical expertise in OSHA compliance matters and would not be knowledgeable about the remedial measures that, in the OSHA context, would be sufficient. In addition, although GSA officials and Interagency Committee members agreed that they can help disseminate OSHA inspection information, they have few resources to perform more elaborate tasks, such as the dissemination of detailed OSHA compliance information. Interagency Committee members, in particular, said that the committee lacks the staff and administrative support necessary for it to serve as a clearinghouse of OSHA contractor compliance information.
Interagency Committee members also stated that the committee's authority is limited to coordinating the assignment of lead agency responsibility when more than one agency has an interest in a particular contractor; the committee itself cannot assign this responsibility. Finally, because the Interagency Committee is composed only of debarring officials, it has no direct link to awarding officials, which could limit its role in facilitating the flow of violation information to agency contract officers. GSA officials and Interagency Committee members also pointed out that debarment and suspension actions, because they can have a serious impact on a contractor's business life, can provide an impetus for a contractor to take remedial measures. However, they stated that it would be inappropriate, and would run counter to procurement regulations, to use debarment or suspension to threaten a contractor, even one with an egregious safety record. To further clarify the roles of OSHA and the other parties on this matter, GSA officials suggested that an appropriate sequence for implementing this recommendation would be for OSHA to establish with the contractor the appropriate compliance program and then provide information on the case to the contracting agency's debarring official for review of the contractor's overall responsibility.

We did not specify the precise roles that OSHA, GSA, and other parties should play in facilitating the exchange of information because we believed it best to leave the flexibility to ensure that any arrangement developed would minimize the burden on all parties. However, we agree with GSA officials and members of the Interagency Committee that OSHA should be the primary agency concerned with health and safety regulatory compliance. We also believe that GSA and the Interagency Committee are better positioned than OSHA to identify which violators receive federal contracts and to help disseminate information on OSHA inspections to federal awarding and debarring officials throughout the government. Awarding and debarring officials within the individual agencies, after reviewing OSHA inspection information, would then be able to make more informed decisions. Under such a procedure, agency discretion could be preserved so that awarding and debarring officials could provide the appropriate impetus for improvement to federal contractors while avoiding unnecessary procurement disruptions. We also note that, in all cases, OSHA would not be precluded from using its own authority to cite employers for violations, monitor abatement efforts, or take other available actions.

We also agree that debarment or suspension should not be used as a means to punish individual contractors, and the report does not recommend this. Instead, agencies could use OSHA inspection information to ensure that they comply with the requirement in federal procurement regulations that agencies contract only with firms that are responsible, that is, in compliance with applicable laws and regulations, including the Occupational Safety and Health Act. As GSA officials note, the prospect of debarment or suspension because of corporate irresponsibility can provide the impetus for a contractor to undertake remedial measures to eliminate workplace hazards that could cause employees injury or illness, thus improving the protection afforded to them. Labor, GSA, and the Interagency Committee also provided us with technical suggestions, which we incorporated where appropriate in the final report.
As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Labor, the Assistant Secretary for Occupational Safety and Health, the Administrator of GSA, the Chairman of the Interagency Committee on Debarment and Suspension, the Director of the Office of Management and Budget, relevant congressional committees, and interested parties. We also will make copies available to others on request. If you or your staff have any questions concerning this report, please call Charlie Jeszeck, Assistant Director, at (202) 512-7036 or Jackie Baker Werth, Project Manager, at (202) 512-7070.

We were asked to (1) determine how many companies receiving federal contracts have also been assessed penalties for violations of occupational safety and health regulations, (2) describe the characteristics of these contractors and their contracts, (3) describe the kinds of violations for which these contractors were cited, and (4) identify ways to improve contractor compliance with workplace safety and health requirements. The scope of our work included the following:

Matching violation data from OSHA's database of inspection results, the Integrated Management Information System (IMIS), with the Federal Procurement Data System (FPDS), a database of federal contractors maintained by GSA, for fiscal year 1994. We restricted our analysis to those OSHA inspections closed in fiscal year 1994 in which the proposed penalty assessed by the OSHA compliance officer was what we defined as significant ($15,000 or more), regardless of the amount of the actual penalty recorded when the inspection was closed.

Verifying by telephone that the company listed in IMIS was the same company as (or was owned by the same parent company as) the company listed in FPDS.

Analyzing FPDS for the dollar value of the fiscal year 1994 contracts received by the violator or its parent company and for the federal agencies that awarded the contracts.

Analyzing IMIS for characteristics of the violations and the worksites inspected.

Meeting with compliance staff at OSHA and with federal contracting officials at GSA and other agency experts in procurement.

The IMIS database includes over 2 million inspections from 1972 to 1995; over 100,000 of these were closed in fiscal year 1994 alone. IMIS includes such information as whether an inspection was performed by OSHA or a state-operated program, penalty amounts (proposed and actual), the type of violation (for example, serious, willful, or repeat), the standards violated, whether fatalities or injuries occurred, and abatement information. In addition, IMIS includes some data on the worksite inspected, including the industry it is engaged in and the number of workers. IMIS is structured so that key inspection data (with a unique identifier, referred to as the activity number) are contained in the stem and more detailed data in segments. The violation segment, for example, includes information on the specific violations for which the worksite was cited and the types of violations committed (serious, willful, and repeat). Another segment, referred to as the accident segment, includes details on, among other things, the number of workers injured and the degree of injury. In capturing violation data, violations are often grouped together when they are related.
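To make this layout concrete, the following minimal sketch (in Python) models the stem-and-segment relationship just described. It is illustrative only: the field names are our assumptions for exposition, not the actual IMIS record definitions, which are far more extensive.

    # Illustrative model of the IMIS stem-and-segment layout described above.
    # All field names are assumptions for exposition, not the actual IMIS schema.
    from dataclasses import dataclass, field

    @dataclass
    class Violation:
        """One record from the violation segment."""
        standard: str                # e.g., "lockout/tagout"
        violation_type: str          # "serious", "willful", "repeat", or "unclassified"
        proposed_penalty: float
        group_id: int | None = None  # related violations are often grouped

    @dataclass
    class Accident:
        """One record from the accident segment."""
        workers_injured: int
        degree_of_injury: str

    @dataclass
    class Inspection:
        """The stem: key inspection data keyed by the activity number."""
        activity_number: int         # unique identifier for the inspection
        fiscal_year_closed: int
        violations: list[Violation] = field(default_factory=list)
        accidents: list[Accident] = field(default_factory=list)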
As an example of such grouping, detailed violations in which the employer was cited for inadequate locks to secure machines and for failure to perform periodic inspection of machinery could be grouped together under the primary violation of lockout/tagout. Lockout/tagout refers to a number of requirements for the maintenance of machines and equipment to protect against their starting up unexpectedly. Accordingly, when reporting actual penalties, we accumulated only those penalties attached to the primary member of a group of violations (including penalties for individual violations only if they were not members of a group).

In fiscal year 1994 alone, FPDS tracked information on 179,977 contracts and 477,648 contract actions, totaling $176 billion. FPDS contains a variety of information, including the contractor's name and location, contract amounts awarded, the agency the contract is with, the principal place of contract performance, and the products and services provided.

To determine which federal contractors were OSHA violators, we matched IMIS with FPDS. We chose to restrict our matching process to inspections resulting in proposed penalties of at least $15,000 (regardless of the amount of the actual penalty recorded when the inspection was closed). The proposed penalty is the penalty issued by OSHA in the original citation and reflects the compliance officer's judgment of the nature and severity of violations. We restricted the matching process in this way so that we would include in our analysis only those companies whose safety and health violations resulted in proposed penalties that we defined as significant, and so that a manual matching procedure would be feasible. A manual process was necessitated by missing corporate identification codes in IMIS for many of the establishments inspected, which precluded an automated matching procedure: although IMIS includes a field for a company's Dun & Bradstreet number, at the time we initiated this review the number was provided in only 20 percent of the 72,950 inspections closed in fiscal year 1994. Only by limiting the size of one of the two databases, IMIS in this case, was a manual matching process possible. Discussions with OSHA officials, including IMIS specialists, helped us identify ways to limit the size of IMIS. We decided to use only one fiscal year of inspection data (1994), and only cases that had already closed, so that we could be certain that the actual penalty and disposition of any inspection would not change. We also applied several other conditions, including that at least one violation was cited. A proposed penalty is a compliance officer's judgment of the nature and severity of violations and, according to OSHA officials, is a better reflection of the seriousness of the citations than the actual penalty, because actual penalties are a product of other factors, such as negotiations between OSHA and the company to encourage quicker abatement of workplace hazards. The criterion of $15,000 or more in proposed penalties resulted in a total of 2,113 inspections. This, we determined, was a small enough number of inspections to feasibly match against the larger FPDS. These 2,113 inspections represent only 3 percent of all inspections closed in fiscal year 1994. We refer to these inspections as those in which the company was assessed significant proposed penalties for OSHA violations.
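Expressed in terms of the illustrative classes sketched above, the screening conditions (closed in fiscal year 1994, at least one violation cited, and a total proposed penalty of $15,000 or more) amount to a simple filter. The following is a sketch of the logic, not the tooling actually used in the review.

    # Screen an IMIS extract down to the inspections analyzed in this review:
    # closed in fiscal year 1994, at least one violation cited, and a total
    # proposed penalty of $15,000 or more.
    SIGNIFICANT_PENALTY = 15_000

    def total_proposed_penalty(insp: Inspection) -> float:
        """Sum proposed penalties across an inspection's violations."""
        return sum(v.proposed_penalty for v in insp.violations)

    def significant_fy94_inspections(imis: list[Inspection]) -> list[Inspection]:
        """Apply the three screening conditions described in the text."""
        return [
            insp
            for insp in imis
            if insp.fiscal_year_closed == 1994
            and insp.violations  # at least one violation cited
            and total_proposed_penalty(insp) >= SIGNIFICANT_PENALTY
        ]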
We manually compared each company name among the selected 2,113 inspections in IMIS with the larger FPDS, identifying those company names that were identical or nearly identical. Because companies may split up, merge, subcontract, operate subsidiaries, or change names, a company might have appeared under different names in IMIS and FPDS and thereby escaped our detection. Through manual matching, we identified 499 inspections (nearly one-fourth of the 2,113 inspections) in which the company names were identical or nearly identical. We eliminated some of these 499 inspections either because our telephone verification revealed that the company listed in IMIS was not the same company as that listed in FPDS or because we were unable to verify the match. The result was a total of 345 inspections involving 261 federal contractors; the number of inspections exceeds the number of contractors because some of the federal contractors owned more than one inspected worksite. These 345 inspections represent 16 percent of the 2,113 inspections closed in fiscal year 1994 in which a significant proposed penalty was assessed for OSHA violations. How cases were eliminated is described below. (See fig. I.1, which summarizes the disposition of the 2,113 inspections: 4 percent were found not to be federal contractors through telephone verification, 3 percent could not be verified, and 345 inspections were verified as involving violators receiving federal contracts.)

To ensure that a company listed in IMIS was the same company as (or was owned by the same parent company as) the company listed in FPDS, we telephoned the worksite where the OSHA violations occurred. We verified that the company name and worksite locations, identified in both databases, referred to the same company or were owned by the same parent company. If there was more than one worksite under the same or an identical name in IMIS (indicating that violations may have occurred at different worksites owned by or associated with the same parent company), we verified that all these worksites were owned by the parent company. We also asked the contact to provide the parent company name or, if a parent company name was included in FPDS, to verify that name. We eliminated from our matched companies those for which the telephone call revealed that the company listed in IMIS was not the same company as that listed in FPDS (83 worksites, representing 4 percent of the 2,113 inspections). We also eliminated companies (71 worksites, representing 3 percent of the 2,113 inspections) because we were unable to verify the match for a variety of reasons: some companies had gone out of business or relocated, or the location information in IMIS or FPDS was either incomplete or inaccurate. We also eliminated worksites when we were told they were organized as franchises and the parent company exercised little oversight over the franchised worksites. The greatest portion of worksites that we could not verify were engaged in construction (52 percent). We believe that because worksites in this industry are often temporary, existing only for the duration of a construction project, the employer, in our telephone contacts, could not always recall whether such a worksite existed when the inspection was conducted. The 345 inspections of worksites verified as being owned by federal contractors include 65 that we decided did not require telephone verification because the company names and worksite locations in IMIS and FPDS matched exactly.

We analyzed FPDS for the dollar value of the fiscal year 1994 contracts received by the corporate parent companies of the violators. Therefore, when we refer to a federal contractor in this report, we are referring to the parent company.
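Although the name comparison in this review was performed by hand, the screen for identical or nearly identical names can be approximated in code. In the sketch below, the normalization rule and the similarity cutoff are assumptions chosen for illustration; in the review itself, every candidate pair was then verified by telephone.

    # Approximate the manual screen for identical or nearly identical company
    # names between IMIS and FPDS. The cutoff of 0.9 is an illustrative choice.
    from difflib import SequenceMatcher

    def normalize(name: str) -> str:
        """Crude normalization: lowercase and keep only letters and digits."""
        return "".join(ch for ch in name.lower() if ch.isalnum())

    def nearly_identical(imis_name: str, fpds_name: str, cutoff: float = 0.9) -> bool:
        """Flag a candidate IMIS/FPDS pair for follow-up telephone verification."""
        a, b = normalize(imis_name), normalize(fpds_name)
        return a == b or SequenceMatcher(None, a, b).ratio() >= cutoff

    # Punctuation and capitalization differences alone do not defeat the screen.
    print(nearly_identical("A.A.R. Engine Component Services",
                           "AAR ENGINE COMPONENT SERVICES"))  # True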
For the 345 matched inspections, we used only the variations of the company name and worksite locations that were verified by telephone to retrieve fiscal year 1994 contract information from FPDS. This was a conservative approach to ensure that we were not attributing more contract dollars to a company than were verified. We found it necessary to report federal contract award data for violators by parent company for several reasons. First, FPDS data did not enable us to confirm whether a company's contract activity occurred at the same worksite where the company was cited for safety and health violations. FPDS data on the principal place of performance include city and state information but not a street address, which is needed to confirm a match at the worksite level. Also, the location that receives the largest dollar share of the contract is listed as the principal place of performance; moreover, if the place of performance cannot be determined, the contractor's billing location is used instead. Second, it would have been difficult to get companies to confirm whether they conduct federal contract work at the particular worksite where the violations occurred. This information might not be readily available or might be considered confidential or proprietary. Third, the nature of some contract work is so dispersed (for example, interstate transportation of freight), with contract activity of some form occurring across multiple worksites, that it would have been difficult for even the company to verify exactly which activities at which worksites were supported by federal contracts. Even when we focused our analysis on the agency from which most contract dollars were awarded to a particular company, there were often many corresponding places of performance and products and services provided to that agency.

As noted, the 345 inspections involved 261 federal contractors because some federal contractors owned more than one inspected worksite. For each of the 261 federal contractors, we checked to ensure that any corporate identification code was not shared by another federal contractor we had verified as a violator. If a corporate identification code was shared, we made sure that we had confirmed, during our telephone verifications, that the worksites were owned by the same federal contractor, to preclude double counting contract awards. Using FPDS, we examined total contract dollars awarded by each federal agency. We also ran a distribution of contract dollars to determine the number of federal contractors by the size of contract awards. We did not determine the extent to which OSHA violators were federal subcontractors (companies that receive a portion of a contract award through a primary federal contractor) because we could not identify subcontractors.

We analyzed IMIS for characteristics of the violations cited in these inspections. We ran distributions on a number of data fields, tabulating the data by the 345 matched inspections where possible, or by the 5,121 violations associated with these inspections if the data did not lend themselves to presentation by inspection. Even though all of these 345 inspections were closed in fiscal year 1994, many may have been conducted years before; some inspections can take many years to resolve. Only 20 percent of the 345 inspections were opened and closed within fiscal year 1994; 45 percent were opened in fiscal year 1993, and 35 percent were opened in fiscal years 1986 through 1992.
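The check for shared corporate identification codes described above is, in essence, a grouping operation. A minimal sketch follows; it assumes each verified violator is represented as a (contractor name, corporate identification code) pair, which is our simplification rather than the actual working files.

    # Find corporate identification codes shared by more than one verified
    # contractor name; these are the cases rechecked against the telephone
    # verification records before contract awards are summed.
    from collections import defaultdict

    def shared_id_codes(contractors: list[tuple[str, str | None]]) -> dict[str, list[str]]:
        """Return codes that appear under more than one contractor name."""
        by_code: dict[str, list[str]] = defaultdict(list)
        for name, code in contractors:
            if code is not None:  # codes were missing for some worksites
                by_code[code].append(name)
        return {code: names for code, names in by_code.items() if len(names) > 1}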
Because many inspections were open for years before they closed, a company may not have been receiving federal contracts at the same time that it violated the act. Another limitation of this review is that companies may have changed their safety and health practices, particularly if a long time elapsed between the opening and closing of an inspection. This means that worksites with poor safety and health practices when the inspection was opened may have improved their practices by the time the inspection was closed, as a result of the inspection or other factors.

Employee complaints were the most common reason these 345 inspections were conducted (41 percent). Programmed inspections, which include inspections in construction and other high-hazard industries, were the next most common reason (27 percent). Fatalities or catastrophes (referring to at least one fatality or the hospitalization of at least three workers) led to 13 percent of these inspections. Referrals from any source, including media reports, led to 9 percent. The remaining 9 percent included follow-up inspections to determine whether previously cited violations had been corrected and monitoring inspections to ensure that hazards were being corrected whenever a long period of time was needed to come into compliance. Although OSHA's first priority for conducting an inspection is an alleged imminent danger situation, none of our 345 matched inspections was conducted for this reason. (See fig. I.2.)

We discovered some inconsistencies when comparing different sources of accident data. The primary source of accident data is the IMIS accident segment, which provides data on, among other information, the number of workers killed or injured and the degree of injury. However, investigation summaries (accident abstracts submitted by OSHA compliance officers) sometimes referred to fatalities or injuries not recorded in the accident segment. In addition, some violations were coded in a special manner to indicate that they were related to a fatality or catastrophe, yet there was no corresponding accident segment or investigation summary. We reconciled these inconsistencies by conducting follow-up telephone calls to the OSHA area offices that had conducted the inspections. In many of these inspections, a fatality or injury had occurred. The results of these follow-up calls are reflected in the number of fatalities and injuries and in the descriptions of the accidents, which occurred at the worksites of 50 federal contractors.

We performed a special tabulation for types of violations. Because the types of violations (serious, willful, repeat, and unclassified) are captured not by inspection but by violation, our tabulation involved developing counts by inspection when there was at least one violation of a particular type. We also performed a special tabulation to determine how many inspections involved additional penalties assessed under OSHA's "egregious" policy and the specific standards violated. We also ran data for all worksites to determine whether a company had been penalized for failing to abate a hazard. We ran distributions of penalties, both total proposed penalties and total actual penalties, for our 345 inspections. To capture the degree to which proposed penalties were reduced, we ran a distribution of the percentage difference between each proposed and actual penalty.
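Two of these tabulations lend themselves to short sketches, again using the illustrative classes introduced earlier: counting inspections that contain at least one violation of a given type, and computing the percentage by which a proposed penalty was reduced.

    # Tabulate by inspection (not by violation): an inspection is counted once
    # for each violation type present in it. Also compute penalty reductions;
    # proposed penalties in this sample are at least $15,000, hence nonzero.
    from collections import Counter

    def inspections_by_violation_type(inspections: list[Inspection]) -> Counter:
        counts: Counter = Counter()
        for insp in inspections:
            for vtype in {v.violation_type for v in insp.violations}:
                counts[vtype] += 1
        return counts

    def percent_reduction(proposed: float, actual: float) -> float:
        """Percentage by which the proposed penalty was reduced at closing."""
        return 100.0 * (proposed - actual) / proposed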
Finally, we ran distributions by the standards violated, focusing on those standards accounting for the greatest number of violations in these 345 inspections. We chose not to report the disposition of inspections, that is, the level of review at which a contested inspection was resolved: formal settlement agreement, administrative law judge decision, or OSHRC commissioners' decision. After requesting copies of decisions from OSHRC on those inspections in which violations were coded as being resolved by its commissioners, we found that many of these cases had actually been resolved before reaching this level of review, by an administrative law judge's decision. We found disposition coding errors of this nature among inspections conducted by both federal OSHA and state-operated programs. However, we did review all administrative law judge decisions in the federal OSHA cases to make sure that the types of violations reported and the actual penalties the company was assessed accurately reflected the review by the administrative law judge.

We also used IMIS to characterize the worksite where the inspection occurred. OSHA staff told us that the more reliable employee count in IMIS was the number of employees at the worksite. We also ran a distribution on the primary industry in which each worksite was engaged, relying on standard industrial classification (SIC) codes captured for each worksite. We used more detailed codes within the SIC classification system when reporting on individual worksites. To describe the federal contractors (or parent companies) that own the worksites inspected, we gathered number-of-employees and annual sales data for selected companies: those assessed significant proposed penalties in more than one inspection closed in fiscal year 1994.

OSHA staff helped us determine whether some of the worksites owned by federal contractors that had been assessed significant proposed penalties had a history of violations. OSHA staff, using corporate identification codes for the worksites inspected, performed a search of IMIS to retrieve prior-year inspections at these same worksites. Because of missing corporate identification numbers, OSHA was able to retrieve prior-year inspection information on only about one-half (197) of the worksites. We ran a distribution by proposed penalty to determine whether some of these prior inspections resulted in significant proposed penalties of $15,000 or more. We also asked OSHA staff to review our list of 261 federal contractors that own worksites with safety and health violations to determine whether any of their 345 inspections were criminally prosecuted by OSHA or, conversely, whether any of these federal contractors were participants in OSHA's Voluntary Protection Programs (VPP) because of exemplary safety and health practices. While OSHA staff determined that none of the 345 inspections was criminally prosecuted, they reported to us that some of these federal contractors did have worksites (other than those assessed significant proposed penalties for safety and health violations) that were VPP participants.

To explore ways to improve federal contractors' compliance with OSHA requirements, we met with OSHA officials in the Directorate of Compliance Programs, because of their enforcement responsibilities, and with Labor's Office of the Solicitor. We also met with contracting officials at GSA and the Interagency Committee on Debarment and Suspension, which coordinates suspension and debarment activities governmentwide.
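The prior-history retrieval described above can also be sketched, under the assumption that the full IMIS extract is indexed by corporate identification code; worksites with missing codes (about half, as noted) simply yield no history.

    # Retrieve prior-year inspections of the same worksites, keyed by corporate
    # identification code. The dictionary index is an assumption for exposition.
    def prior_inspections(
        worksite_codes: list[str | None],
        imis_by_code: dict[str, list[Inspection]],
        cutoff_year: int = 1994,
    ) -> dict[str, list[Inspection]]:
        """Map each known code to inspections closed before the cutoff year."""
        history: dict[str, list[Inspection]] = {}
        for code in worksite_codes:
            if code is None:
                continue  # missing corporate identification number
            history[code] = [
                insp
                for insp in imis_by_code.get(code, [])
                if insp.fiscal_year_closed < cutoff_year
            ]
        return history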
In addition, we met with computer and technical staff in OSHA headquarters as well as officials in its San Francisco regional office. We conducted our work from July 1995 to July 1996 in accordance with generally accepted government auditing standards.

Table II.1 provides key characteristics of the inspections and contracts of the 261 federal contractors assessed significant proposed penalties for violations of safety and health regulations. Our definition of a significant penalty is a proposed penalty of $15,000 or more regardless of the size of the actual penalty recorded when the inspection was closed (either because the employer accepted the citation or a contested citation was resolved). The proposed penalty is the penalty issued by OSHA in the original citation and reflects the compliance officer's judgment of the nature and severity of violations, while the actual penalty may be the product of other factors, such as negotiations between OSHA and the company to encourage quicker abatement of workplace hazards. Because some of these 261 federal contractors own more than one inspected worksite, a total of 345 inspections appear in the table. In reporting fiscal year 1994 contract dollars, we are referring to the federal contractor (or parent company), which is identified if it is different from the name of the worksite where the violations occurred. The violations may have occurred at only one worksite or facility, possibly within a division or subsidiary, of the federal contractor and not necessarily where the contract activity was performed. Inspection information includes the location of the worksite inspected and the activity number of the inspection that is assigned in IMIS. We have provided both the proposed and actual penalties. We have reported those standards violated that are associated with the highest actual penalty as well as standards that reportedly contributed to a fatality or injury when different from the former. In summarizing the fatality or injury, we referred to investigation summaries submitted by OSHA compliance officers or follow-up calls to local OSHA offices when other data in IMIS indicated an accident had occurred but no summary was available. To provide selected characteristics of violations, we reported whether violations included at least one violation that was willful, repeat, or serious and whether the company was assessed penalties under OSHA's "egregious" policy or for failing to abate a hazard. If a proposed penalty of $100,000 or more was assessed for safety and health violations (which was the case in 26 of these inspections), an asterisk appears by the activity number of the inspection. If an inspection was conducted by a state-operated safety and health program (which was the case in 71 of these inspections), a special symbol (=) appears by the activity number of the inspection.

Table II.1: Characteristics of the Inspections and Contracts of 261 Federal Contractors

Each entry lists the worksite (with the name of the federal contractor, if different, and total contract dollars awarded), followed by the proposed penalty with the actual penalty in parentheses, the standards violated, a description of any fatality or injury, and selected violation characteristics, where these data were available.

A.H.A. General Construction ($1,180,000 in contracts): $22,050 ($12,000). 5 workers were hospitalized due to a fall when the floor of a building, which was not shored or braced, collapsed during demolition.
A.A.R. Engine Component Services (A.A.R. Corp.; 46,224,000 in contracts): 33,000 (15,750).
A.B.B. Combustion Engineering Nuclear (A.B.B. A.S.E.A. Brown Boveri, Ltd.; 100,882,000 in contracts): 20,775 (15,900).
Acme Steel Co.
(Acme Metals, Inc.; 310,000 in contracts): 83,000 (62,250). Means of egress; hazardous materials; personal protective equipment; general environmental controls; lockout/tagout; toxic and hazardous substances. 1 worker died, another was hospitalized, from exposure to blast furnace gas due to equipment failure at a steel mill.
Alamo Transformer Supply Co. (2,000 in contracts): 30,000 (9,500).
Albany International Corp. (214,000 in contracts): 38,250 (25,000). 1 worker was hospitalized and died 4 days later after being crushed in a weaving loom at this textile plant.
Alcan Toyo America (Toyo Aluminum KK; 512,000 in contracts): 16,750 (9,000). General duty clause; personal protective equipment. 1 worker died from burns when a mixer containing aluminum powder exploded at this primary metals production plant.
Alder Construction Co. (18,811,000 in contracts): 20,500 (20,500). General safety and health provisions; fire protection and prevention; occupational health and environmental controls; personal protective and lifesaving equipment. 1 worker died due to a propane explosion when he entered a confined space, where the atmosphere had not been tested, with a lighted torch.
All American Poly Corp. (13,000 in contracts): 52,000 (20,000).
All-Steel, Inc. (B.T.R. PLC; 41,816,000 in contracts): 22,500 (10,000); 26,000 (13,000).
Allied Tube and Conduit (Tyco International, Ltd.; 17,697,000 in contracts): 22,800 (8,700). Occupational health and environmental control. 137,500 (40,000). Machinery and machine guarding. 3 workers lost fingers or parts of fingers, and a fourth worker fractured several fingers. Their fingers were either crushed or cut by machinery at this electric wiring facility. A fifth worker was hospitalized after being pinned between a forklift and a parking cart. Willful; repeat; serious. 20,700 (12,000).
Aluminum Co. of America (4,785,000 in contracts): 59,850 (26,910). Materials handling and storage; machinery and machine guarding. 15,000 (10,000). 1 worker died after he was crushed inside of a truck that he operated for this metal smelting and refining plant. The truck ran off the road and rolled upside down, in part because his vision was obstructed due to the truck's design.
Amcor, Inc. (C.R.H. PLC; 342,000 in contracts): 20,000 (11,000).
Amoco Gas Co. (Amoco Corp.; 400,000 in contracts): 37,500 (0). 9 workers were hospitalized for burns due to an explosion of a natural gas pipeline.
The Arbors at Fairmont (Arbor Health Care Co.; 948,000 in contracts): 22,950 (3,475).
Arco Alaska, Inc. (Atlantic Richfield Co.; 239,137,000 in contracts): 15,000 (7,500). Process safety management; standards of state-operated program. 1 worker was hospitalized and 4 other workers were injured due to a flash fire in a tank. Sparks from a welding or cutting operation ignited gases in a pipe that was inadequately purged at this petroleum and natural gas facility.
Asplundh Tree Expert Co. (1,284,000 in contracts): 64,950 (18,000). 2 workers were hospitalized due to contact with a light pole that hit high-voltage lines when they were reinstalling it for this power line construction company.
AT&T Communications (AT&T; 873,855,000 in contracts): 15,750 (4,875).
Avondale Industries, Inc.
(111,789,000 in contracts): 22,300 (9,189); 214,000 (50,000); 19,500 (1,000). Lockout/tagout; materials handling and storage; electrical; hazard communication standard. 35,000 (25,000).
Basler Electric Co. (373,000 in contracts): 30,650 (9,975). Occupational health and environmental control.
Bath Iron Works Corp. (Fulcrum II Limited Partnership; 797,629,000 in contracts): (580,000).
Batson-Cook Co. (797,000 in contracts): 33,500 (21,775).
Baxter Health Care Corp. (Baxter International, Inc.; 12,421,000 in contracts): 22,000 (22,000).
Bell Helicopter Textron, Inc. (Textron, Inc.; 1,201,959,000 in contracts): 20,000 (5,000). 1 worker was killed and another hospitalized due to overexposure to sulfuric acid in a confined space.
Bender Shipbuilding & Repair Co. (14,749,000 in contracts): 65,050 (33,023).
Berning Construction, Inc. (93,000 in contracts): 15,075 (7,575).
Bethlehem Steel Corp. (1,729,000 in contracts).
Biocraft Laboratories, Inc. (17,000 in contracts): 16,500 (10,000). Walking-working surfaces; means of egress; hazardous materials; lockout/tagout; machinery and machine guarding. 40,000 (14,750); 18,000 (6,500).
Blaze Construction Co. (2,208,000 in contracts): 45,200 (24,574); 67,500 (31,776).
Blue Bell Creameries USA, Inc. (103,000 in contracts): 16,200 (8,625).
Boeing (The Boeing Co.; 1,287,941,000 in contracts): 83,225 (43,100); 57,700 (26,200).
Boise Cascade Corp. (400,000 in contracts): 984,900 (476,100). Egregious; willful; serious. 602,700 (273,900). Egregious; willful; repeat; serious. 82,000 (7,000); 21,200 (9,200). General duty clause; walking-working surfaces; machinery and machine guarding; electrical.
Boston University (Trustees of Boston University; 7,667,000 in contracts): 18,925 (0).
Bowman Apple Products Co., Inc. (148,000 in contracts): 35,850 (9,250).
Brown & Root (Halliburton Co.; 302,113,000 in contracts): 20,000 (5,000). Process safety management; personal protective equipment. 1 worker died, 2 workers were hospitalized, due to gas exposure while doing maintenance work on a pipeline for this special trades contractor.
Browning-Ferris Industries, Inc. (5,623,000 in contracts): 18,700 (8,260).
Burns & Roe Services Corp. (Burns & Roe Enterprises, Inc.; 103,403,000 in contracts): 25,500 (12,750).
Burron Medical, Inc. (B. Braun Melsungen A.G.; 228,000 in contracts): 52,850 (28,650); 30,000 (0).
Campbell Soup Co. (12,053,000 in contracts): 52,000 (26,000).
Cargill Inc. (Tyson Foods, Inc.; 139,924,000 in contracts): 15,300 (9,180). Toxic and hazardous substance; hazard communication standard. 1 worker was injured when he mixed together unmarked chemicals that subsequently exploded. The worker was cleaning at this poultry processing facility.
Center Core, Inc. (CenterCore Group; 7,575,000 in contracts): 16,200 (9,720).
Centric Jones Construction (Centric Jones Co.; 15,041,000 in contracts): 16,650 (6,250).
Century Concrete Services, Inc.
(1,315,000 in contracts): 21,000 (8,875).
Certified Coatings (Certified Coatings of Cal; 260,000 in contracts): 29,125 (13,250).
Chevron USA (Chevron Corp.; 250,851,000 in contracts): 18,850 (6,100).
Children's Hospital Medical Center (170,000 in contracts): 21,250 (7,000).
Chomerics, Inc. (Parker Hannifin Corp.; 1,117,000 in contracts): 18,000 (9,125).
Chrysler Motors Corp., K (Chrysler Corp.; 314,074,000 in contracts): 106,600 (27,553). Machinery and machine guarding; lockout/tagout.
Cincinnati Milacron Resin Abrasion (Cincinnati Milacron, Inc.; 2,968,000 in contracts): 18,000 (9,310).
Clean Harbors of Kingston, Inc. (Clean Harbors Environmental Services, Inc.; 456,000 in contracts): 156,000 (60,000). 1 worker died because his co-workers were unable to retrieve him from a tank containing a chemical sludge when his air supply ran low. He was cleaning the tank for this facility that provides refuse collection and disposal services.
Cleveland Construction, Inc. (31,000 in contracts): 39,800 (10,000).
Colgate-Palmolive Co. (3,734,000 in contracts): 15,300 (9,690).
ConAgra, Inc. (also owns Longmont Foods; 149,606,000 in contracts): 15,000 (12,500); 35,550 (22,250). Walking-working surfaces.
Consolidated Edison Co. of New York (21,053,000 in contracts): 27,000 (20,250). Occupational health and environmental controls.
Consolidated Grain and Barge Co. (C.G.B. Enterprises, Inc.; 4,865,000 in contracts): 22,500 (10,625).
Cornell University Press (Cornell University; 7,764,000 in contracts): 19,100 (11,000). Walking-working surfaces; means of egress; medical and first aid; materials handling and storage; hazard communication standard.
Coyne Textile Services (Coyne International Enterprises Corp.; 257,000 in contracts): 15,000 (4,000).
Crane & Co., Inc. (69,574,000 in contracts): 25,925 (13,175). Machinery and machine guarding; special industries. 48,000 (2,500).
Crowley Maritime Corp. (27,991,000 in contracts): 63,500 (40,500). Occupational health and environmental control. 40,500 (24,125).
Crown American (Crown Holding Co.; 994,000 in contracts): 15,300 (10,000).
Crown Central Petroleum Corp. (also owns La Gloria Oil & Gas Co.; 29,661,000 in contracts): 30,000 (12,500).
D.J. Manufacturing Corp. (5,373,000 in contracts): 43,750 (22,750). Machinery and machine guarding; electrical. 41,400 (19,800). Standards of state-operated program; machinery and machine guarding. 21,250 (11,390).
Delco Electronics (See General Motors Corp.): 35,125 (6,000).
Dell Computer Corp. (4,163,000 in contracts): 20,700 (10,350); 16,200 (8,100).
Detroit Diesel Corp. (Penske Corp.; 23,211,000 in contracts): 19,500 (9,750).
Diamond Shamrock Refining & Marketing (Diamond Shamrock, Inc.; 48,880,000 in contracts): 31,000 (22,500).
Dick Enterprises, Inc. (56,448,000 in contracts): 35,500 (2,300).
Domermuth Petroleum Equipment & Maintenance (J. Myles Group, Inc.; 241,000 in contracts): 18,400 (8,940).
Donohoe Construction Companies (Donohoe Companies, Inc.; 11,662,000 in contracts): 21,375 (5,250).
Dreadnought Marine, Inc. (15,272,000 in contracts): 15,125 (6,325).
Duncan-Smith, Inc.
(70,000 in contracts): 19,350 (12,578). General safety and health provisions; personal protective and lifesaving equipment; materials handling, storage, use, and disposal; cranes, derricks, hoists, elevators, and conveyors; motor vehicles, mechanized equipment, and marine operations. 1 worker drowned when he jumped off a barge, without a life preserver, because he was frightened when it began to rock back and forth. The rocking action started when a sling broke as workers were pulling pilings out of the channel for this demolition or wrecking company.
Dunlop Tire Corp. (Sumitomo Rubber Industries, Ltd.; 26,000 in contracts): 25,000 (7,000). Machinery and machine guarding. 1 worker, at this facility which produces tires, died when he placed fabric on a rotating cylinder, got caught in the machine, and asphyxiated after being wound up inside the fabric.
Duro Bag Manufacturing Co. (118,000 in contracts): 38,000 (20,000).
Dynalectric (Emcor Group, Inc.; 3,968,000 in contracts): 22,500 (0).
Dyncorp-Fort Belvoir Division (Dyncorp; 672,931,000 in contracts): 20,250 (10,125).
E.I. DuPont de Nemours & Co. (38,484,000 in contracts): 44,700 (8,400).
E.T. Lafore, Inc. (7,978,000 in contracts): 47,100 (30,000).
Eastern Trans-Waste of Maryland (2,718,000 in contracts): 12,000 (3,000); 16,575 (8,050).
Eltech Systems Corp., Electrode (Eltech Systems Corp.; 223,000 in contracts): 25,650 (13,230).
Emco, Inc. (Mid-South Industries, Inc.; 5,666,000 in contracts): 33,375 (30,000).
Empire Kosher Poultry, Inc. (75,000 in contracts): 25,000 (12,500).
Ethicon, Inc. (Johnson & Johnson; 9,658,000 in contracts): 54,150 (29,775). Lockout/tagout; medical and first aid; machinery and machine guarding; bloodborne pathogens; hazard communication standard. 23,000 (13,500). 1 worker died from electric shock while checking fuses for this facility which manufactures storage batteries.
Exide Electronics Corp. (Exide Electronics Group, Inc.; 68,866,000 in contracts): 56,000 (56,000). 1 worker was hospitalized, at this company which produces transformers, due to electric shock while cleaning consoles with liquid cleaners. The consoles were not disconnected from the power supply.
Exxon Oil Co. (Exxon Corp.; 532,123,000 in contracts): 15,300 (7,550).
F & B Manufacturing Co. (127,000 in contracts): 52,000 (14,200).
Federal Paper Board Co. (176,000 in contracts): 34,500 (17,250); 147,000 (7,500). Special industries; standard of state-operated program. 1 worker died from electric shock, at this pulp and paper mill, when a boiler precipitator within the power plant was not deenergized before he entered a confined space to work on it.
Fletcher Pacific Construction (Fletcher Challenge, Ltd.; 29,300,000 in contracts): 74,600 (0).
F.M.C. Corp., Wellhead Equipment Division (F.M.C. Corp.; 494,377,000 in contracts): 24,225 (11,750).
Ford Motor Co. (44,130,000 in contracts).
The Foxboro Co. (Siebe PLC; 21,094,000 in contracts): 60,000 (60,000). Occupational health and environmental control; hazard communication standard. 1 worker died when splashed by hydrogen fluoride while he was manually dispensing the chemical from the bottom of a drum. This company produces measuring and controlling devices.
Frito-Lay, Inc.
(Pepsico, Inc.; 18,720,000 in contracts): 20,400 (10,200). 1 worker was burned while using a high-pressure steam hot water hose while cleaning the potato peeler equipment at this food preparation facility. 21,500 (11,000). Walking-working surfaces. 1 worker died, at this facility which produces snack foods, when his neck was crushed while making adjustments to the waste conveyor system. He was working alone at this wastewater treatment plant. 19,200 (0).
Fru-Con (Bilfinger & Berger; 18,001,000 in contracts): 90,500 (42,000).
Fruehauf Trailer Corp. (3,336,000 in contracts): 58,850 (18,950).
Fruit of the Loom, Inc. (414,000 in contracts): 15,375 (6,150).
Gary's Grading and Pipeline Co. (160,000 in contracts): 28,350 (13,000). 1 worker was injured when a wall of an unshored trench collapsed. He was trying to install a saddle tap for this pipeline and grading company. 15,000 (5,000).
General Electric Co. (8,710,060,000 in contracts): 42,500 (13,125).
General Motors Corp. (also owns Delco Electronics; 2,386,810,000 in contracts): 27,700 (15,000); 30,000 (7,500); 133,500 (66,400); 15,000 (6,250).
Georgia-Pacific Corp. (2,796,000 in contracts): 19,000 (12,664); 45,000 (22,331); 15,300 (10,125). Fire protection; special industries; electrical. 16,125 (8,065); 32,000 (19,500).
Gold Kist, Inc. By Products (Gold Kist, Inc.; 27,202,000 in contracts): 16,100 (10,600).
Goodyear Tire/Rubber Co. (Shell Co.; 48,462,000 in contracts): 22,950 (6,026).
Goulds Pumps, Inc. (154,000 in contracts): 45,000 (27,000).
Granite Construction Co. (33,293,000 in contracts): 26,550 (6,000). Electrical; general safety and health provisions. 1 worker died when a reinforced concrete panel fell on him while he was unloading a semitruck transporting these panels to a highway construction site.
Great Lakes Dredge & Dock Co. (Blackstone Dredging Partners; 63,949,000 in contracts): 18,900 (9,450).
Great Plains Coca Cola Bottling Co. (945,000 in contracts): 17,250 (2,700).
Grove North American, Division of Kidde Industries, Inc. (Hanson PLC; 25,444,000 in contracts): 16,575 (11,120).
The Gunver Manufacturing Co. (5,077,000 in contracts): 15,050 (15,050).
Handy & Harman (1,415,000 in contracts): 18,750 (9,375). Lockout/tagout; machinery and machine guarding.
Hardaway Co., Inc. (Because contract was terminated or modified, net obligations for fiscal year 1994 are 0 or less.): 15,000 (4,000).
Harsco Corp., IKG Division (13,338,000 in contracts): 18,000 (11,175).
Harvard Industries Hayes Albio (F.E.L. Corp.; 18,958,000 in contracts): 30,000 (30,000).
Hawaii Electric Light Co. (Hawaii Electric Industries; 18,599,000 in contracts): 22,500 (9,000). 1 worker died from electric shock when disassembling a test transformer. The safety indicator was inoperable so he did not realize that the transformer was still energized.
Hawaii Stevedores, Inc. (85,000 in contracts): 25,000 (15,000). 1 worker was killed when a forklift ran into him as he was directing another driver into position to load and unload goods on a pier for this marine cargo handling company.
Heat Transfer Systems, Inc. (52,000 in contracts): 16,250 (6,000).
Henkels and McCoy, Inc.
(2,752,000 in contracts): 20,000 (9,000).
Homer Laughlin China Co. (173,000 in contracts): 17,500 (9,000).
Houck Services, Inc. (6,000 in contracts): 17,850 (7,500); 37,500 (18,700).
Hussman Corp. (Whitman Corp.; 3,309,000 in contracts): 15,000 (5,600).
I.A. Construction Corp. (Colas; 25,795,000 in contracts): 19,350 (7,550); 23,000 (7,500). Means of egress; hazardous materials.
I.C.I. America (Imperial Americas, which also owns Zeneca Resins; 16,136,000 in contracts): 19,500 (6,925).
Idaho Pacific Corp. (32,000 in contracts): 23,100 (11,550). Personal protective equipment; hazard communication standard.
Indiana Michigan Power (American Electric Power Co.; 206,000 in contracts): 27,500 (10,000).
Inland Steel Co. (Inland Steel Industries; 599,000 in contracts): 59,000 (30,725). Standard of state-operated program; hazardous materials; means of egress. 2 workers were killed when trapped in a fire which erupted at this coke-making facility. Their supervisor killed himself several days later.
International Paper Co. (23,847,000 in contracts): 20,500 (10,000); 37,500 (18,000); 319,620 (319,620). Willful; repeat; serious. 782,500 (372,000); 482,000 (240,000); 15,000 (5,000). 1 worker died when he entered a drum to replace a faulty piece of equipment at this wood products facility. The drum, which was not deenergized or locked out, was inadvertently activated and the worker fell 14 feet into the conveyor system.
J & J Maintenance, Inc. (19,666,000 in contracts): 15,375 (9,225). Walking-working surfaces.
J.H. Baxter Facility (J.H. Baxter & Co., a Ltd. California Partnership; 327,000 in contracts): 16,630 (2,510).
Joe E. Woods, Inc. (844,000 in contracts): 40,225 (10,000).
John Crane, Inc. (T.I. Group PLC; 18,037,000 in contracts): 33,200 (16,100).
Judds Brothers Construction Co. (292,000 in contracts): 84,000 (18,000).
Keebler Co. (United Biscuits Holdings PLC; 4,167,000 in contracts): 16,100 (4,640). Standards of state-operated program; lockout/tagout. 2 workers fractured a forearm and a finger, respectively, while cleaning conveyors at this facility that makes cookies and crackers.
Klosterman Baking Co. (96,000 in contracts): 35,000 (9,000).
Kohler Co., Mill Division (936,000 in contracts): 1,404,300 (35,730).
Konica Imaging U.S.A., Inc. (Konica Corp.; 7,312,000 in contracts): 53,100 (16,792).
Kostmayer Construction Co. (547,000 in contracts): 27,000 (13,500). Construction; occupational health and environmental controls.
Kraft Food Service, Inc. (Alliant Food Services; 80,005,000 in contracts): 23,350 (12,200).
Krueger International (60,694,000 in contracts): 17,500 (6,600).
La Gloria Oil & Gas Co. (See Crown Central Petroleum Corp.): 53,250 (20,000). Walking-working surfaces; hazardous materials; personal protective equipment; medical and first aid; materials handling and storage; machinery and machine guarding; electrical. 15,000 (3,500).
Lady Baltimore Foods, Inc. (38,000 in contracts): 33,300 (11,600).
Lakeside Care Center, Unicare (Crownex, Inc.; 2,183,000 in contracts): 25,500 (2,025).
Lambda Electronics, Inc. (Unitech, PLC; 1,075,000 in contracts): 26,200 (8,249).
Lauhoff Grain Co.
(Bunge Corp.; 61,486,000 in contracts): 39,500 (11,750). 1 worker died and another was hospitalized when cleaning a grain bin for this grain mill products company. Both workers were drawn down into the grain bin, and the first suffocated.
Lockheed (Lockheed-Martin Corp.; 7,043,395,000 in contracts): (1,495,560). Violations were changed to unclassified by an administrative law judge's decision. 30,000 (22,500); 21,000 (15,750).
Lufkin Industries, Inc. (5,724,000 in contracts): 15,750 (7,475).
M & K Electrical Co., Inc. (3,000 in contracts): 21,000 (11,000). Electrical; general safety and health provisions; power transmission and distribution. 1 worker died from electric shock while removing a compactor from between two energized conductors and inadvertently coming into contact with an energized line.
M.R. Dillard Construction Co. (1,673,000 in contracts): 64,800 (12,000).
Marine Hydraulics International (Marine Hydraulics, Inc.; 16,018,000 in contracts): 20,000 (10,140).
Marley Cooling Tower Co., Inc. (United Dominion Industries, Ltd.; 1,907,000 in contracts): 21,000 (5,440).
Marriott Corp. (Host Marriott Corp.; 2,128,000 in contracts): 24,000 (12,000).
Mason Technologies, Inc. (The Mason Co.-Del; 282,424,000 in contracts): 19,125 (9,562.50).
Medical Laboratory Automation (36,000 in contracts): 16,950 (11,865).
Medline Industries, Inc. (1,190,000 in contracts): 27,675 (15,000).
Meinecke-Johnson Co. (6,975,000 in contracts): 21,500 (10,750).
Metric Constructors (Philipp Holzman AG; 36,452,000 in contracts): 20,800 (9,200).
Misener Marine Construction, Inc. (Interbain; 9,460,000 in contracts): 25,550 (7,200).
Montgomery Elevator (Kone Holding, Inc.; 5,930,000 in contracts): 55,000 (14,500); 18,000 (10,000).
Moon Engineering Co., Inc. (7,281,000 in contracts): 20,300 (10,150).
Morrison-Knudsen Corp., Inc. (221,024,000 in contracts): 70,000 (175,000).
Mosler, Inc. (Kelso Investment Assoc. IV LP; 1,465,000 in contracts): 37,000 (21,000); 33,600 (20,285).
National Beef Packing Co. LP (15,177,000 in contracts): 908,600 (483,500).
National Fruit Produce Co., Inc. (535,000 in contracts): 104,500 (49,125).
National Health Laboratories (National Health Labs Holdings; 794,000 in contracts): 123,000 (75,000).
Neosho Construction (Neosho, Inc.; 6,061,000 in contracts): 80,100 (9,500). 1 worker was hospitalized for head injuries when he fell 10 feet onto a concrete floor while working on reinforcing a railroad undercrossing.
New York Telephone Co. (NYNEX Corp.; 5,822,000 in contracts): 16,995 (3,000).
Northern Indiana Public Service (NIPSCO Industries, Inc.; 770,000 in contracts): 22,000 (14,250).
Northwest Enviro Service, Inc. (6,803,000 in contracts): 22,275 (10,000).
Novinger Group, Inc. (58,000 in contracts): 17,800 (9,000). 1 worker died of electric shock when, for this plastering and drywall company, he mistakenly cut into electric wiring. 33,750 (11,250).
Packaging Corp. of America (Tenneco Packaging, Inc.; 504,686,000 in contracts): 16,500 (5,000); 15,000 (4,700).
P.C.L.- Harbert, Joint Venture (P.C.L. Enterprises; 216,000 in contracts): 32,500 (12,310).
Peace Industries, Ltd.
(326,000 in contracts) 15,750 (11,500) Pennsylvania Power & Light Co. (Pennsylvania Power & Light Resources, Inc.; 4,863,000 in contracts) 21,000 (21,000) General duty clause; power transmission and distribution 1 worker died of electric shock when installing underground electrical conductors in a new development. He attempted to connect a line he mistakenly thought was deenergized. Penrose Hospital (Sisters of Charity Health Care; 232,000 in contracts) 51,750 (38,813) 94,000 (31,500) Piquniq Management Corp. (36,597,000 in contracts) 78,750 (33,750) Pizzagalli Construction, Inc. (Because contract was terminated or modified, net obligations for fiscal year 1994 are 0 or less.) 21,675 (9,500) PMX Industries, Inc. (13,268,000 in contracts) 40,000 (10,700) 6 workers were hospitalized from smoke inhalation as a result of fighting a fire. Hydraulic oil caught fire at this metal smelting and refining plant. Professional Ambulance Service (American Medical Response; 712,000 in contracts) 15,750 (15,750) P.S.I. Energy-Gibson Generating (Cinergy Corp.; 4,650,000 in contracts) 15,000 (5,620) Standard of state-operated program; personal protective equipment 2 workers were hospitalized due to burns. 20 workers were injured, although not hospitalized, as a result of smoke inhalation and cuts and bruises from falling debris. These workers were trying to fight the fire from a coal hopper explosion at this electrical services facility. (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Public Service Co. of Colorado (17,015,000 in contracts) 37,850 (28,000) Pulse Electronics, Inc. (149,000 in contracts) 16,575 (6,630) Purina Mills, Inc. (P.M. Holdings Corp.; 99,000 in contracts) 18,000 (12,000) Walking-working surfaces; 35,000 (5,000) 1 worker died when he got caught in a bag-stacker machine while trying to free a jammed pallet without turning off the power. He inadvertently hit a switch, causing the machine to recycle at this animal feed manufacturing facility. 22,950 (13,162.50) Walking-working surfaces; Radiation Systems, Inc.-Univer (Comsat Corp. RSI; 40,787,000 in contracts) 23,000 (11,500) Cranes, derricks, hoists, elevators, and conveyors 1 worker died when he fell 120 feet from a platform that hit an object and tipped to the side as it was being lowered. This worker and 3 others on the platform were not tied off. This company is a special trade contractor in the construction industry. Ralston Purina Co. (7,388,000 in contracts) 49,050 (8,700) Redondo Construction Corp. (8,799,000 in contracts) 18,275 (7,310) Reed & Reed, Inc. (1,359,000 in contracts) 28,000 (4,000) Rehrig International, Inc. (28,000 in contracts) 22,550 (9,020) Rensselaer Polytechnic Institute (5,656,000 in contracts) 62,500 (8,000) Reynolds & Reynolds Co. (1,402,000 in contracts) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 19,800 (12,000) Rhone Poulenc Basic Chemical (Rhone-Poulenc, Inc.; 10,693,000 in contracts) 195,165 (57,485) 365,875 (64,250) 1 worker died and another was hospitalized due to chemical burns when they mistakenly extracted a valve, releasing 80,000 gallons of acid sludge from a storage tank, at this industrial chemicals facility. Rich Industries, Inc.
(90,000 in contracts) 31,500 (12,800) 1 worker died from electric shock when he reached into a press to do maintenance work and came into contact with a live electrical part. This facility manufactures protective clothing for the nuclear industry. Richard F. Kline, Inc. (24,000 in contracts) 51,775 (4,100) R.M.I. Co. (R.M.I. Titanium Co.; 7,577,000 in contracts) Roadway Express, Inc. (1,900,000 in contracts) 17,425 (7,600) 32,850 (9,900) The Roof Doctor, Inc. (Because contract was terminated or modified, net obligations for fiscal year 1994 are 0 or less.) 23,290 (8,290) Rosenburg Forest Products (446,000 in contracts) 75,000 (10,000) Roto-Rooter Services Co. (Roto-Rooter, Inc.; 1,000 in contracts) 30,250 (4,525) Salvation Army (5,714,000 in contracts) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 28,800 (2,880) Means of egress; fire protection; machinery and machine guarding; electrical 22,500 (1,000) Schuck and Sons Construction Co., Inc. (49,000 in contracts) 56,125 (1,075) 1 worker was hospitalized when he fell while working on a frame house for this company that builds residential buildings. The worker was leaning out from a 9-foot height while attempting to cut a roof joist when he slipped and fell to the cement porch below. Sciaba Construction Corp. (267,000 in contracts) 18,200 (7,280) Scott Paper Co. (Kimberly-Clark; 2,875,000 in contracts) 36,750 (27,575) Sears (Sears Roebuck & Co.; 10,497,000 in contracts) 67,000 (58,600) 16,500 (4,900) 23,500 (7,000) 36,900 (15,500) Occupational health and environmental control Sermetech International, Inc. (Teleflex, Inc.; 11,529,000 in contracts) 18,750 (8,437.50) Shasta Industries, Inc. (79,000 in contracts) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 71,000 (29,500) Electrical; occupational health and environmental controls 1 worker died from burns when trying to use acetone to remove standing water in a swimming pool for which he was preparing a fiberglass interior surface. The acetone vapors in the pool were ignited when he switched on a vacuum. The company is a special trade contractor. Shelby Williams Industries, Inc. (401,000 in contracts) 60,000 (9,200) 44,675 (10,000) Process safety management; personal protective equipment 1 worker died and 2 were hospitalized from exposure to gas when one of them opened the flange of a pipeline while they were doing maintenance work at this petroleum refining facility. 155,000 (155,000) Shirley Contracting Corp. (3,989,000 in contracts) 21,000 (8,000) Siemens Energy & Automation (Siemens; 47,791,000 in contracts) 60,000 (21,500) Signature Flight Support Corp. (14,535,000 in contracts) 18,500 (10,200) 1 worker died when inflating a tire on a baggage trailer that transports luggage to and from the aircraft. The tube exploded and the rim struck the employee in the face, causing massive head injuries. The company provides airport terminal services. Smith & Nephew Dyonics (Smith & Nephew PLC; 589,000 in contracts) 15,375 (7,688) Smith & Wesson Co. (Tompkins Industries; 3,817,000 in contracts) 22,750 (11,375) Machinery and machine guarding; electrical The Smithfield Packing Co. 
(Smithfield Foods, Inc.; 2,975,000 in contracts) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 22,500 (7,800) Snyder General Corp. (McQuay International; 557,000 in contracts) 19,975 (11,225) Spearin Preston & Burrows, Inc. (51,000 in contracts) 17,500 (2,500) S.S.I. Food Services, Inc. (Simplot J.R. Co.; 26,736,000 in contracts) 107,000 (43,000) Stambaugh’s Air Service, Inc. (12,883,000 in contracts) 18,000 (12,900) 1 worker died and another was hospitalized when trying to remove an engine from an aircraft. The 4,000-pound engine dropped on the chest of the first worker when the front chain of the mechanism used to remove the engine broke. The other worker was struck in the head by the mechanism itself. Stevedoring (Cooper/T Smith Stevedoring, Inc.; 10,299,000 in contracts) 18,000 (9,000) 16,900 (8,450) Stone Container Corp. (3,214,000 in contracts) 65,500 (60,000) 75,000 (41,500) 45,000 (30,000) Walking-working surfaces; 28,375 (9,350) 40,000 (3,000) Stonhard Manufacturing Co., Inc. (R.P.M., Inc.; 473,000 in contracts) 17,625 (9,300) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Sun Chemical Corp. (Dainippon Ink & Chemicals, Inc.; 552,000 in contracts) 22,500 (7,000) Walking-working surfaces 15,500 (7,000) Walking-working surfaces Supreme Corp. (Supreme Industries, Inc.; 58,000 in contracts) 39,700 (13,850) Swiftships Freeport, Inc. (Swiftships, Inc.; 2,757,000 in contracts) 18,600 (1,500) 1 worker died instantly when he was struck in the head by a 3-ton exhaust stack that was being positioned by a crane for sandblasting and painting, after being removed from a vessel. This facility is engaged in shipbuilding and repair. Texaco Refining (Texaco, Inc.; 21,559,000 in contracts) 83,500 (83,500) 10 workers were hospitalized for smoke inhalation and being struck by falling debris when a piping failure led to a petroleum explosion and fire at this petroleum refining facility. Tower Construction Co., Inc. (5,022,000 in contracts) 24,000 (5,250) Trataros Construction Co. (9,539,000 in contracts) 17,625 (11,500) Trident Seafoods Corp. (880,000 in contracts) 30,150 (13,050) 16,500 (7,250) Trinity Industries, Inc. (109,805,000 in contracts) 15,000 (4,000) 16,500 (9,400) Union Camp Corp. (206,000 in contracts) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 86,250 (35,837.50) Hazardous materials; machinery and machine guarding; electrical 20,280 (14,490) Union Pacific Railroad (Union Pacific Corp.; because contract was terminated or modified, net obligations for fiscal year 1994 are 0 or less.) 15,750 (4,650) United Airlines (U.A.L. Corp.; 2,366,000 in contracts) 27,500 (5,900) Hazard communication standard; fire protection 39,950 (10,125) 95,000 (6,500) Occupational health and environmental controls United Parcel Service (United Parcel Service Amer., Inc.; 5,699,000 in contracts) 22,500 (19,000) 60,000 (60,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 30,000 (30,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 142,000 (142,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged.
30,000 (30,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 15,000 (15,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 17,500 (9,975) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 90,000 (90,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 94,025 (92,500) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 30,975 (30,975) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 60,000 (60,000) 90,000 (90,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 165,000 (165,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 165,000 (165,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 15,000 (15,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 15,000 (15,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 60,000 (60,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 30,000 (30,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 18,300 (2,000) Means of egress; personal protective equipment 2 workers were hospitalized from exposure to hazardous solvents that leaked from packages within the confined space of an airplane cargo hold. 60,000 (60,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 60,000 (60,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 30,000 (30,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 141,000 (141,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. 15,000 (15,000) Corporatewide settlement agreement regarding emergency response to hazardous conditions when packages are damaged. United Technologies Automotive (United Technologies Corp.; 2,776,447,000 in contracts) 41,000 (16,000) 34,200 (4,000) Toxic and hazardous substances; hazard communication standard Universal Maritime Service Corp. (Maersk, Inc.; 182,088,000 in contracts) 18,700 (4,500) University of Miami (10,020,000 in contracts) 17,550 (7,200) Valley Design and Construction (266,000 in contracts) 17,150 (8,575) Vickers, Inc. (Trinova Corp.; 17,831,000 in contracts) 28,500 (15,500) (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 24,000 (15,000) Victory Corrugated Container Corp. (82,000 in contracts) 29,575 (16,000) Vineland Kosher Poultry, Inc. 
(349,000 in contracts) Vishay Intertechnology, Inc. (47,000 in contracts) 43,900 (12,700) Volunteers of America of Oklahoma (Volunteers of America, Inc.; 3,416,000 in contracts) 15,000 (5,000) Wabash Valley Manufacturing, Inc. (63,000 in contracts) 21,000 (4,900) Waste Management Disposal (WMX Technologies, Inc.; 241,696,000 in contracts) 63,000 (9,000) Weber Aircraft, Inc. (Zodiac, SA; 13,300,000 in contracts) 28,500 (21,225) Weight Watchers Food Co. (Heinz, Inc.; 439,000 in contracts) 66,000 (42,000) West State, Inc. (W.S., Inc.; 1,310,000 in contracts) 15,000 (2,500) Westinghouse Electric Corp. (4,595,090,000 in contracts) 21,925 (18,300) Whirlpool Corp. (2,351,000 in contracts) 52,500 (26,250) Machinery and machine guarding 1 worker was hospitalized, and his hand and forearm amputated, when he got caught while manually feeding coil through a mechanical power press. The facility manufactures household refrigerators. (continued) Worksite (name of federal contractor if different; total contract dollars awarded) Proposed penalty (actual penalty) Description of fatality or injury 19,000 (5,000) Willamette Industries, Inc. (1,860,000 in contracts) 17,500 (6,000) Standard of state-operated program; walking-working surfaces 1 worker died when an object, which fell from the wall of a large vessel he was cleaning along with several other workers, crushed this worker. The facility manufactures hardwood veneer or plywood. 29,025 (19,350) 15,000 (15,000) Yuasa-Exide, Inc. (1,583,000 in contracts) Zeneca Resins (Imperial Americas; see I.C.I. America) 17,550 (8,775) Means of egress; hazardous material; fire protection 1 worker was hospitalized from inhaling vapors released due to improper storage of chemicals at this facility that manufactures plastics and synthetic resins. Although all workers were evacuated, this worker went to search for a co-worker without using personal protective equipment. (Table notes on next page) *Assessed proposed penalty of $100,000 or more for safety and health violations. =Inspection conducted by a state-operated safety and health program. Table III.1 categorizes the 261 federal contractors assessed significant proposed penalties by the OSHA standard violated. Our definition of a significant penalty is a proposed penalty of $15,000 or more regardless of the size of the actual penalty recorded when the inspection was closed (either because the employer accepted the citation or a contested citation was resolved). The proposed penalty is the penalty issued by OSHA in the original citation and reflects the compliance officer’s judgment of the nature and severity of violations, while the actual penalty may be the product of other factors such as negotiations between OSHA and the company to encourage quicker abatement of workplace hazards. Because some of these 261 federal contractors own more than one worksite inspected, a total of 345 inspections appear in the table. The name of the federal contractor (or parent company) is identified if it is different from the name of the worksites where the violations occurred. The table also includes the location of the worksite inspected, including the corresponding activity number of the inspection as assigned in IMIS. Given that there are many different OSHA standards, we reported those standards in which the greatest number of violations in the 345 inspections fell. Because more violations were of general industry standards, we reported these standards in greater detail. 
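The selection rule described in these notes can be restated compactly. The following Python sketch is purely illustrative: the record fields and example values are invented for this report's context and do not reflect an actual IMIS interface. It applies the $15,000 significant-penalty threshold and the asterisk (*) and state-program (=) annotations defined in the table notes above.

```python
# Hypothetical sketch of the table-selection rule; field names and
# example records are invented, not an actual IMIS data interface.

SIGNIFICANT = 15_000  # inclusion threshold: proposed penalty of $15,000 or more
LARGE = 100_000       # asterisk (*) threshold: proposed penalty of $100,000 or more

inspections = [
    {"worksite": "Example Manufacturing Co.", "activity_no": "100000001",
     "proposed": 21_000, "actual": 8_000, "state_program": False},
    {"worksite": "Sample Construction Corp.", "activity_no": "100000002",
     "proposed": 9_500, "actual": 9_500, "state_program": True},
    {"worksite": "Illustrative Refining Co.", "activity_no": "100000003",
     "proposed": 120_000, "actual": 60_000, "state_program": True},
]

def annotate(rec):
    """Append the report's annotations to an activity number."""
    marks = ""
    if rec["proposed"] >= LARGE:
        marks += "*"  # proposed penalty of $100,000 or more
    if rec["state_program"]:
        marks += "="  # inspection by a state-operated program
    return rec["activity_no"] + marks

# Keep only inspections meeting the significant-penalty definition,
# regardless of the size of the actual penalty.
significant = [r for r in inspections if r["proposed"] >= SIGNIFICANT]

for r in significant:
    print(f'{r["worksite"]} ({annotate(r)}): '
          f'proposed ${r["proposed"]:,}, actual ${r["actual"]:,}')
```

Note that, as in the tables, the filter keys on the proposed penalty alone, so an inspection with a large proposed penalty but a small negotiated actual penalty still appears.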
We have identified those 26 inspections in which a proposed penalty of $100,000 or more was assessed for safety and health violations with an asterisk that appears by the activity number of the inspection. Seventy-one inspections conducted by state-operated safety and health programs are identified with a special symbol (=) by the activity number of the inspection. The column of “All other standards” is often marked in inspections conducted by state-operated programs because the codes used by some states are different from the codes for federal standards. Worksite (name of federal contractor if different) Location (IMIS activity number) New York, NY (106934086) A.A.R. Engine Component Service (A.A.R. Corp.) Frankfort, NY (018154542) A.B.B. Combustion Engineering Nuclear (A.B.B. A.S.E.A. Brown Boveri, Ltd.) Newington, NH (108781816) Acme Steel Co. (Acme Metals, Inc.) Chicago, IL (103451274) Alamo Transformer Supply Co. Houston, TX (107489593) Albany International Corp. East Greenbush, NY (109053272) Alcan Toyo America (Toyo Aluminum KK) Lockport, IL (108719063) Alder Construction Co. Boise, ID (107232167) All American Poly Corp. Dunellen, NJ (114039639) All-Steel, Inc. (B.T.R. PLC) Montgomery, IL (102997434) West Hazleton, PA (018226225) Allied Tube and Conduit (Tyco International, Ltd.) Philadelphia, PA (017999095) Philadelphia, PA (018253054)* Harvey, IL (103453387) Rockdale, TX (123431298) Massena, NY (106991326) Amcor, Inc. (C.R.H. PLC) Nampa, ID (110517984) Amoco Gas Co. (Amoco Corp.) Texas City, TX (107491433) The Arbors at Fairmont (Arbor Health Care Co.) Fairmont, WV (101176626) Arco Alaska, Inc. (Atlantic Richfield Co.) Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Prudhoe Bay, AK (105867964)= Asplundh Tree Expert Co. Columbia, MD (119539898)= AT&T Communications (AT&T) Danforth, ME (109797910) Avondale Industries, Inc. Westwego, LA (110344983) B.R. Group, Inc. Baldt, Inc. Chester, PA (102842192) Ball Corp. Columbus, OH (103343000) Basler Electric Co. Corning, AR (107705931) Bath Iron Works Corp. (Fulcrum II Limited Partnership) Bath, ME (101450336)* Batson-Cook Co. Tampa, FL (109609776) Baxter Health Care Corp. (Baxter International, Inc.) Carolina, PR (119461473)= Bell Helicopter Textron, Inc. (Textron, Inc.) Hurst, TX (103375663) Bender Shipbuilding & Repair Co. Mobile, AL (107011207) Berning Construction, Inc. Detroit, OR (123776262)= Bethlehem Steel Corp. Sparrows Point, MD (104383815)= Sparrows Point, MD (119517068)= Biocraft Laboratories, Inc. Paterson, NJ (109043141) Fairfield, NJ (101484780) Bizzack, Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Logan, WV (116242512) Blaze Construction Co. Many Farms, AZ (002331486) Pinon, AZ (002331478) Blue Bell Creameries USA, Inc. Brenham, TX (123419905) Boeing (The Boeing Co.) Commercial Aircraft Co. Everett, WA (115506081)= Defense and Space Group Ridley Park, PA (018253047) Boise Cascade Corp. Horseshoe Bend, ID (110502895) Rumford, ME (102753969)* Rumford, ME (103392247)* Rumford, ME (109793901) Boston University (Trustees of Boston University) Boston, MA (109124131) Bowman Apple Products Co., Inc. Mt. Jackson, VA (105754790)= Brown & Root (Halliburton Co.) Deer Park, TX (123652505) Browning-Ferris Industries, Inc. Corpus Christi, TX (103579934) Burns & Roe Services Corp. (Burns & Roe Enterprises, Inc.) Greenport, NY (108664475) Burron Medical, Inc.
(B. Braun Melsungen A.G.) Allentown, PA (123264145) C.H. Heist Corp. Oregon, OH (110294584) Campbell Soup Co. Tecumseh, NE (109323105) Cargill, Inc. (Tyson Foods, Inc.) Buena Vista, GA (106514169) Center Core, Inc. (CenterCore Group) Plainfield, NJ (113942155) Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Centric Jones Construction (Centric Jones Co.) Aurora, CO (100748813) Century Concrete Services, Inc. Richmond, VA (123658890)= Certified Coatings (Certified Coating of Cal) Ogden, UT (124620931)= Chevron USA (Chevron Corp.) Port Arthur, TX (123653255) Children’s Hospital Medical Center Cincinnati, OH (102592094) Chomerics, Inc. (Parker Hannifin Corp.) Hudson, NH (108781717) Chrysler Motors Corp., K (Chrysler Corp.) Kenosha, WI (102347218)* Cincinnati Milacron Resin Abrasion (Cincinnati Milacron, Inc.) Carlisle, PA (109025502) Clean Harbors of Kingston, Inc. (Clean Harbors Environmental Services, Inc.) Providence, RI (017945213)* Cincinnati, OH (103127585) Colgate-Palmolive Co. Kansas City, KS (113820021) ConAgra, Inc. Broiler Co. Enterprise, AL (109246249) Fresh Meats Co. Omaha, NE (109318873) Consolidated Edison Co. of New York New York, NY (107197816) Consolidated Grain and Barge Co. (C.G.B. Enterprises, Inc.) Mount Vernon, IN (107139784) Cornell University Press (Cornell University) Ithaca, NY (113937304) Coyne Textile Services (Coyne International Enterprises Corp.) New Bedford, MA (109124958) Crane & Co., Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Pittsfield, MA (017830456) Croman Corp. Lumber Boise, ID (018168146) Crowley Maritime Corp. American Transport, Inc. San Juan, PR (106716145) Maritime Corp. Seattle, WA (109421685) Crown American (Crown Holding Co.) Scranton, PA (017623174) Crown Central Petroleum Corp. Pasadena, TX (123653081) D.J. Manufacturing Corp. Dana Corp. Spicer Axle Div. Fort Wayne, IN (115017410)= Chasis Prod. Oklahoma City, OK (108736869) Delco Electronics (General Motors Corp.) Oak Creek, WI (103472049) Dell Computer Corp. Austin, TX (123549917) Austin, TX (123579559) Detroit Diesel Corp. (Penske Corp.) Detroit, MI (114811748)= Diamond Shamrock Refining & Marketing (Diamond Shamrock, Inc.) Colorado Springs, CO (109549055) Dick Enterprises, Inc. Shamokin, PA (018227009) Domermuth Petroleum Equipment & Maintenance (J. Myles Group, Inc.) East Syracuse, NY (100162056) Donohoe Construction Companies (Donohoe Companies, Inc.) Rockville, MD (119535847)= Dreadnought Marine, Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Norfolk, VA (123673253)= Duncan-Smith, Inc. Charleston, SC (017419631) Dunlop Tire Corp. (Sumitomo Rubber Industries, Ltd.) Huntsville, AL (108955618) Duro Bag Manufacturing Co. Walton, KY (124595901)= Dynalectric (Emcor Group, Inc.) Perryville, MD (102480233) Dyncorp-Fort Belvoir Division (Dyncorp) Fort Belvoir, VA (017968827) E.I. DuPont de Nemours & Co. Niagara Falls, NY (017816026) E.T. Lafore, Inc. Denver, CO (100744580) Washington, DC (117940098) Eaton Corp. Marion, OH (106127541) Eltech Systems Corp, Electrode (Eltech Systems Corp.) Chardon, OH (103544557) Emco, Inc. (Mid-South Industries, Inc.) Gadsden, AL (109192997) Empire Kosher Poultry, Inc. Mifflintown, PA (102699568) Ethicon, Inc. (Johnson & Johnson) San Angelo, TX (123542706) Exide Corp. 
Salina, KS (103163317) Exide Electronics Corp. (Exide Electronics Group, Inc.) Raleigh, NC (111091807)= Exxon Oil Co. (Exxon Corp.) Baytown, TX (109459339) F & B Manufacturing Co. Gurnee, IL (102987740) Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Federal Paper Board Co., Inc. Riegelwood, NC (018518670)= Riegelwood, NC (018518688)*= Fletcher Pacific Construction (Fletcher Challenge, Ltd.) Honolulu, HI (120659362)= F.M.C. Corp., Wellhead Equipment D (F.M.C. Corp.) Houston, TX (123553224) Ford Motor Co. Hazelwood, MO (106547508)* Lorain, OH (106123748) The Foxboro Co. (Siebe PLC) Foxboro, MA (107541567) Frito-Lay, Inc. (PepsiCo, Inc.) Dayville, CT (109826248) Allen Park, MI (110801305)= Granite City, IL (103278982) Fru-Con (Bilfinger & Berger) Grant Town, WV (100595354) Fruehauf Trailer Corp. St. Louis, MO (116102088) Fruit of the Loom, Inc. Lexington, SC (120477351)= Gary’s Grading and Pipeline Co. Lawrenceville, GA (106514367) Gayston Corp. Springboro, OH (103385290) General Electric Co. Springfield, MO (110466034) General Motors Corp. BOC Lordstown Lordstown, OH (103217881) BOC Lordstown Lordstown, OH (108836552) Trucks Moraine, OH (103376422)* CPC Group Oklahoma City, OK (108743253) Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Georgia-Pacific Corp. Brunswick, GA (109006700) Brunswick, GA (109006981) Palatka, FL (110133816) Mount Wolf, PA (109029520) Cedar Springs, GA (106213911) X Gold Kist, Inc. By Products (Gold Kist, Inc.) Ball Ground, GA (106514383) Goodyear Tire/Rubber Co. (Shell Co.) Apple Grove, WV (100781483) Goulds Pumps, Inc. Slurry Pump Ashland, PA (106464829) Granite Construction Co. Rockwall, TX (103556791) Great Lakes Dredge & Dock Co. (Blackstone Dredging Partners) Baltimore, MD (102480217) Great Plains Coca Cola Bottling Co. Oklahoma City, OK (108740200) Grove North American, Division of Kidde Industries, Inc. (Hanson PLC) Shady Grove, PA (123177453) The Gunver Manufacturing Co. Manchester, CT (109829119) Attleboro, MA (109130294) Hardaway Co., Inc. St. Petersburg, FL (109607689) Harsco Corp., IKG Division Carlisle, OH (103385464) Harvard Industries Hayes Albio (F.E.L. Corp.) Bryan, OH (122085277) Hawaii Electric Light Co. (Hawaii Electric Industries) Hilo, HI (103885844)= Hawaii Stevedores, Inc. Honolulu, HI (110635059) Heat Transfer Systems, Inc.
California Partnership) Long Beach, CA (112086327)= Joe E. Woods, Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) San Carlos, AZ (102317195) John Crane, Inc. (T.I. Group PLC) Morton Grove, IL (102991825) Judds Brothers Construction Co. Ashland, NE (109317917) Keebler Co. (United Biscuits Holdings PLC) Grand Rapids, MI (114801988)= Klosterman Baking Co. Cincinnati, OH (103032751) Kohler Co., Mill Division Kohler, WI (103077707)* Konica Imaging U.S.A., Inc. (Konica Corp.) Glen Cove, NY (113921183) Kostmayer Construction Co. New Orleans, LA (107634032) Kraft Food Service, Inc. (Alliant Food Services) Englewood, CO (109547000) Green Bay, WI (103520318) La Gloria Oil & Gas Co. (Crown Central Petroleum Corp.) Tyler, TX (107555567) Tyler, TX (103564449) Lady Baltimore Foods, Inc. Kansas City, KS (113821532) Lakeside Care Center, Unicare (Crownex, Inc.) Lubbock, TX (107410565) Lambda Electronics, Inc. (Unitech, PLC) McAllen, TX (107431975) Lauhoff Grain Co. (Bunge Corp.) Danville, IL (103304135) Lockheed (Lockheed-Martin Corp.) Aeronautical Systems (001874445)* Engineering & Science (123652711) Longmont Foods (ConAgra, Inc.) Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Longmont, CO (100747476) Lufkin Industries, Inc. Lufkin, TX (123565210) M & K Electrical Co., Inc. Pittsburgh, PA (108755588) M.R. Dillard Construction Co. Loretto, TN (114512635)= Marine Hydraulics International (Marine Hydraulics, Inc.) Norfolk, VA (102899580) Marley Cooling Tower Co., Inc. (United Dominion Industries, Ltd.) Needville, TX (123650103) Marriott Corp. (Host Marriott Corp.) Troy, OH (103275814) Mason Technologies, Inc. (The Mason Co.-Del) Ceiba, PR (106716202) Pleasantville, NY (110603289) Medline Industries, Inc. Mundelein, IL (103594396) Meinecke-Johnson Co. Fargo, ND (107119075) Metric Constructors (Philipp Holzman A.G.) Estill, SC (018112284) Misener Marine Construction, Inc. (Interbain) Ft. Myers, FL (109711606) Montgomery Elevator (Kone Holding, Inc.) Winfield, KS (103164935) Tampa, FL (106491350) Moon Engineering Co., Inc. Portsmouth, VA (102899499) Morrison-Knudsen Corp., Inc. Yonkers, NY (017651407) Mosler, Inc. (Kelso Investment Assoc. IV LP) Hamilton, OH (103275830) M.S.E. Corp. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Indianapolis, IN (115006017)= National Beef Packing Co. LP Liberal, KS (106629884)* National Fruit Produce Co., Inc. Winchester, VA (112376587)*= National Health Laboratories (National Health Labs Holdings) Uniondale, NY (107355133)* Neosho Construction (Neosho, Inc.) Riverside, CA (119959757)= New York Telephone Co. (NYNEX Corp.) New York, NY (108946708) Northern Indiana Public Service (NIPSCO Industries, Inc.) South Bend, IN (115002420)= Northwest Enviro Service, Inc. Seattle, WA (111284170)= Harrisburg, PA (109018937) Olin Corp. East Alton, IL (103279196) Packaging Corp. of America (Tenneco Packaging, Inc.) Griffith, IN (124068792)= Tama, IA (115064248)= P.C.L.-Harbert, Joint Venture (P.C.L. Enterprises) Denver, CO (100748110) Peace Industries, Ltd. Rolling Meadows, IL (103592515) Pennsylvania Power & Light Co. (Pennsylvania Power & Light Resources, Inc.) Williamsport, PA (109361659) Penrose Hospital (Sisters of Charity Health Care) Colorado Springs, CO (109544643) Perini Corp. 
New York, NY (106183445) Piquniq Management Corp. Kodiak, AK (108542259)= Pizzagalli Construction, Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Hanover, NH (100856921) PMX Industries, Inc. Cedar Rapids, IA (115054066)= Professional Ambulance Service (American Medical Response) Atlantic City, NJ (113960538) P.S.I. Energy-Gibson Generating (Cinergy Corp.) Owensville, IN (108563958)= Public Service Co. of Colorado Pueblo, CO (110534286) Pulse Electronics, Inc. Rockville, MD (119588481)= Purina Mills, Inc. (P.M. Holdings Corp.) Macon, GA (106513559) Liberal, KS (103164372) Oklahoma City, OK (108742081) Radiation Systems, Inc.-Univer (Comsat Corp. RSI) Green Bank, WV (101174506) Ralston Purina Co. Clinton, IA (115066870)= Redondo Construction Corp. Mayaguez, PR (119487999)= Reed & Reed, Inc. Saint Francis, ME (102748233) Rehrig International, Inc. Richmond, VA (123656555)= Troy, NY (108655804) Reynolds & Reynolds Co. Edison, NJ (002119352) Rhone Poulenc Basic Chemical (Rhone-Poulenc, Inc.) Martinez, CA (111995379)*= Martinez, CA (111996526)*= Rich Industries, Inc. New Philadelphia, OH (103040234) Richard F. Kline, Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) R.M.I. Co. Niles, OH (105924922) Roadway Express, Inc. Chicago Heights, IL (101313252) Oakville, CT (109828079) The Roof Doctor, Inc. Olympia, WA (111459855)= Weed, CA (111909560)= Roto-Rooter Services Co. (Roto-Rooter, Inc.) Baltimore, MD (119559649)= Rockford, IL (122098684) Rockford, IL (122108004) Schuck and Sons Construction Co., Inc. Indio, CA (112057690)= Sciaba Construction Corp. Shelburne Falls, MA (017826439) Scott Paper Co. (Kimberly-Clark) Chester, PA (102845120) Sears (Sears Roebuck & Co.) Auto Center Toledo, OH (110274198) Automotive Center Toms River, NJ (108665050) Roebuck & Co. Iowa City, IA (115054561)= Roebuck & Co. Automotive Springfield, MA (017828617) Sermetech International, Inc. (Teleflex, Inc.) Sugar Land, TX (123652174) Shasta Industries, Inc. Phoenix, AZ (115562290)= Shelby Williams Industries, Inc. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Shell Oil Co. Deer Park, TX (123652513) Wood River Roxana, IL (106552771)* Shirley Contracting Corp. Washington, DC (123503294) Siemens Energy & Automation (Siemens) Urbana, OH (103030086) Signature Flight Support Corp. Chicago, IL (103586947) Smith & Nephew Dyonics (Smith & Nephew PLC) Andover, MA (109622332) Smith & Wesson Co. (Tompkins Industries) Springfield, MA (102766664) The Smithfield Packing Co. (Smithfield Foods, Inc.) Landover, MD (119587681)= Snyder General Corp. (McQuay International) Verona, VA (123702128)= Spearin Preston & Burrows, Inc. New York, NY (017777251) S.S.I. Food Services, Inc. (Simplot J.R. Co.) Wilder, ID (110516986)* Stambaugh’s Air Service, Inc. Middletown, PA (109028738) Stevedoring (Cooper/T Smith Stevedoring, Inc.) Services of America Savannah, GA (106219967) Port Cooper Houston, TX (123653958) Stone Container Corp. Jacksonville, AR (107605776) Jacksonville, AR (110360427) Frenchtown, MT (100568815) Frenchtown, MT (107214314) Columbia, SC (120493994)= Stonhard Manufacturing Co., Inc. (R.P.M., Inc.) 
Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Maple Shade, NJ (106741531) Sun Chemical Corp. (Dainippon Ink & Chemicals, Inc.) Cincinnati, OH (103231833) Cincinnati, OH (103273041) Supreme Corp. (Supreme Industries, Inc.) Goshen, IN (108646167)= Swiftships Freeport, Inc. (Swiftships, Inc.) Freeport, TX (107491011) Texaco Refining (Texaco, Inc.) Los Angeles, CA (112076500)= Tower Construction Co., Inc. Mililani Town, HI (103887865)= Trataros Construction Co. New York, NY (107196248) Trident Seafoods Corp. Naknek, AK (109433052) Naknek, AK (124072521)= Trinity Industries, Inc. Longview, TX (109098921) Unifirst Corp. Springfield, MA (017828252) Union Camp Corp. Fine Paper Division Franklin, VA (112394796)= Savannah, GA (017403627) Union Pacific Railroad (Union Pacific Corp.) Green River, WY (114619042)= United Airlines (U.A.L. Corp.) Elk Grove Village, IL (102992112) Elk Grove Village, IL (103456794) Executive Office Elk Grove Village, IL (102992047) United Parcel Service (United Parcel Service Amer., Inc.) Mobile, AL (106092067) Commerce City, CO (109550491) Fort Collins, CO (100747146) Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Hartford, CT (123214074) Palm Bay, FL (109709279) Pinellas Park, FL (109709311) Earth City, MO (116103722) Jackson, MS (018135012) Manchester, NH (017902925) Twin Mountain, NH (108783929) Buffalo, NY (114098858) East Syracuse, NY (106898208)* Elmsford, NY (109916726)* Uniondale, NY (108664079)* Austin, TX (123432338) Mesquite, TX (107550857) Deerfield, FL (108995697) Miami, FL (110056421)* Linthicum Heights, MD (119554269)= Belton, TX (123426421) Bryan, TX (123424574) Corpus Christi, TX (107433583) Laredo, TX (107434243) San Antonio, TX (123432254) United Technologies Automotive (United Technologies Corp.) Unitog, Inc. Warrensburg, MO (115971475) Universal Maritime Service Corp. (Maersk, Inc.) Port Newark, NJ (017982646) Fort Lauderdale, FL (109689992) X Boise, ID (107234726) Vickers, Inc. (Trinova Corp.) Omaha, NE (109321687) Omaha, NE (109322974) Victory Corrugated Container Corp. Shipyards, marine terminals, longshoring (continued) Worksite (name of federal contractor if different) Location (IMIS activity number) Roselle, NJ (114039951) Vineland Kosher Poultry, Inc. Vineland, NJ (108666413) Vishay Intertechnology, Inc. Malvern, PA (102845518) Volunteers of America of Oklahoma (Volunteers of America, Inc.) Tulsa, OK (109060137) Wabash Valley Manufacturing, Inc. Silver Lake, IN (114974199)= Waste Management Disposal (W.M.X. Technologies, Inc.) Phoenix, AZ (115584815)= Weber Aircraft, Inc. (Zodiac, SA) Gainesville, TX (110372539) Weight Watchers Food Co. (Heinz, Inc.) Wethersfield, CT (102794856) West State, Inc. (W.S., Inc.) Portland, OR (110505344) Westinghouse Electric Corp. Birmingham, AL (106232804) Whirlpool Corp. Fort Smith, AR (110354784) Evansville, IN (123970469)= Willamette Industries, Inc. Witco Corp. Memphis, TN (120549472)= Yuasa-Exide, Inc. San Antonio, TX (123434094) Zeneca Resins (Imperial Americas) Wilmington, MA (109620831) Shipyards, marine terminals, longshoring *Assessed proposed penalty of $100,000 or more for safety and health violations. =Inspection conducted by a state-operated safety and health program. Table IV.1 identifies the 50 federal contractors that were assessed significant proposed penalties in an OSHA inspection in which a fatality or injury occurred. 
The location of the worksite inspected and the corresponding activity number for the inspection, as assigned in IMIS, are provided. The name of the federal contractor (or parent company) is identified if it is different from the name of the worksite where the violations occurred. In describing the fatality or injury, we referred to investigation summaries submitted by OSHA compliance officers or follow-up calls to area OSHA offices when other data in IMIS indicated an accident had occurred but no summary was available. The accident segment of IMIS provided counts for fatalities and injuries, which we supplemented with information obtained through our follow-up calls. We have reported only those standards violated that are associated with the highest actual penalty as well as standards that reportedly contributed to a fatality or injury when different from the former. Regardless, factors other than a company’s OSHA violations may have contributed to some of these fatalities or injuries, such as misjudgments by the worker or the worker’s failure to follow company safety practices. We have identified those inspections in which a proposed penalty of $100,000 or more was assessed with an asterisk and those inspections conducted by state-operated safety and health programs with a special symbol (=). Table IV.1: Fatalities and Injuries Associated With Inspections Involving 50 Federal Contractors Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 5 workers were hospitalized due to a fall when the floor of a building, which was not shored or braced, collapsed during demolition. 1 worker died, another was hospitalized, from exposure to blast furnace gas due to equipment failure at a steel mill. Means of egress; hazardous materials; personal protective equipment; general environmental controls; lockout/tagout; toxic and hazardous substances (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker was hospitalized and died 4 days later after being crushed in a weaving loom at this textile plant. 1 worker died from burns when a mixer containing aluminum powder exploded at this primary metals production plant. General duty clause; personal protective equipment 1 worker died due to a propane explosion when he entered a confined space, where the atmosphere had not been tested, with a lighted torch. Allied Tube and Conduit (Tyco International, Ltd.) 3 workers lost fingers or parts of fingers, and a fourth worker fractured several fingers. Their fingers were either crushed or cut by machinery at this electric wiring facility. A fifth worker was hospitalized after being pinned between a forklift and a parking cart. 1 worker died after he was crushed inside a truck that he operated for this metal smelting and refining plant. The truck ran off the road and rolled upside down, in part because his vision was obstructed due to the truck’s design. 9 workers were hospitalized for burns due to an explosion of a natural gas pipeline. (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker was hospitalized and 4 other workers were injured due to a flash fire in a tank. Sparks from a welding or cutting operation ignited gases in a pipe that was inadequately purged at this petroleum and natural gas facility.
Process safety management; standards of state-operated program 2 workers were hospitalized due to contact with a light pole that hit high voltage lines when they were reinstalling the pole for this power line construction company. Bell Helicopter Textron, Inc. (Textron, Inc.) 1 worker was killed and another hospitalized due to overexposure to sulfuric acid in a confined space. 1 worker died, 2 workers were hospitalized, due to gas exposure while doing maintenance work on a pipeline for this special trades contractor. Process safety management; personal protective equipment 1 worker was injured when he mixed together unmarked chemicals that subsequently exploded. The worker was cleaning at this poultry processing facility. Clean Harbors of Kingston, Inc. (Clean Harbors Environmental Services, Inc.) 1 worker died because his co-workers were unable to retrieve him from a tank containing a chemical sludge when his air supply ran low. He was cleaning the tank for this facility that provides refuse collection and disposal services. (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker drowned when he jumped off a barge without a life preserver because he was frightened when it began to rock back and forth. The rocking action started when a sling broke as workers were pulling pilings out of the channel for this demolition and wrecking company. Dunlop Tire Corp. (Sumitomo Rubber Industries, Ltd.) 1 worker, at this facility which produces tires, died when he placed fabric on a rotating cylinder, got caught in the machine, and asphyxiated after being wound up inside the fabric. 1 worker died from electric shock while checking fuses for this facility, which manufactures storage batteries. Exide Electronics Corp. (Exide Electronics Group, Inc.) 1 worker was hospitalized, at this company which produces transformers, due to electric shock while cleaning consoles with a liquid cleaner. The consoles were not disconnected from the power supply. 1 worker died from electric shock, at this pulp and paper mill, when a boiler precipitator within the power plant was not deenergized before he entered a confined space to work on it. Special industries; standard of state-operated program (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker died when splashed by hydrogen fluoride while he was manually dispensing the chemical from the bottom of the drum. This company produces measuring and controlling devices. Occupational health and environmental control; hazard communication standard 1 worker was burned while using a high pressure steam hot water hose while cleaning the potato peeler equipment at this food preparation facility. 1 worker died, at this facility which produces snack foods, when his neck was crushed while making adjustments to the waste conveyor system. He was working alone at this wastewater treatment plant. 1 worker was injured when a wall of an unshored trench collapsed. He was trying to install a saddle tap for this grading and pipeline company. 1 worker died when a reinforced concrete panel fell on him while he was unloading a semitruck transporting these panels to a highway construction site. Hawaii Electric Light Co., Inc. (Hawaii Electric Industries) 1 worker died from electric shock when disassembling a test transformer. The safety indicator was inoperable so he did not realize that the transformer was still energized. 
1 worker was killed when a forklift ran into him as he was directing another driver into position to load and unload goods on a pier for this marine cargo handling company. (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 2 workers were killed when trapped in a fire that erupted at this coke-making facility. Their supervisor killed himself several days later. Standards of state-operated program; hazardous materials; means of egress 1 worker died when he entered a drum to replace a faulty piece of equipment at this wood products facility. The drum, which was not deenergized or locked out, was inadvertently activated and the worker fell 14 feet into the conveyor system. Keebler Co. (United Biscuits Holdings PLC) 2 workers fractured a forearm and a finger, respectively, while cleaning conveyors at this facility which makes cookies and crackers. Standards of state-operated program; lockout/tagout 1 worker died and another was hospitalized when cleaning a grain bin for this grain mill products company. Both workers were drawn down into the grain bin, and the first suffocated. 1 worker died from electric shock while removing a compactor from between two energized conductors and inadvertently coming into contact with an energized line. Electrical; general safety and health provisions; power transmission and distribution 1 worker was hospitalized for head injuries when he fell 10 feet onto a concrete floor while working on reinforcing a railroad undercrossing. 1 worker died of electric shock when, for this plastering and drywall company, he mistakenly cut into electrical wiring. Pennsylvania Power & Light Co. (Pennsylvania Power & Light Resources, Inc.) (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker died of electric shock when installing underground electrical conductors in a new development. He attempted to connect a line he mistakenly thought was deenergized. General duty clause; power transmission and distribution 6 workers were hospitalized from smoke inhalation as a result of fighting a fire. Hydraulic oil caught fire at this metal smelting and refining plant. P.S.I. Energy-Gibson Generating (Cinergy Corp.) 2 workers were hospitalized due to burns. 20 workers were injured, although not hospitalized, as a result of smoke inhalation and cuts and bruises from falling debris. These workers were trying to fight the fire from a coal hopper explosion at this electrical services facility. Standard of state-operated program; personal protective equipment 1 worker died when he got caught in a bag-stacker machine while trying to free a jammed pallet without turning off the power. He inadvertently hit a switch, causing the machine to recycle at this animal feed manufacturing facility. Radiation Systems, Inc.-Univer (Comsat Corp. RSI) 1 worker died when he fell 120 feet from a platform that hit an object and tipped to the side as it was being lowered. This worker and 3 others on the platform were not tied off. This company is a special trades contractor in the construction industry. Rhone Poulenc Basic Chemicals (Rhone-Poulenc, Inc.) 1 worker died and another was hospitalized due to chemical burns when they mistakenly extracted a valve, releasing 80,000 gallons of acid sludge from a storage tank at this industrial chemicals facility.
(continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker died from electric shock when he reached into a press to do maintenance work and came into contact with a live electrical part. This facility manufactures protective clothing for the nuclear industry. 1 worker was hospitalized when he fell while working on a frame house for this company that builds residential buildings. The worker was leaning out from a 9-foot height while attempting to cut a roof joist when he slipped and fell to the cement porch below. 1 worker died from burns when trying to use acetone to remove standing water in a swimming pool for which he was preparing a fiberglass interior surface. The acetone vapors in the pool were ignited when he switched on a vacuum. The company is a special trades contractor. Electrical; occupational health and environmental controls 1 worker died and 2 were hospitalized from exposure to gas when one of them opened the flange of a pipeline while they were doing maintenance work at this petroleum refining facility. Process safety management; personal protective equipment 1 worker died when inflating a tire on a baggage trailer that transports luggage to and from the aircraft. The tube exploded and the rim struck the employee in the face, causing massive head injuries. The company provides airport terminal services. (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker died and another was hospitalized when trying to remove an engine from an aircraft. The 4,000-pound engine dropped on the chest of the first worker when the front chain of the mechanism used to remove the engine broke. The other worker was struck in the head by the mechanism itself. 1 worker died instantly when he was struck in the head by a 3-ton exhaust stack that was being positioned by a crane for sandblasting and painting, after being removed from a vessel. This facility is engaged in shipbuilding and repair. 10 workers were hospitalized for smoke inhalation and being struck by falling debris when a piping failure led to a petroleum explosion and fire at this petroleum refining facility. United Parcel Service (United Parcel Service Amer, Inc.) 2 workers were hospitalized from exposure to hazardous solvents that leaked from packages within the confined space of an airplane cargo hold. Means of egress; personal protective equipment 1 worker was hospitalized, and his hand and forearm amputated, when he got caught while manually feeding coil through a mechanical power press. The facility manufactures household refrigerators. 1 worker died when an object, which fell from the wall of a large vessel he was cleaning along with several other workers, crushed this worker. The facility manufactures hardwood veneer and plywood. Standard of state-operated program; walking-working surfaces (continued) Worksite (name of federal contractor if different) OSHA standard violated associated with highest actual penalty 1 worker was hospitalized from inhaling vapors released due to improper storage of chemicals at a facility which manufactures plastics and synthetic resins. Although all workers were evacuated, this worker went to search for a co-worker without personal protective equipment. Means of egress; hazardous materials; fire protection *Assessed proposed penalty of $100,000 or more for safety and health violations. 
=Inspection conducted by a state-operated safety and health program. Table V.1 provides information on federal contractors assessed a significant proposed penalty more than once in fiscal year 1994 for violations that occurred at different worksites owned by or associated with the same company. In a few cases, the federal contractor was assessed a significant proposed penalty more than once in fiscal year 1994 at the same or different worksites located in the same city. Our definition of a significant penalty is a proposed penalty of $15,000 or more regardless of the size of the actual penalty recorded when the inspection was closed (either because the employer accepted the citation or a contested citation was resolved). The proposed penalty is the penalty issued by OSHA in the original citation and reflects the compliance officer’s judgment of the nature and severity of violations. Inspections of these worksites are grouped by federal contractor (or parent company). The name of the federal contractor is identified if it is different from the name of the worksite where the violations occurred. Locations for the worksites inspected are provided, as well as the activity number of each inspection as assigned in IMIS. The primary industry of the worksite inspected is also provided, based on SIC codes in IMIS. Finally, the number of inspections closed in fiscal year 1994 in which a worksite owned by the same federal contractor was assessed significant proposed penalties is also provided. Table V.1: Federal Contractors Assessed Significant Proposed Penalties in More Than One Inspection Closed in Fiscal Year 1994 Location of inspection (IMIS activity number) Montgomery, IL (102997434); West Hazleton, PA (018226225) Philadelphia, PA (017999095) (018253054); Harvey, IL (103453387) Rockdale, TX (123431298); Massena, NY (106991326) Sparrows Point, MD (104383815) (119517068) Steel works, blast furnaces (including coke ovens), and rolling mills (continued) Location of inspection (IMIS activity number) Fairfield, NJ (101484780); Paterson, NJ (109043141) Pinon, AZ (002331478); Many Farms, AZ (002331486) Everett, WA (115506081); Ridley Park, PA (018253047) Rumford, ME (103392247) (102753969) (109793901); Horseshoe, ID (110502895) Enterprise, AL (109246249); Omaha, NE (109318873); Longmont, CO (100747476) San Juan, PR (106716145); Seattle, WA (109421685) Pasadena, TX (123653081); Tyler, TX (107555567) (103564449) Fort Wayne, IN (115017410); Oklahoma City, OK (108736869) Austin, TX (123579559) (123549917) Riegelwood, NC (018518670) (018518688) Hazelwood, MO (106547508); Lorain, OH (106123748) Motor vehicles and passenger car bodies (continued) Location of inspection (IMIS activity number) Allen Park, MI (110801305); Dayville, CT (109826248); Granite City, IL (103278982) Lordstown, OH (103217881) (108836552); Moraine, OH (103376422); Oklahoma City, OK (108743253); Oak Creek, WI (103472049) Brunswick, GA (109006700) (109006981); Palatka, FL (110133816); Mount Wolf, PA (109029520); Cedar Springs, GA (106213911) Tamaqua, PA (106472160); Wilmington, MA (109620831) Moss Point, MS (101391787) (101390235); Natchez, MS (107089484) (102677952); Cordele, GA (106441108); Jay, ME (018058123) Burbank, CA (001874445); Houston, TX (123652711) Winfield, KS (103164935); Tampa, FL (106491350) Elevators and moving stairways; installation or erection of building equipment (continued) Location of inspection (IMIS activity number) Griffith, IN (124068792); Tama, IA (115064248) Macon, GA (106513559); Liberal, KS (103164372); 
Oklahoma City, OK (108742081) Martinez, CA (111995379) (111996526) Chicago Hts, IL (101313252); Oakville, CT (109828079) Rockford, IL (122098684) (122108004) Toledo, OH (110274198); Toms River, NJ (108665050); Iowa City, IA (115054561); Springfield, MA (017828617) Deer Park, TX (123652513); Roxana, IL (106552771) Savannah, GA (106219967); Houston, TX (123653958) Jacksonville, AR (107605776) (110360427); Frenchtown, MT (100568815) (107214314); Columbia, SC (120493994) Plastics, foil, and coated paper bags; uncoated paper and multiwall bags; paperboard mills; paper mills; corrugated and solid fiber boxes (continued) Location of inspection (IMIS activity number) Cincinnati, OH (103231833) (103273041) Naknek, AK (109433052) (124072521) Savannah, GA (017403627); Franklin, VA (112394796) Elk Grove Village, IL (102992112) (103456794) (102992047) Courier services, except by air; air courier services; trucking, except local; terminal and joint terminal maintenance facilities for motor freight transportation; arrangement of transportation of freight and cargo (continued) Location of inspection (IMIS activity number) Uniondale, NY (108664079); Austin, TX (123432338); Mesquite, TX (107550857); Deerfield Beach, FL (108995697); Miami, FL (110056421); Linthicum Hts., MD (119554269); Belton, TX (123426421); Bryan, TX (123424574); Corpus Christi, TX (107433583); Laredo, TX (107434243); San Antonio, TX (123432254) Omaha, NE (109321687) (109322974) Fort Smith, AR (110354784); Evansville, IN (123970469) Hawesville, KY (123812786); Moncure, NC (111139390) In addition to those already named, the following individuals contributed to this report: Wayne J. Turowski, Computer Specialist, who provided programming support and analysis; Robert G. Crystal, Assistant General Counsel, who provided legal analysis; David Druid, Evaluator, who assisted with the audit work; Cheryl Gordon, Evaluator, who did some initial audit work; and William J. Carter-Woodbridge, Communications Analyst, who provided editing support. Worker Protection: Federal Contractors and Violations of Labor Law (GAO/HEHS-96-8, Oct. 24, 1995). OSHA: Potential to Reform Regulatory Enforcement Efforts (GAO/T-HEHS-96-42, Oct. 17, 1995). Workplace Regulation: Information on Selected Employer and Union Experiences (GAO/HEHS-94-138, Vol. I, June 30, 1994). Workplace Regulation: Information on Selected Employer and Union Experiences (GAO/HEHS-94-138, Vol. II, June 30, 1994). Occupational Safety and Health: Differences Between Programs in the United States and Canada (GAO/HRD-94-15FS, Dec. 6, 1993). Occupational Safety and Health: Changes Needed in the Combined Federal-State Approach (GAO/T-HRD-94-3, Oct. 20, 1993). Occupational Safety and Health: Uneven Protections Provided to Congressional Employees (GAO/HRD-93-1, Oct. 2, 1992). Occupational Safety and Health: Improvements Needed in OSHA’s Monitoring of Federal Agencies’ Programs (GAO/HRD-92-97, Aug. 28, 1992). Occupational Safety and Health: Worksite Safety and Health Programs Show Promise (GAO/HRD-92-68, May 19, 1992). Occupational Safety and Health: Options to Improve Hazard-Abatement Procedures in the Workplace (GAO/HRD-92-105, May 12, 1992). Occupational Safety and Health: Employers’ Experiences in Complying With the Hazard Communication Standard (GAO/HRD-92-63BR, May 8, 1992). Occupational Safety and Health: Penalties for Violations Are Well Below Maximum Allowable Penalties (GAO/HRD-92-48, Apr. 6, 1992).
Pursuant to a congressional request, GAO examined federal contractors' compliance with federal occupational safety and health regulations. GAO found that: (1) federal contracts are awarded to employers that violate the Occupational Safety and Health Act; (2) in fiscal year (FY) 1994, 261 federal contractors were assessed penalties of at least $15,000 for violating Occupational Safety and Health Administration (OSHA) regulations; (3) 5 percent of these contractors received more than $500 million in federal contracts; (4) the violations typically occurred at worksites with fewer than 500 employees and at manufacturing plants; (5) federal contractors received $38 billion in contract dollars for FY 1994; (6) most of the violations involved companies' failure to protect their workers from electrical hazards or injury; (7) the actual penalties assessed during contractor worksite inspections totaled $10.9 million; (8) in 8 percent of those inspections, the contractor received a penalty of at least $100,000; (9) some of the federal contractors participated in OSHA voluntary compliance programs; (10) contracting and debarring officials use OSHA safety and health compliance information to make their award decisions; and (11) federal contractors would be more attentive to their safety and health practices if OSHA gave greater priority to the high-hazard workplaces operated by federal contractors.
More than two dozen federal agencies use grants and other mechanisms to fund research at universities and colleges, as well as at other nonprofit and for-profit organizations, in support of agency missions related to public health, energy security, and space exploration, among others. NIH provides more than half of all federal funds for university and college research, and NSF, DOE, NASA, and other agencies provide the remaining funding (see fig. 1). OSTP is responsible for advising the President on the federal budget for research and shapes research priorities across agencies with significant portfolios in science and technology. OSTP also helps develop and implement government-wide science and technology policies and coordinate interagency research initiatives.

OMB is responsible for developing government-wide policies to ensure that grants—including grants for research and for other purposes such as housing, education, transportation, and health care—are managed properly and that grant funds are spent in accordance with applicable laws and regulations. For decades, OMB has published guidance in various circulars to aid grant-making agencies with such subjects as record keeping and the allowability of costs, which for research grants may include researcher salaries and wages, equipment, travel, and other costs. Congress may pass laws establishing additional reporting and oversight requirements on grant-making agencies and grantees. Funding agencies implement these requirements through regulations, agency guidance, and the terms and conditions of grant awards. In addition, funding agency offices of inspector general may conduct audits to evaluate grantee compliance with requirements. When audits result in findings of noncompliance, such as grantees charging unallowable costs to grants, grantees may need to repay funding agencies for these costs.

Competitively awarded federal research grants generally follow a life cycle comprising various stages—pre-award, award, post-award implementation, and closeout. For competitive research grant programs, in the pre-award stage, a funding agency notifies the public of the grant opportunity through an announcement, and potential recipients submit applications for agency review. In the award stage, the agency identifies successful applicants and awards funding. The post-award implementation stage includes payment processing, agency monitoring, and recipient reporting, which may include financial and performance information. Grant closeout includes preparation of final reports and financial reconciliation. Over this life cycle, applicants and recipients must complete various administrative tasks in order to comply with OMB and funding agency requirements, particularly in the pre-award and post-award implementation stages. See figure 2 for an overview of the administrative tasks associated with our nine selected categories of requirements across the grant life cycle.

Stakeholder organizations representing universities and federal agencies have raised concerns about the administrative workload and costs for complying with federal research requirements, and they have issued several reports with recommendations for agencies to modify requirements in order to achieve their goals while reducing administrative workload and costs. For example, in 2012, the Federal Demonstration Partnership surveyed principal investigators of federally funded research projects.
The report on this survey found that principal investigators estimated they spent, on average, 42 percent of their time meeting requirements—including those associated with pre- and post-award administration and preparation of proposals and reports—rather than conducting active research. However, the survey did not specify how much of this time was due to administrative tasks driven by university-specific processes or policies rather than federal requirements, or to nonadministrative tasks that contribute to the scientific aspects of the research, such as writing scientific material for proposals and reports. In addition, the survey did not include universities' administrative research staff members, who help universities comply with federal and other administrative requirements on research awards.

In March 2013, the National Science Board issued a request for information to identify which federal agency and institutional requirements contribute most to principal investigators' administrative workload, and it conducted a series of roundtable discussions with faculty and administrators. The board found that the most frequently cited areas associated with high administrative workload included financial management, the grant proposal process, progress and other outcome reporting, and personnel management, among others.

There has been a series of legislative and executive goals and directives for agencies to simplify aspects of the grants management life cycle and minimize administrative burden for grantees, particularly those that apply for and obtain grants from multiple federal agencies. Table 1 lists several of these goals and directives related to streamlining administrative grant requirements. There have also been several recent directives intended to strengthen accountability over federal funds. For example, Executive Order 13520 of November 20, 2009, adopts a set of policies for transparency and public scrutiny of significant payment errors throughout the federal government and for identifying and eliminating the highest improper payments.

In response to such streamlining and accountability directives, in December 2013, OMB consolidated its grants management circulars into a single document, the Uniform Guidance. The requirements in the Uniform Guidance apply broadly to different types of grantees—including state, local, and tribal governments, institutions of higher education, and nonprofit organizations—and different types of grants—including grants for research or other purposes. The Uniform Guidance is implemented through individual federal agency regulations that were to take effect no later than December 26, 2014. When issuing the final guidance, OMB stated that it would (1) monitor the effects of the reforms in the Uniform Guidance to evaluate the extent to which the reforms were achieving their desired results for streamlining and accountability and (2) consider making further modifications as appropriate.

Selected administrative requirements in OMB's government-wide grant guidance generally focus on protecting against waste, fraud, and abuse of funds. These include requirements we selected related to competing and documenting purchases, documenting personnel expenses, preparing and managing project budgets, reporting on subawards, and monitoring subrecipients. Selected administrative requirements in agency-specific guidance generally focus on promoting the quality and effectiveness of federally funded research.
These include requirements related to developing and submitting biographical sketches; mentoring and developing researchers; identifying, reporting, and managing financial conflicts of interest; and managing and sharing research data and results.

OMB developed the Uniform Guidance to (1) streamline OMB's guidance for federal awards to ease administrative burden and (2) strengthen oversight of federal funds to reduce risks of waste, fraud, and abuse. OMB developed the Uniform Guidance over more than 2 years, and it reflects input from federal agencies, auditors, and recipients of federal awards, which OMB solicited in an effort to balance its dual goals of streamlining and accountability. The Uniform Guidance includes provisions related to a range of administrative requirements on research grants, including ones we selected related to competing and documenting purchases, documenting personnel expenses, preparing and managing project budgets, subaward reporting, and subrecipient monitoring. OMB required each individual funding agency to implement the Uniform Guidance by adopting regulations that apply to the agency's awards. See appendix II for additional information on selected requirements in the Uniform Guidance. The requirements in the Uniform Guidance aim to protect against waste, fraud, and abuse in various ways, as follows.

Budgets. Funding agencies implement Uniform Guidance requirements for budget preparation and management by designing forms and processes to review applicants' requests for funding, and grantees' use of funding, to determine, among other things, whether costs are allowable. These requirements allow for identification of questionable requests for funding in applications or unallowable post-award charges to grants.

Personnel expenses. To document personnel expenses, grantees must maintain a system of internal controls over their records used to justify the costs of salaries and wages so these records accurately reflect the work performed. Salary and wage costs generally represent the largest portion of expenditures on research grants, according to agency officials, and the NSF and HHS offices of inspector general have reported on the need for oversight to prevent improper or fraudulent salary charges. For example, the NSF Office of Inspector General has documented instances of researchers charging their full-time salaries to federal grants at one university while simultaneously working full-time at another university or a for-profit company.

Purchases. To meet documentation requirements for purchases made with grant funds, grantees must maintain records detailing the procurement history for all purchases. Funding agencies and their inspectors general use such purchasing records for oversight, including detection and prosecution of fraudulent purchases. Audit reports by the NSF and HHS offices of inspector general have found instances of researchers using grant funds for personal purchases. In addition, the Uniform Guidance requires that purchases be conducted in a manner providing full and open competition and establishes five methods for purchasing goods or services. These methods include obtaining price or rate quotations, competitive bids, or competitive proposals for certain purchases; a simplified sketch of this threshold-based logic follows this list.

Subrecipients. Universities frequently collaborate with and provide federal research funds to other institutions, domestic and foreign, through subawards. Awarding agencies rely on grantees to monitor subrecipients to ensure that they use research funds for authorized purposes and stay on track to meet performance goals. In addition, requirements for grantees to report on their subawards provide agencies, Congress, and the public more information on subrecipients' use of taxpayer dollars.
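To make the threshold structure of these methods concrete, the sketch below shows how a university purchasing system might route a proposed purchase to a competition method based on its dollar amount. This is an illustrative sketch, not regulatory text: the threshold values and method labels are simplified placeholders, and an institution would substitute the values in effect under current federal and state rules.

MICRO_PURCHASE_THRESHOLD = 3_000            # placeholder value; set per current federal rules
SIMPLIFIED_ACQUISITION_THRESHOLD = 150_000  # placeholder value

def required_method(amount: float, sole_source_justified: bool = False) -> str:
    """Return the minimum competition method for a proposed purchase."""
    if sole_source_justified:
        return "noncompetitive proposal (document the sole-source justification)"
    if amount <= MICRO_PURCHASE_THRESHOLD:
        return "micro-purchase (no quotations required)"
    if amount <= SIMPLIFIED_ACQUISITION_THRESHOLD:
        return "small purchase (obtain price or rate quotations)"
    return "sealed bids or competitive proposals (formal competition)"

print(required_method(1_200))    # micro-purchase
print(required_method(40_000))   # small purchase: multiple quotations
print(required_method(500_000))  # formal competition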
Funding agencies have established administrative requirements—in some cases, in response to directives from Congress and OSTP—to promote the selection and development of qualified researchers, protect against bias in the conduct of research, and improve access to research results. Agencies implement these requirements through their grants guidance documents and the terms and conditions of their awards. See appendix II for additional information on selected agency-specific requirements.

Promoting the selection and development of qualified researchers. Funding agencies require applicants to submit biographical sketches so the agencies have the information they need to select well-qualified researchers. All four funding agencies in our review have agency-specific requirements for biographical sketches in their grants guidance, including requirements for applicants to list information about past publications and current and prior academic or professional positions. Also, to promote the professional development of researchers, two of the four agencies have requirements related to researcher development or mentoring plans. First, as directed in the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act of 2007, NSF requires that all proposals with postdoctoral researchers include a plan describing the mentoring to be provided to these researchers. Second, NIH encourages institutions to use individual development plans to identify and promote the career goals of graduate students and postdoctoral researchers associated with NIH awards, and it requires grantees using individual development plans to describe their use of these plans in annual progress reports.

Protecting against bias in the conduct of research. NASA, NIH, and NSF have implemented financial conflict of interest requirements to help protect against bias in the conduct of research, and DOE is in the process of establishing such requirements. For example, NIH and NSF require researchers to disclose, and universities to review, financial interests to identify potential conflicts, such as investments in or income from entities that might benefit from a research project (a simplified screening sketch follows this list). Since 1995, NIH-funded researchers have been subject to HHS financial conflict of interest regulations designed to promote objectivity. HHS revised its regulations in 2011 to address the growing size and complexity of biomedical and behavioral research and corresponding concerns about financial ties between researchers and industry—including pharmaceutical, medical device, and biotechnology companies. For example, congressional committee investigations had found cases of financial conflicts of interest that may have led to bias in NIH-funded research, including researchers failing to disclose substantial payments from drug and medical device companies. Similarly, in implementing its financial conflict of interest policy in 1994, NSF stated that it encourages the involvement of researchers and educators with industry and private entrepreneurial ventures but recognizes that these interactions are accompanied by an increased risk of conflicts of interest—a risk that its policy was intended to address.

Improving access to research results. In 2013, OSTP directed federal agencies to support increased public access to the results of federally funded research, including results published in peer-reviewed journals as well as digital data. According to the OSTP directive, policies that provide greater access to peer-reviewed publications and scientific data maximize the impact and accountability of the federal research investment. In response to this directive, agencies established requirements for researchers to develop and comply with data management plans that describe the scientific data to be collected and how the researcher will provide access to, and reliable preservation of, the data. All four funding agencies in our review require applicants to include data management plans in their proposals.
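As an illustration of the disclosure-and-review mechanics described above, the sketch below screens a researcher's disclosed financial interests against a dollar threshold to decide which items an institution's designated official should review. The $5,000 figure reflects the de minimis threshold commonly associated with the 2011 HHS rule, but it and the categories used here are simplified assumptions rather than the regulation's full definition of a significant financial interest.

from dataclasses import dataclass

DISCLOSURE_THRESHOLD = 5_000  # assumed de minimis value; verify against the current rule

@dataclass
class FinancialInterest:
    entity: str
    equity_value: float = 0.0
    income_received: float = 0.0
    sponsored_travel: bool = False

def needs_institutional_review(interest: FinancialInterest) -> bool:
    """Flag a disclosed interest for review by the institution's designated official."""
    if interest.sponsored_travel:  # travel disclosures are reviewed regardless of value
        return True
    return (interest.equity_value + interest.income_received) >= DISCLOSURE_THRESHOLD

disclosures = [
    FinancialInterest("DevicePharma Inc.", equity_value=12_000),      # hypothetical entity
    FinancialInterest("Conference host", sponsored_travel=True),
    FinancialInterest("Textbook royalties", income_received=900),
]
flagged = [d.entity for d in disclosures if needs_institutional_review(d)]
print(flagged)  # ['DevicePharma Inc.', 'Conference host']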
Selected universities and stakeholder organizations identified common factors that add to their administrative workload and costs for complying with selected requirements: (1) variation in agencies' implementation of requirements, (2) detailed pre-award requirements for applicants to develop and submit documentation for grant proposals, and (3) increased prescriptiveness of certain requirements.

At all six universities we selected for our review, officials told us that variation in funding agencies' implementation of certain administrative requirements included in our review contributes to workload and costs. For example, they said variation contributes to universities' costs because universities have to design and implement multiple processes and may need to invest in electronic systems to comply with agencies' requirements. It also contributes to the workload of researchers and administrative staff, who must spend time learning the different requirements, processes, and systems. Officials we interviewed from stakeholder organizations and the six universities cited variation in funding agencies' implementation of three categories of requirements in particular as adding to administrative workload and costs: developing and submitting biographical sketches; identifying, reporting, and managing financial conflicts of interest; and preparing and managing project budgets.

For example, the biographical sketches agencies require applicants to submit differ in formatting as well as in content, including the information applicants must provide on past publications, research collaborators, and academic positions. In addition, agencies' financial conflict of interest requirements differ in the types of financial interests that researchers must disclose to their institutions, the information that institutions must report to agencies, and requirements for training researchers on conflicts of interest. Agency implementation of budget preparation and management requirements differs in several ways, including the forms and level of detail required in proposed budgets and the systems for grantee financial reporting. In 2014, the National Science Board reported that faculty and administrative staff participating in roundtable discussions and responding to its request for information cited a lack of consistency and standardization within and among agencies in all aspects of grant management—including regulations, guidance, reporting requirements, forms and formatting, and electronic systems—as a substantial source of administrative workload and costs, resulting in a loss of research time. Appendix III provides detailed examples of the differences in agencies' implementation of selected requirements.
University officials we interviewed cited specific examples of increases in administrative workload and costs that resulted from variation in funding agencies' implementation of requirements:

Electronic systems costs. Universities have invested in electronic grant management systems for submitting grant applications and ensuring compliance with multiple agencies' application requirements. Variations in requirements can make it more difficult for applicants to comply, and applications can be rejected for noncompliance, including noncompliance with formatting requirements such as page lengths or fonts. Universities' systems help minimize such rejections by identifying noncompliant application elements prior to submission, according to university officials (a simplified sketch of such a pre-submission check follows this list). To address variation between NIH's and NSF's conflict of interest requirements, five universities in our review updated their electronic systems, for example, to allow researchers and administrative staff to differentiate the types and thresholds of financial interests required to be disclosed to different agencies, according to university officials.

Administrative staff workload and costs. Officials from the six universities in our review cited examples of investments in administrative staff that they made in part to address variation in agencies' implementation of requirements. For example, according to officials we interviewed, four universities in our review employ administrative staff members with specialized expertise in the policies and procedures of particular agencies to review proposals and help ensure compliance with those agencies' requirements. Universities' administrative staff members may also in some cases manage proposal processes for multiple agencies, so the universities need to help them build and maintain expertise in the agencies' various application systems and requirements, according to officials.

Researcher workload. Officials at the six universities in our review said that researchers must spend time learning different agencies' requirements and customizing and reformatting application materials for different agencies. For example, according to officials at the six universities, researchers spend time customizing the content, format, and length of biographical sketches to agency-specific requirements and learning how to comply with each agency's policies on what information to include in proposed budgets.
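The kind of pre-submission check described in the electronic systems discussion above can be sketched as a small validator that compares a draft application against per-agency formatting rules. The page limits, fonts, and sizes below are invented for illustration; the actual rules appear in each agency's application guide.

# Assumed, simplified format rules for two agencies (illustrative only).
AGENCY_RULES = {
    "NSF": {"max_pages": 15, "allowed_fonts": {"Times New Roman", "Arial"}, "min_font_size": 10},
    "NIH": {"max_pages": 12, "allowed_fonts": {"Arial", "Helvetica", "Georgia"}, "min_font_size": 11},
}

def check_proposal(agency: str, pages: int, font: str, font_size: int) -> list[str]:
    """Return formatting problems to fix before submission."""
    rules = AGENCY_RULES[agency]
    problems = []
    if pages > rules["max_pages"]:
        problems.append(f"project description exceeds {rules['max_pages']} pages")
    if font not in rules["allowed_fonts"]:
        problems.append(f"font '{font}' not allowed for {agency}")
    if font_size < rules["min_font_size"]:
        problems.append(f"font size below {rules['min_font_size']} pt")
    return problems

# Flags the page count and the font before the application is submitted.
print(check_proposal("NIH", pages=13, font="Times New Roman", font_size=11))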
Funding agencies require researchers to prepare detailed documentation—including proposed budgets, biographical sketches, information on subawards, data management plans, and in some cases information on conflicts of interest and researcher mentoring and development plans—and submit it to university administrators and agencies as part of the application process. Agencies require much of this information to help them select proposals for funding, according to agency officials and guidance. According to university officials we interviewed, developing this documentation is time-consuming and adds to universities' administrative workload and costs. Moreover, the likelihood of an agency selecting a proposal for funding is relatively low. For example, in fiscal year 2015, NIH awarded funding to 18 percent of applicants and NSF awarded funding to 24 percent of applicants—similar to funding rates in other years. As a result, for most grant proposals, universities' investment of time and resources does not result in their receiving research funding.

According to officials from five of our selected universities, as well as reports from stakeholder organizations, pre-award requirements are one of the main sources of frustration and administrative workload and costs among researchers and administrative staff. The National Science Board reported in 2014 that faculty responding to its request for information cited the proposal and submission process, including preparing supporting documentation, as one of the grants management areas that contributed most to their administrative workload. For example, in response to the National Science Board's request for information, the Federation of American Societies for Experimental Biology surveyed researchers, lab workers, and administrative staff and found that the respondents cited grant proposal preparation and submission as the greatest source of administrative burden out of the 15 categories of burden in the survey. Researchers and administrative staff at the six universities in our review told us that during the pre-award stage, there can be a relatively high level of uncertainty about specific details of a research project, including detailed budget information about potential vendors or travel costs, expected research data and results, and planned contributions by postdoctoral or graduate researchers. They said that complying with agencies' requirements to prepare and submit documents at a stage when these details remain uncertain is not an efficient use of their time. Similarly, the Federation of American Societies for Experimental Biology reported that difficulty in accurately predicting detailed research budgets when submitting a proposal was specifically raised as a source of administrative burden in comments on its survey.

Recent OMB and HHS policy reforms have resulted in changes to selected requirements that have made them more prescriptive from the standpoint of universities and that, according to university officials, have added to universities' administrative workload and costs. Specifically, the Uniform Guidance—which was intended in part to better protect against waste, fraud, and abuse of grant funds—included revised requirements for competition and documentation of purchases that were more prescriptive than those in OMB's prior circular that applied to universities. In addition, in 2011, HHS revised the regulations governing financial conflicts of interest—which apply to research funded by NIH and several other HHS agencies—to address concerns about the objectivity of the research it funds. These revisions included more prescriptive requirements for, among other things, the types of financial interests researchers must disclose. See table 2 for requirements that have become more prescriptive under recent reforms.

Officials at universities in our review stated that the more prescriptive requirements add to universities' workload and costs when, for example, new or updated systems and processes must be implemented. Officials cited the following examples of needing to implement new or updated systems and processes to comply with the more prescriptive requirements:

Officials at all six universities told us that they expect the new purchasing competition and documentation requirements—particularly the new micro-purchase threshold for obtaining price or rate quotations from multiple vendors—will result in added costs for updating their electronic purchasing systems.
For example, officials at five of the universities in our review told us that, prior to the Uniform Guidance, they had established a higher threshold for obtaining multiple quotations than the one the Uniform Guidance sets, and that there will be a large increase in the number of transactions exceeding the new threshold. The grantee community raised concerns to OMB about not being adequately prepared to comply with the more prescriptive purchasing requirements, and OMB delayed implementation of the purchasing requirements for 2 years.

Five of the universities in our review developed and implemented a new electronic system to comply with NIH's revised conflict of interest requirements, according to university officials. Similarly, officials from the Association of American Medical Colleges who are studying the effect of NIH's conflict of interest requirements told us that institutions have reported incurring costs to implement processes and systems, such as financial interest-tracking software, to comply with the new requirements.

Universities have had to hire and train staff to comply with the more prescriptive requirements, according to officials at the six universities in our review. Officials at four universities said they expect to hire staff to handle the added workload resulting from an increased volume of purchases subject to OMB's revised purchasing competition and documentation requirements. In addition, officials from all six universities said that they provided additional training to researchers on NIH's conflict of interest requirements—as required by the revised rule—and officials from three universities said that each university hired an additional administrative staff member to manage the overall process for reviewing and reporting on financial conflicts of interest.

In contrast with its revised purchasing requirements, OMB largely maintained existing subrecipient monitoring requirements in the Uniform Guidance. Nevertheless, according to officials from universities and stakeholder organizations we interviewed, the prescriptive nature of the subrecipient monitoring requirements adds to universities' administrative workload and costs. Under these requirements, grantees have the flexibility to conduct some monitoring activities, such as on-site reviews or subrecipient training, as they determine appropriate based on their assessment of a subrecipient's risk of misusing grant funds. However, the Uniform Guidance requires grantees to (1) follow up and ensure that every subrecipient, regardless of risk, takes timely and appropriate action on all deficiencies pertaining to the subaward detected through audits, on-site reviews, and other means and (2) issue management decisions for such deficiencies. University officials we interviewed said that to meet these requirements, they may have to review audits of hundreds of subrecipients each year, including lengthy audits of state governments for subawards provided to public universities. Officials from universities and stakeholder groups we interviewed said that much of the administrative workload and costs for complying with the audit review and follow-up requirements is unnecessary, particularly for low-risk subrecipients such as those with histories of successfully conducting federally funded research.

In some cases—particularly for universities subject to state requirements—the revised requirements did not substantially add to universities' administrative workload and costs.
The three public universities in our review have had to comply with state requirements related to purchasing or conflicts of interest that were already more stringent than federal requirements in some ways. For example, officials at one public university told us that the university was well positioned to comply with NIH's conflict of interest requirements because it already had processes in place to comply with more stringent state conflict of interest requirements.

Agency officials said that some of universities' administrative workload and costs may be due to universities' interpretations of requirements that are stricter than agencies intended. For example, OMB staff said grantees do not have to review audits of subrecipients' full financial statements and internal control systems, since the Uniform Guidance requires grantees to follow up and issue management decisions only for audit findings that are related to their subaward. However, officials from universities and stakeholder groups said that universities are concerned that they need to interpret and comply with requirements to the standards they believe agency inspectors general may apply in an audit. These officials cited recent audit reports by the HHS and NSF offices of inspector general that found universities had charged unallowable or questionable costs to research grants. Some of these audit findings stemmed from differences in how auditors, agencies, and universities interpreted requirements.

OMB and the four research funding agencies in our review have made continuing efforts to reduce universities' administrative workload and costs for complying with selected requirements. These efforts have included (1) standardizing requirements across agencies, (2) streamlining pre-award requirements, and (3) in some cases allowing universities more flexibility to assess and manage risks for some requirements. In each of these areas, OMB and agency efforts have resulted in some reductions to administrative workload and costs, but these reductions have been limited.

OMB and funding agencies have made several efforts to reduce grantees' administrative workload and costs by standardizing selected requirements, in accordance with federal goals, and several of these efforts are ongoing. The Federal Financial Assistance Management Improvement Act of 1999 was enacted in part to improve the effectiveness and performance of federal financial assistance programs and facilitate greater coordination among those responsible for providing such assistance. For example, the act, which expired in 2007, required agencies to establish a common application reporting system, including uniform administrative rules for federal financial assistance programs. More recently, Executive Order 13563 called for agencies to coordinate and harmonize regulations to reduce compliance costs. In addition, in 2003 OSTP established the Research Business Models working group (RBM)—which consists of officials from DOE, NASA, NIH, NSF, and other federal research funding agencies—to facilitate coordination across these agencies. RBM's charter calls for it to examine opportunities and develop options to unify agency research grants administration practices and to assess and report periodically on the status, efficiency, and performance of the federal-academic research partnership.

In accordance with such federal goals, OMB-led efforts to standardize selected requirements—particularly requirements for budget preparation and management—include the following:
Grants.gov. In 2003, OMB created Grants.gov—a common website for federal agencies to post discretionary funding opportunities and for grantees to find and apply for them. Intended in part to simplify the grant application process and save applicants costs and time, Grants.gov allows for standard government-wide submission processes and forms for research grants.

Standardization of financial and performance reporting forms. As discussed previously, in December 2013, OMB consolidated its grants management guidance into a single document, the Uniform Guidance, which established standard requirements for financial management of federal awards across the federal government. In particular, it generally requires the use of OMB-approved government-wide standard forms for reporting financial and performance information.

Digital Accountability and Transparency Act pilot program. The Digital Accountability and Transparency Act of 2014 requires OMB to establish a pilot program to identify ways to standardize financial and other information that recipients of federal awards are required to report to agencies across the government, among other things. This pilot is ongoing and includes testing approaches to (1) allow grant recipients to submit financial reports in one central system and (2) develop consistent government-wide financial and other terms and definitions to simplify recipient reporting and help agencies create information collection forms.

In addition, research funding agencies have led several efforts through RBM to standardize selected requirements, including the following:

Federal research terms and conditions. In 2008, RBM developed a standard core set of administrative terms and conditions for research grants, which implemented OMB's grants management guidance in effect at that time. The research terms and conditions included standard provisions related to some selected post-award requirements, such as budget management and financial reporting. In 2014, RBM began a process to develop a revised set of standard terms and conditions to apply to research grants subject to OMB's revised requirements under the Uniform Guidance. Agency officials said they estimate that the revised standard terms and conditions will be issued in late 2016 or early 2017.

Research Performance Progress Report. In 2010, RBM issued, and OSTP and OMB directed agencies to implement, the Research Performance Progress Report, a uniform format for post-award performance reporting for federally funded research projects. The report is intended to reduce recipients' administrative workload by standardizing the types of information required in interim performance reports, such as budget information. In 2015, RBM drafted a revised version of the Research Performance Progress Report, which is to be used for both interim and final reports.

SciENcv. In 2013, research funding agencies worked under RBM's direction to develop SciENcv, a central electronic portal where researchers can assemble biographical information, intended to reduce the administrative workload and costs associated with creating and maintaining federal biographical sketches. Initially designed for NIH applications, SciENcv is currently being expanded to allow researchers to generate and maintain biographical sketches for multiple agencies, including NSF, in the formats required by those agencies.

See appendix IV for more information on OMB and funding agency efforts to standardize selected administrative requirements.
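Conceptually, the SciENcv approach described above amounts to maintaining one canonical researcher profile and rendering it into each agency's required biosketch format. The sketch below illustrates that idea; the profile fields, headers, and publication limits are invented for illustration and do not reflect any agency's actual specification.

# A hypothetical canonical profile maintained in one place.
profile = {
    "name": "A. Researcher",
    "positions": ["Professor, State University, 2015-present"],
    "publications": [
        "Paper One (2020)", "Paper Two (2019)", "Paper Three (2018)",
        "Paper Four (2017)", "Paper Five (2016)", "Paper Six (2015)",
    ],
}

# Assumed per-agency rendering rules (illustrative only).
FORMATS = {
    "NSF": {"max_publications": 5, "header": "Biographical Sketch"},
    "NIH": {"max_publications": 4, "header": "Biosketch"},
}

def render_biosketch(profile: dict, agency: str) -> str:
    """Render the canonical profile into one agency's assumed format."""
    fmt = FORMATS[agency]
    lines = [f"{fmt['header']} ({agency})", profile["name"]]
    lines += profile["positions"]
    lines += profile["publications"][: fmt["max_publications"]]
    return "\n".join(lines)

print(render_biosketch(profile, "NSF"))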
However, OMB's efforts to standardize requirements did not fully address the variations in requirements, thereby limiting the potential reductions in universities' administrative workload and costs. For example, the Uniform Guidance does not prohibit agencies from varying in their implementation of aspects of budget preparation and management requirements. Specifically, as previously discussed, the four funding agencies in our review vary in the forms and level of detail required in proposed budgets, their systems for financial reporting, and other aspects of budget preparation and management requirements.

Similarly, research funding agency and OSTP efforts have not fully addressed variation in requirements. For example, (1) RBM has not initiated a process to standardize pre-award requirements (its standard terms and conditions and Research Performance Progress Report both focus on post-award requirements); (2) SciENcv provides a central system for assembling biographical sketches, but it does not provide standardized formats and content and it has not been adopted outside of NIH and NSF; and (3) RBM's efforts to standardize research terms and conditions, both prior to and following the issuance of the Uniform Guidance, allow for agency-specific variations. For example, according to officials drafting the revised research terms and conditions, RBM considered establishing a standard 120-day deadline for institutions to submit final reports required for closing out grants—an increase over the 90-day deadline some agencies had previously established. However, the officials said that some agencies indicated they would not increase their closeout deadlines beyond 90 days. The officials said that to gain these agencies' agreement to use the standard terms and conditions, the terms and conditions will allow deviations from the standard closeout time frames.

According to OMB staff and funding agency officials, several factors can limit agencies' ability to standardize administrative requirements on research grants. First, funding agencies must comply with differing statutory or other requirements, which can result in differences in their requirements for grantees. For example, NIH must comply with HHS's regulations on conflict of interest requirements and is limited in how it can change its conflict of interest requirements to align with those of other agencies without HHS amending its regulations. Second, there are differences in the types of research or recipients agencies fund that can limit their ability to standardize requirements. For example, the types of data that research projects generate, and the constraints on sharing such data, can vary depending on the type of research universities are conducting. Researchers may not be able to share personally identifiable medical data as they would other types of data, for instance. These differences can limit agencies' ability to standardize requirements related to data management and sharing, according to agency officials.

Nevertheless, agencies have opportunities to standardize requirements to a greater extent than they have already done. In particular, they have flexibility in how they implement certain aspects of selected requirements that are not subject to statutory or other requirements or to agency-specific differences in types of research or grant recipients.
According to some funding agency officials we interviewed, aspects of requirements where agencies have such flexibility include, for example, the format and content of biographical sketches, the budget forms and content of budget justifications that agencies require in applications, and the types of budget revisions agencies allow grantees to make without obtaining prior approval. Officials at NSF, NIH, and OSTP who co-chair RBM told us that the group has been fully occupied with ongoing efforts related to developing standard research terms and conditions and the Research Performance Progress Report. RBM officials leading these efforts said that they expect them to be complete in late 2016 or early 2017 and that RBM is well suited to pursue further efforts to standardize requirements and to report on its efforts. Such efforts could help ensure that agencies do not miss opportunities to reduce universities' administrative workload and costs and to improve their oversight of funding and support of research quality.

DOE, NASA, NIH, and NSF have made efforts to reduce pre-award administrative workload and costs associated with proposal preparation by postponing certain requirements until after a preliminary decision about an applicant's likelihood of funding. These efforts require applicants to provide a limited set of application materials—often referred to as a preliminary proposal—for initial evaluation before possible submission of a full proposal. Preliminary proposals are intended, in part, to reduce applicants' administrative workload and costs when applicants' chances of success are very small. Such efforts are in line with RBM's charter, which calls for agencies to identify approaches to streamline research grants administration practices. Furthermore, several organizations representing federal agencies and university researchers, including the National Science Board and the Federation of American Societies for Experimental Biology, have recommended such efforts to streamline proposal processes. For example, according to findings from the National Science Board's request for information, respondents suggested that much of the information agencies required at proposal submission may not be necessary, and the board recommended that agencies modify proposal requirements to include only information needed to evaluate the merit of the proposed research and make a funding determination.

The funding agencies in our review implement a range of preliminary proposal processes, which can involve postponing requirements related to budget preparation, biographical sketches, data management plans, and researcher mentoring and development plans. For example, NSF's preliminary proposals generally include a four-page project description and a one-page description of project personnel, among other elements, but may not include budgets, budget justifications, data management plans, or postdoctoral mentoring plans. NIH's "just-in-time" process allows some elements of an application to be submitted after the application has gone through initial peer review and received a qualifying score from the peer review panel. For example, certain data management plans can be submitted at the just-in-time stage, but other information, such as budgets and biographical sketches, must generally be submitted with the initial application. In some cases, agencies use peer reviewers to evaluate preliminary proposals and make binding decisions as to whether applicants can submit full proposals. In other cases, agency program officers evaluate preliminary proposals and provide feedback either discouraging or encouraging applicants to submit full proposals.
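A rough way to see when a preliminary stage reduces total applicant effort is to compare expected preparation hours with and without one. In the sketch below, the applicant counts, hours per proposal, and the fraction of applicants invited to submit full proposals are all invented for illustration; none are agency figures.

def total_prep_hours(applicants: int, full_hours: float,
                     prelim_hours: float = 0.0, pass_rate: float = 1.0) -> float:
    """Total applicant preparation hours for a solicitation.

    With no preliminary stage, every applicant writes a full proposal.
    With one, everyone writes a short preliminary proposal and only the
    fraction invited forward (pass_rate) writes a full proposal.
    """
    return applicants * prelim_hours + applicants * pass_rate * full_hours

# Assumed numbers: 500 applicants, 100 hours per full proposal,
# 20 hours per preliminary proposal, 25 percent invited forward.
print(total_prep_hours(500, full_hours=100))                                   # 50000.0
print(total_prep_hours(500, full_hours=100, prelim_hours=20, pass_rate=0.25))  # 22500.0

# A preliminary stage saves effort only when
# prelim_hours < (1 - pass_rate) * full_hours, which is why it can add
# workload for solicitations where most applicants advance to full proposals.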
See appendix IV for more information on funding agency efforts to streamline selected pre-award administrative requirements through preliminary proposals.

According to university officials, stakeholder organizations, and information from the four funding agencies in our review, efforts to postpone the timing of certain pre-award requirements have generally led to reductions in universities' administrative workload and costs. For example, one NSF division evaluated its preliminary proposal pilot in 2014 and reported that the pilot reduced applicant workload by lessening the number of proposal pages researchers needed to write and simplifying the documents university administrative offices required of applicants, since preliminary proposals do not include budgets. According to NSF data, NSF received approximately 4,900 preliminary proposals in fiscal year 2014 and discouraged or barred applicants from submitting full proposals for more than 3,700 of them. As a result, those applicants avoided the administrative workload and costs of preparing full budgets and other documentation for proposals that would not be funded.

Officials from the six universities in our review said that application processes that allow researchers to spend more of their pre-award time developing and describing the scientific and technical aspects of the proposed research were a more efficient use of their time than developing detailed budgets or other information that agencies may not need to make an initial funding decision and that may change by the time the research is conducted. For example, as noted above, staff at the six universities told us that budget details such as potential vendors or travel costs, or other details such as expected research data and results or planned contributions by postdoctoral researchers, are often not known with certainty at the pre-award stage. Similarly, according to findings from the National Science Board's request for information, respondents suggested that the administrative workload of both applicants and reviewers can be substantially reduced through the use of preliminary proposals and other approaches for postponing submission of information.

However, agencies have not extended these pre-award streamlining efforts to all grant solicitations for which they could be used to reduce workload and costs. In addition, for certain requirements, agencies still require documentation that they may not need to effectively evaluate initial proposals. For instance, NIH's just-in-time process does not generally postpone requirements for proposed budgets, disclosure of significant financial interests, or biographical sketches, among others—requirements that other agencies have determined are not necessary for preliminary proposals. In addition, pre-award streamlining efforts at DOE, NASA, and NSF are limited to certain offices or certain programs within the agencies, in some cases because the efforts are still in pilot phases. Partly in response to the National Science Board's 2014 recommendations to reduce administrative workload by expanding the use of preliminary proposals or just-in-time submissions, NSF took steps to identify opportunities for expanding pre-award streamlining efforts agency-wide.
Specifically, in 2015, NSF senior leadership directed officials from NSF's directorates to review and identify options to reduce researchers' administrative workload and costs, including by expanding use of preliminary proposals and by focusing application reviews on a minimum set of elements that are needed to meet NSF's two merit review criteria: (1) intellectual merit and (2) broader impact, which encompasses the potential benefit to society. As a result of the directive, three NSF directorates expanded their use of preliminary proposals, for instance, by piloting efforts to postpone requirements to submit detailed budgets until proposals are recommended for award. DOE, NASA, and NIH have not conducted similar agency-wide reviews to identify opportunities for reducing administrative workload and costs by expanding their use of preliminary proposals or just-in-time submissions, according to agency officials. Such reviews may help ensure that agencies do not miss opportunities to reduce unnecessary pre-award administrative workload and costs for applicants that do not receive awards.

According to funding agency officials we interviewed, preliminary proposals may not be effective in reducing administrative workload and costs for certain solicitations or certain research grant programs. For example, DOE officials said they do not use preliminary proposals for certain specialized grant programs in fields with a small number of scientists who are likely to apply. Similarly, NSF officials said that preliminary proposals can create additional workload and costs for solicitations where the large majority of applicants go on to submit full proposals. Officials from DOE and NASA also said that researchers value the opportunity for peer review and feedback on their full proposals because it helps them improve their future applications. In addition, agency regulations may establish time frames that prevent postponing certain requirements until a smaller pool of likely awardees has been identified. For instance, under the HHS regulations governing NIH's financial conflict of interest requirements, researchers who have not previously disclosed their significant financial interests must do so no later than the time of application for NIH funds. However, Executive Order 13563 directs agencies to identify and consider regulatory approaches that reduce burdens and maintain flexibility. For research grant requirements, such approaches could include modifying regulations to allow for postponing pre-award requirements. Coordinating and reporting on opportunities agencies have identified for expanded use of preliminary proposals would be in line with RBM's charter.

OMB and funding agencies have made efforts, in accordance with federal goals, to reduce administrative workload and costs by allowing universities more flexibility to assess and manage risks related to certain administrative requirements. Executive Order 13563 calls for agencies to identify and consider regulatory approaches that reduce burdens and maintain flexibility for the public.
Accordingly, one of OMB's stated objectives for its reforms in the Uniform Guidance was "focusing on performance over compliance for accountability." For example, in its statements in the Federal Register accompanying the final Uniform Guidance, OMB reiterated its commitment to allow recipients of federal awards the flexibility to devote more effort to achieving programmatic objectives rather than complying with complex requirements, such as by reforming requirements that are overly burdensome. Efforts by OMB and the funding agencies in our review to allow universities more flexibility to assess and manage risks related to administrative requirements—particularly requirements for budget preparation and management and documentation of personnel expenses—include the following:

Expanded authorities. OMB revised its grants guidance in the 1990s to allow "expanded authorities" for grant recipients. The expanded authorities allowed funding agencies to waive requirements for recipients to obtain agencies' prior written approval before making certain changes to project budgets, such as rebudgeting funds across budget categories and carrying forward unobligated balances to later funding periods. Under RBM's 2008 standard terms and conditions that implemented that guidance, DOE, NASA, NIH, and NSF waived many requirements for recipients to obtain prior approvals for budget revisions. Agency officials said that since the issuance of the Uniform Guidance they are continuing many of these waivers.

Revised requirements for documenting personnel expenses. In the Uniform Guidance, OMB modified requirements for documenting personnel expenses to focus on establishing standards for recipients' internal controls over salary and wage expenses, without prescribing procedures grantees must use to meet the standards. OMB expected this change to reduce grantees' administrative workload and costs by allowing them the flexibility to use internal controls that fit their needs. In 2011, prior to the Uniform Guidance, four universities, in coordination with the Federal Demonstration Partnership and research funding agencies, began piloting a new method for documenting salary and wage charges to federal awards, known as payroll certification. OMB and the offices of inspector general at NSF and HHS agreed that the pilot would include subsequent audits by the offices of inspector general in order to evaluate the results.

Modular budgets. In 1999, NIH implemented modular budgets, which generally apply to all NIH research grant applications requesting up to $250,000 per year. NIH allows recipients to request budgets in $25,000 increments—or "modules"—and decide after receiving an award whether to establish detailed budgets or to continue budgeting in $25,000 increments (a simplified sketch of this arithmetic follows below). In addition, under modular budgets, NIH allows applicants to provide more limited narratives to support certain budget line items than they would provide under non-modular budgets.

See appendix IV for more information on OMB and funding agency efforts to allow flexibility for grantees related to selected administrative requirements.
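The modular budgeting arithmetic described above can be sketched in a few lines: an estimated annual direct-cost figure is expressed in whole $25,000 modules, up to the $250,000 annual level to which modular budgets generally apply. The rounding-up behavior here is an assumption for illustration, not NIH policy.

import math

MODULE = 25_000
ANNUAL_CAP = 250_000  # modular budgets generally apply up to this amount per year

def modular_request(estimated_direct_costs: float) -> int:
    """Express an estimated annual direct-cost figure in whole $25,000 modules."""
    if estimated_direct_costs > ANNUAL_CAP:
        raise ValueError("Above the modular cap; a detailed (non-modular) budget applies.")
    modules = math.ceil(estimated_direct_costs / MODULE)
    return modules * MODULE

print(modular_request(187_400))  # -> 200000 (8 modules)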
OMB's and funding agencies' efforts to allow universities more flexibility have led to reductions in administrative workload and costs. For instance, officials from the four funding agencies and six universities in our review generally agreed that OMB's expanded authorities reduced grantees' administrative workload and costs associated with post-award budget revisions. In addition, officials from both universities in our review that piloted a payroll certification system said that it reduced by more than 80 percent the number of forms that principal investigators needed to review, with corresponding reductions in the time needed to develop and process these forms. Officials from both universities also said the time and costs of training staff were lower under the pilot, because fewer people were responsible for certifying payroll reports than had been responsible for certifying effort reports, and the concept of payroll certification is easier to understand than effort reporting. Furthermore, agency inspector general audits of two of the universities participating in the pilot found that the universities' implementation of payroll certification did not weaken accountability over federal funds for salaries and wages; an audit of the third university was inconclusive, and the fourth audit report had not been issued as of April 2016.

In April 2016, OMB staff said other reforms in the Uniform Guidance also reduced administrative workload and costs by providing universities and other grantees more flexibility. For example, the Uniform Guidance includes provisions specifically allowing the use of fixed amount awards—grant agreements for which accountability is based primarily on performance and results rather than accounting for incurred costs—which OMB staff said can reduce administrative workload and costs, for example, for submission of invoices by the fixed amount award recipient. Also, in the Uniform Guidance, OMB clarified its prior guidance by detailing the conditions under which grantees may directly charge administrative support costs to grants—rather than being reimbursed for these costs as part of their indirect (or overhead) costs. OMB staff said this change reduced administrative workload and costs by better allowing universities to assign administrative staff to specific research projects so that researchers can focus more of their time on the scientific aspects of the projects. However, fixed amount awards and direct charging of administrative support costs were both allowed under certain circumstances prior to the Uniform Guidance, and we did not specifically discuss these reforms with universities, so we do not know to what extent universities believe the reforms reduced their administrative workload and costs.

Despite efforts to allow universities more flexibility, as previously discussed, several administrative requirements—in particular, OMB requirements related to purchases and subrecipients and NIH requirements related to financial conflicts of interest—limit universities' flexibility and require them to allocate administrative resources toward oversight of lower-risk purchases, subrecipients, and financial interests. These requirements limit universities' flexibility in the following ways:

Competition and documentation of purchases. In developing the Uniform Guidance, OMB established the micro-purchase threshold—above which grantees must generally obtain price or rate quotations, competitive bids, or competitive proposals—based on the threshold for competition of purchases made under federal contracts. University officials said that prior to the Uniform Guidance, the universities had set their thresholds based on consideration of the potential savings and administrative costs of competition or, in the case of public universities, state requirements.
As previously discussed, officials at five of the universities in our review told us that they had each established a higher threshold than the Uniform Guidance for obtaining multiple quotations. Furthermore, officials from the six universities in our review said that for relatively small purchases, the administrative workload and costs associated with competition may outweigh the savings gained. (A simple sketch of this threshold logic appears at the end of this discussion.)

Monitoring subrecipients. In developing the Uniform Guidance, OMB largely based its subrecipient monitoring requirements on those in its prior guidance and did not provide certain flexibilities to grantees to assess and manage risks. Specifically, the Uniform Guidance allows grantees to use a risk-based approach to monitor subrecipients, but it does not allow a risk-based approach to following up on audit findings that pertain to the subaward. The requirement for a university to follow up on audit findings is not risk based in that it applies to all subrecipients, regardless of their risk as assessed by the university. Officials we interviewed from the six universities in our review and stakeholder organizations generally agreed that administrative resources spent reviewing and following up on audits of low-risk subrecipients, such as those that have long track records of conducting federally funded research, could be better targeted on monitoring higher-risk subrecipients. These officials also noted that because the Uniform Guidance requires universities to review financial and performance reports and perform other project-level oversight of subrecipients, following up on audit findings may add little protection against improper use of funds and poor performance. OMB staff said that they have drafted an audit reporting form that universities can use to reduce the workload of reviewing subrecipients' audit reports. However, the form had not been issued as of April 2016, and the draft form does not change the requirement for universities to follow up on audit findings for all subrecipients, regardless of risk.

Identifying and managing researcher financial conflicts of interest. Under the HHS regulations governing NIH's conflict of interest requirements, researchers must disclose to their institution a range of financial interests held by them, their spouses, or their dependent children. These financial interests include investments in or income from a company involved in similar research, patents or copyrights that generate income for the researcher, or reimbursed or sponsored travel, among others. These different types of financial interests vary in the frequency with which they occur and in the risk they might pose to the integrity of the NIH-funded research. Officials we interviewed from the six universities in our review and stakeholder organizations generally agreed that the additional financial interests that must be disclosed and reviewed under the revised requirements—particularly reimbursed or sponsored travel costs, which officials said are common among academic researchers—rarely result in the identification of actual conflicts that could bias their research.

OMB, in developing the Uniform Guidance, and HHS, in developing the financial conflict of interest regulations that apply to NIH awards, each went through multiyear public rule-making processes and incorporated input from a range of stakeholders concerned about administrative workload and costs as well as accountability and research integrity.
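As a rough illustration of the threshold logic discussed above, the sketch below routes a purchase by dollar amount. The function name and the default dollar values are placeholders for illustration, not figures drawn from the Uniform Guidance or any university's policy.

```python
def procurement_method(amount: float,
                       micro_purchase_threshold: float = 3_000,
                       competitive_threshold: float = 150_000) -> str:
    """Route a purchase by dollar amount; thresholds are illustrative placeholders.

    Below the micro-purchase threshold, no quotations are required; between
    the thresholds, price or rate quotations are obtained; above the higher
    threshold, competitive bids or proposals are used.
    """
    if amount < micro_purchase_threshold:
        return "micro-purchase: no quotations required"
    if amount < competitive_threshold:
        return "small purchase: obtain price or rate quotations"
    return "competitive: sealed bids or competitive proposals"

print(procurement_method(2_500))   # micro-purchase: no quotations required
print(procurement_method(40_000))  # small purchase: obtain price or rate quotations
```

Raising the micro-purchase threshold, as several universities had done before the Uniform Guidance, shifts more small transactions into the first branch and out of the quotation workload.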
OMB plans to evaluate the guidance's overall impact on burden and waste, fraud, and abuse by January 2017 to identify opportunities to enhance its effectiveness. Similarly, as stated in the final rule for its conflict of interest regulation, HHS plans to evaluate the effects of certain provisions of the regulation. Since issuing these rules, OMB and HHS, as well as stakeholder organizations, have begun collecting information on the effects of the rules that the agencies can use in their evaluations. OMB directed agencies to report, beginning in January 2015, information on their implementation of the Uniform Guidance, including metrics on the overall impact on burden and waste, fraud, and abuse. In addition, the Federal Demonstration Partnership has gathered information from member universities to report to OMB on how the Uniform Guidance purchasing requirements will affect universities' administrative workload and costs. Similarly, the Association of American Medical Colleges has gathered information from its member institutions on how HHS's new regulation has affected their administrative workload and costs for disclosing and reviewing financial interests, and how it has affected the number of actual conflicts of interest institutions have identified. The additional information agencies and stakeholder organizations are gathering could allow OMB and HHS to more fully consider the requirements' effects on universities' administrative workload and costs and balance such considerations against the requirements' added protections for accountability and research integrity.

Federal standards for internal control call for agencies to identify risks, analyze them to estimate their significance, and respond to them based on their significance and the agency's risk tolerance. The standards also state that management may need to conduct periodic risk assessments to evaluate the effectiveness of risk response actions. Neither OMB nor HHS has specified whether its evaluation of the Uniform Guidance or the financial conflict of interest regulations, respectively, will include evaluating options for targeting requirements on areas of greatest risk, particularly in the areas of competing and documenting purchases, monitoring subrecipients, and identifying and managing researcher financial conflicts of interest. Evaluating such options could help universities focus administrative resources on areas of highest risk and allow researchers to maximize the time spent on conducting research rather than completing administrative tasks.

OMB and research funding agencies—in response to congressional or executive directives—have established administrative requirements on research grants. Such requirements help to protect against waste, fraud, and abuse of funds and to promote the quality and effectiveness of federally funded research, but they also create administrative workload and costs for universities. OMB and funding agencies have made a number of efforts to reduce workload and costs—such as by standardizing requirements across agencies, streamlining pre-award requirements, and allowing universities more flexibility to manage risks—and have had some success. However, opportunities remain for research funding agencies to achieve additional reductions in administrative workload and costs while still protecting against waste, fraud, and abuse.
RBM—whose charter calls for it to examine opportunities and develop and report on options to unify and streamline agency research grants administration practices—is well suited to pursue such efforts. First, agencies have opportunities to standardize requirements through RBM to a greater extent than they have already done, by addressing variations in budget forms, biographical sketches, and conflict of interest requirements, among others. Such standardization could reduce universities' administrative workload and costs associated with investing in systems and with researcher and administrative staff time spent learning and complying with agencies' varying requirements. Second, NSF senior leadership has called for an agency-wide review to identify options for expanding preliminary proposals or other pre-award streamlining efforts, but DOE, NASA, and NIH have not called for similar reviews. Agency-wide reviews to identify opportunities to use preliminary proposals or similar approaches where applicable could reduce administrative workload and costs associated with proposal preparation, particularly for the large majority of applicants that do not receive awards.

Opportunities also remain for OMB and HHS to reduce administrative workload and costs by allowing universities more flexibility to assess and manage risks related to certain administrative requirements, as they have already done with requirements for documenting personnel expenses and preparing and managing budgets, and as called for in federal streamlining directives. Specifically, (1) OMB's planned evaluation of the Uniform Guidance presents an opportunity for OMB to consider targeting requirements for purchasing and subrecipient monitoring on areas of greatest risk to proper use of research funds, and (2) HHS's planned evaluation of its revised conflict of interest requirements presents an opportunity for HHS to consider targeting conflict of interest requirements on areas of greatest risk to research integrity. By evaluating options for targeting these requirements, OMB and HHS may identify ways to reduce universities' administrative workload and costs while maintaining accountability over grant funds.

We are making four recommendations for identifying and pursuing opportunities to streamline administrative requirements on research grants to universities.

To further standardize administrative research requirements, the Secretary of Energy, the NASA Administrator, the Secretary of Health and Human Services, and the Director of NSF should coordinate through the Office of Science and Technology Policy's (OSTP) Research Business Models working group to identify additional areas where they can standardize requirements and report on these efforts.

To reduce pre-award administrative workload and costs, particularly for applications that do not result in awards, the Secretary of Energy, the NASA Administrator, and the Secretary of Health and Human Services should conduct agency-wide reviews of possible actions, such as further use of preliminary proposals, to postpone pre-award requirements until after a preliminary decision about an applicant's likelihood of funding and, through OSTP's Research Business Models working group, coordinate and report on these efforts.
To better target requirements on areas of greatest risk, while maintaining accountability over grant funds, the Secretary of Health and Human Services, as part of the planned evaluation of the HHS regulation governing financial conflicts of interest in NIH-funded research, should evaluate options for targeting requirements on areas of greatest risk for researcher conflicts, including adjusting the threshold and types of financial interests that need to be disclosed and the timing of disclosures, and the Director of OMB, as part of OMB's planned evaluation of the Uniform Guidance, should evaluate options for targeting requirements for research grants to universities, including requirements for purchases and subrecipient monitoring, on areas of greatest risk for improper use of research funds.

We provided a draft of this report to DOE, HHS, NASA, NSF, OMB, and OSTP. DOE, HHS—responding on behalf of NIH—and NASA provided written comments, which are reproduced in appendixes V, VI, and VII, respectively, and NSF and OMB provided oral comments. DOE, HHS, and NASA generally concurred with our findings and recommendations and provided specific comments, which we discuss in more detail below. NSF and OMB did not comment on our recommendations. DOE, HHS, NSF, and OMB also provided technical comments, which we incorporated as appropriate.

DOE, HHS, and NASA concurred with our first recommendation to coordinate through RBM to identify additional areas where they can standardize requirements. In their comments, the agencies said they would continue to build on RBM's previous efforts to standardize requirements and report on their efforts according to RBM's charter. NSF did not formally state whether it concurred with the recommendation, but NSF officials told us that research funding agencies already coordinate effectively through RBM and other groups on such efforts as the standard research terms and conditions and the Research Performance Progress Report. However, these current efforts are expected to be complete in late 2016 or early 2017, and we continue to believe that agencies have opportunities to standardize requirements in areas that have not yet been addressed by current efforts and to achieve additional reductions in administrative workload and costs while still protecting against waste, fraud, and abuse.

DOE and HHS concurred, and NASA partially concurred, with our second recommendation to conduct agency-wide reviews of possible actions to postpone pre-award requirements until after a preliminary decision about an applicant's likelihood of funding. DOE stated that it would review pre-award requirements and coordinate through RBM to define actions to be taken to reduce the burdens of these requirements, and HHS stated that NIH will review which components of grant applications are strictly needed to provide information for balanced and fair review and funding considerations, and which components can be added to the information requested during the just-in-time stage. In its technical comments, HHS stated that in 2014, NIH charged its Scientific Management Review Board to conduct an evaluation to recommend ways to further optimize the process of reviewing, awarding, and managing grants and maximize the time researchers can devote to research. In line with our second recommendation, the Board's report also found that the use of preliminary proposals could be expanded and included a recommendation that NIH pilot test preliminary proposals.
In its comments, NASA agreed to review existing documents and reports to identify best practices that postpone pre-award requirements, but stated that program offices should determine whether or not these practices are in the best interest of the program mission. We acknowledge in our report that preliminary proposals may not be effective in reducing administrative workload and costs for certain research grant programs or solicitations, and our recommendation allows program offices to use discretion in determining what actions to take, if any, to postpone pre-award requirements until after a preliminary decision about an applicant's likelihood of funding.

HHS concurred with our third recommendation to evaluate options for targeting its financial conflict of interest requirements on areas of greatest risk for researcher conflicts. HHS stated in its comments that it has partnered with the Association of American Medical Colleges to measure the effectiveness of the financial conflict of interest requirements and identify areas that may create administrative burden.

OMB did not formally state whether it concurred with our fourth recommendation to evaluate options for targeting requirements for purchases and subrecipient monitoring on areas of greatest risk for improper use of research funds. However, OMB staff told us that they agree that opportunities remain for streamlining administrative requirements. In addition, in technical comments on our draft, OMB staff stated that its grants policy applies to all types of grants and recipients—not just research grants to universities. We have revised our report to clarify that OMB's requirements apply to all types of grants and recipients. With regard to our recommendation, it is important to note that the Uniform Guidance states that OMB may allow exceptions to requirements for classes of federal awards or recipients—for example, when doing so would expand or improve the use of effective practices in delivering federal financial assistance. We believe that our recommendation that OMB evaluate options for targeting requirements for research grants to universities could lead to such improvements for universities and potentially for other types of recipients. In particular, if implemented by OMB, our recommendation could help universities focus administrative resources on areas of highest risk and allow researchers to maximize the time spent on conducting research rather than completing administrative tasks.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the National Science Foundation, the Director of the Office of Management and Budget, the Director of the Office of Science and Technology Policy, the Secretary of Energy, the Secretary of Health and Human Services, the Administrator of the National Aeronautics and Space Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.
This report examines (1) the sources and goals of selected research grant requirements, (2) the factors that contribute to universities' administrative workload and costs for complying with these requirements, and (3) efforts the Office of Management and Budget (OMB) and research funding agencies have made to reduce the administrative workload and costs for complying with these requirements, and the results of these efforts. To address these objectives, we selected four agencies that fund research grants to universities and focused on nine categories of requirements associated with these agencies' research grants.

The four funding agencies were the Department of Energy (DOE), National Aeronautics and Space Administration (NASA), National Institutes of Health (NIH) within the Department of Health and Human Services, and National Science Foundation (NSF). We selected NIH and NSF because they are the two largest funders of research at universities and colleges, according to NSF data. We selected DOE and NASA as two agencies providing smaller amounts of research funding, and funding for different types of research, to universities and colleges. According to NSF data, these four agencies provided about 83 percent of federal funding for research at universities and colleges in fiscal year 2015. Our findings from our reviews of these four agencies cannot be generalized to all agencies that fund research.

The nine categories of administrative requirements on research grants were (1) competition and documentation of purchases, (2) documenting personnel expenses, (3) preparing and managing project budgets, (4) subaward reporting, (5) subrecipient monitoring, (6) biographical sketches, (7) financial conflicts of interest, (8) managing and sharing research data and results, and (9) researcher mentoring and development. We chose these requirements based on several factors. In particular, we chose requirements that multiple universities and university stakeholder organizations had cited as contributing to universities' administrative workload or costs. In addition, we chose requirements that had been the subject of recent streamlining efforts or of recent changes in OMB or funding agency guidance, or that had been part of the findings of recent reports by agency inspectors general on research grants to universities. Our findings from our reviews of these requirements cannot be generalized to all administrative requirements. See appendix II for more information on these requirements, including their definitions, sources, and goals.

To examine the sources and goals of these nine categories of requirements, we reviewed documents related to establishing the requirements and any changes that had been made. These documents included public laws; Federal Register notices and other documentation related to OMB's development of the Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance); and other documentation of government-wide requirements, such as the February 2013 Office of Science and Technology Policy (OSTP) memorandum on increasing access to the results of federally funded scientific research. We also examined DOE, NASA, NIH, and NSF documents related to their implementation of the nine categories of requirements, including agency-specific guidance on grant proposal and award policies and procedures and agency regulations implementing the Uniform Guidance.
To ensure the accuracy and completeness of the information we collected, we asked the four agencies in our scope to provide edits and additions to a matrix we prepared summarizing the sources and goals of the nine requirements. For further information, we interviewed OMB staff about the development of the Uniform Guidance, including its provisions specific to university research grants, and we interviewed DOE, NASA, NIH, and NSF officials responsible for developing research grant requirements at their agencies. We also reviewed audit reports issued by the DOE, NASA, NIH, and NSF offices of inspector general related to research grants and the nine categories of requirements included in our scope to determine how the inspectors general apply the requirements, and we interviewed office of inspector general officials from each of the four agencies.

To examine factors that contribute to universities' administrative workload and costs for complying with selected requirements, we selected a nongeneralizable sample of six universities to conduct in-depth interviews of officials regarding each of the nine categories of requirements in our scope and to collect qualitative information on the types of administrative workload and costs resulting from the requirements—such as administrative staff costs, researcher time, and investments in systems and processes. The six universities were George Mason University; Johns Hopkins University; Massachusetts Institute of Technology; University of California, Riverside; University of Massachusetts, Amherst; and University of Southern California. We selected these universities because they ranged in the amount of federal research funding they received in fiscal year 2014, as reported by NSF, and because they provided a diverse sample that included both public and private institutions and both member and nonmember institutions in the Federal Demonstration Partnership—a cooperative initiative of 10 federal agencies and 155 university recipients of federal funds that works to reduce the administrative burdens associated with research grants and contracts. We also considered whether these universities had participated in pilot streamlining efforts related to one or more of the nine categories of requirements included in our scope.

At each of the six universities, we reviewed university policies for implementing federal requirements and other relevant documentation, and we interviewed officials from the central offices for administration of grants, principal investigators who led research projects funded by grants, and administrators within the academic departments where principal investigators hold positions. In particular, we discussed the officials' views on the effects of prior, current, and proposed changes to requirements and their suggestions for streamlining requirements. For further context on universities' administrative workload and costs, including suggestions for streamlining and views on changes to requirements, we interviewed officials from and reviewed studies conducted by the following stakeholder organizations: the Association of American Medical Colleges, Council on Governmental Relations, Federal Demonstration Partnership, Federation of American Societies for Experimental Biology, National Academy of Sciences, and National Science Board.
We identified these organizations based on discussions with agency and university officials and reviews of published reports, and selected those that had studied administrative workload and costs related to our selected categories of requirements.

To examine OMB and agency efforts to reduce the administrative workload and costs for complying with the requirements included in our scope and the results of these efforts, we focused on government-wide efforts led by OMB and OSTP as well as on agency-specific efforts at DOE, NASA, NIH, and NSF. We identified current and past streamlining efforts by reviewing agency documents, attending presentations by agency officials at Federal Demonstration Partnership and other public meetings, and interviewing OMB and OSTP staff as well as officials from the four research funding agencies in our scope. To determine the results of these streamlining efforts, we reviewed agency documents, including assessments of the results of their efforts, and interviewed agency and university officials. We also interviewed agency officials regarding government-wide efforts to coordinate development and implementation of requirements among agencies and the feasibility of suggestions for streamlining requirements. We interviewed OMB staff regarding their plans to review the effects of the Uniform Guidance, including the effects on universities' administrative workload and costs, and we interviewed OSTP and agency officials on streamlining and coordination efforts by the Research Business Models working group within the National Science and Technology Council's Committee on Science. Finally, we interviewed officials from offices of inspectors general at the four funding agencies in our scope about the potential effects of changes to requirements on the ability of grant-making agencies to ensure transparency and accountability, and about the NIH and NSF inspector general audits of a pilot program at four universities to streamline requirements for documenting personnel expenses.

We conducted this performance audit from April 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Table 3 lists the sources and goals of selected administrative grant requirements. Table 4 shows examples of differences in selected administrative requirements across agencies in our review. Table 5 provides information on Office of Management and Budget (OMB) and selected funding agency efforts to standardize forms, systems, processes, and provisions related to our selected administrative requirements on research grants. The efforts listed in table 5 all share the goal of reducing universities' and other grantees' administrative workload and costs, according to agency officials and documents. Table 6 provides information on agency efforts to streamline selected pre-award administrative requirements, in particular by postponing certain requirements until a preliminary decision has been made about the likelihood of a proposal being funded. The efforts listed in table 6 all share the goal of reducing applicants' administrative workload and costs for developing proposals—particularly in cases where the chance of the proposal being funded is small.
Table 7 provides information on OMB and agency efforts to reduce grantees' administrative workload and costs related to selected requirements, by allowing them more flexibility in their grant management approaches.

John Neumann, (202) 512-3841 or neumannj@gao.gov.

In addition to the contact named above, Joseph Cook (Assistant Director), Ellen Fried, Cindy Gilbert, Elizabeth Hartjes, Terrance Horner, Miles Ingram, Richard Johnson, Sarah Martin, Dan Royer, and Monica Savoy made key contributions to this report.
The federal government obligated over $27 billion for university research in fiscal year 2015, according to NSF. To allow for oversight of these funds, Congress and research funding agencies established administrative requirements that universities must comply with as part of grants they apply for and receive. University stakeholders have studied and raised concerns about the workload and costs of complying with the requirements.

GAO was asked to review research grant requirements and their administrative workload and costs. This report examines (1) the sources and goals of selected requirements, (2) factors affecting universities' administrative workload and costs for complying with the requirements, and (3) efforts by OMB and research funding agencies to reduce the requirements' administrative workload and costs, and the results of these efforts. GAO selected and examined in detail nine areas of administrative requirements at DOE, NASA, NIH, and NSF, and interviewed administrative staff and researchers from six universities. GAO selected agencies and universities that ranged in the amount and type of research funding provided or received.

Administrative requirements for federal research grants include (1) Office of Management and Budget (OMB) government-wide grant requirements for protecting against waste, fraud, and abuse of funds and (2) agency-specific requirements generally for promoting the quality and effectiveness of federally funded research. For example, OMB requires grantees to maintain records sufficient to detail the history of procurement for all purchases made with grant funds, and the Department of Energy (DOE), National Aeronautics and Space Administration (NASA), National Institutes of Health (NIH), and National Science Foundation (NSF) require applicants to develop and submit biographical sketches describing their professional accomplishments so agencies can consider researchers' qualifications when deciding which proposals to fund.

Officials from universities and stakeholder organizations GAO interviewed identified common factors that add to their administrative workload and costs for complying with selected requirements: (1) variation in agencies' implementation of requirements, (2) pre-award requirements for applicants to develop and submit detailed documentation for grant proposals, and (3) increased prescriptiveness of certain requirements. They said that these factors add to universities' workload and costs in various ways, such as by causing universities to invest in new electronic systems or in the hiring or training of staff. For example, university officials told GAO that new OMB requirements for purchases made with grant funds will result in added costs for hiring administrative staff to handle an increased volume of purchases that are subject to some form of competition.

OMB and research funding agencies have made continuing efforts to reduce universities' administrative workload and costs for complying with selected requirements, with limited results. These included efforts in three areas: (1) standardizing requirements across agencies; (2) postponing certain pre-award requirements until after making a preliminary decision about an applicant's likelihood of funding; and (3) in some cases, allowing universities more flexibility to assess and manage risks for some requirements. For example, funding agencies have developed a standard set of administrative terms and conditions for research grants and a standard form for research progress reports.
Such efforts are in accordance with federal goals, such as those in a 2011 executive order that calls for agencies to harmonize regulations and consider regulatory approaches that reduce burdens and maintain flexibility. However, opportunities exist in each of the three areas to further reduce universities' administrative workload and costs. First, efforts to standardize requirements have not fully addressed variations in agency implementation of requirements, such as agencies' forms and systems for collecting project budgets and biographical sketches. Second, funding agencies have not fully examined pre-award requirements to identify those—such as requirements for detailed budgets—that can be postponed. Third, some requirements—such as those for obtaining multiple quotations for small purchases—limit universities' flexibility to allocate administrative resources toward oversight of areas at greatest risk of improper use of research funds. Further efforts to standardize requirements, postpone pre-award requirements, and allow more flexibility for universities could help ensure agencies do not miss opportunities to reduce administrative workload and costs.

GAO recommends that OMB, DOE, NASA, NIH, and NSF identify additional areas where requirements, such as those for budgets or purchases, can be standardized, postponed, or made more flexible, while maintaining oversight of federal funds. DOE, NASA, and NIH generally concurred, and OMB and NSF did not comment on the recommendations.
Because of emergencies such as natural disasters, hazardous material spills, and riots, all levels of government have had some experience in preparing for different types of disasters and emergencies. Preparing for all potential hazards is commonly referred to as the "all-hazards" approach. While terrorism is a component within an all-hazards approach, terrorist attacks potentially impose a new level of fiscal, economic, and social dislocation within this nation's boundaries. Given the specialized resources that are necessary to address a chemical or biological attack, the range of governmental services that could be affected, and the vital role played by private entities in preparing for and mitigating risks, state and local resources alone will likely be insufficient to meet the terrorist threat.

Some of these specific challenges can be seen in the area of bioterrorism. For example, a biological agent released covertly might not be recognized for a week or more because symptoms may only appear several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics or vaccines to large segments of the population to prevent the spread of an infectious disease. The resources necessary to undertake these responses are generally beyond state and local capabilities and would require assistance from and close coordination with the federal government.

National preparedness is a complex mission that involves a broad range of functions performed throughout government, including national defense, law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. While only the federal government is empowered to wage war and regulate interstate commerce, state and local governments have historically assumed primary responsibility for managing emergencies through police, fire-fighting, and emergency medical personnel. The federal government's role in responding to major disasters is generally defined in the Stafford Act, which requires a finding that a disaster is so severe as to be beyond the capacity of state and local governments to respond effectively before major disaster or emergency assistance from the federal government is warranted. Once a disaster is declared, the federal government—through the Federal Emergency Management Agency (FEMA)—may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities.

There has been an increasing emphasis over the past decade on preparedness for terrorist events. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism.
Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including those in the Department of Justice, the Department of Energy, FEMA, and the Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events.

The attacks of September 11, 2001, as well as the subsequent attempts to contaminate Americans with anthrax, dramatically exposed the nation's vulnerabilities to domestic terrorism and prompted numerous legislative proposals to further strengthen our preparedness and response. During the first session of the 107th Congress, several bills were introduced with provisions relating to state and local preparedness. For instance, the Preparedness Against Domestic Terrorism Act of 2001, which you co-sponsored, Mr. Chairman, proposes the establishment of a Council on Domestic Preparedness to enhance the capabilities of state and local emergency preparedness and response.

The funding for homeland security increased substantially after the attacks. According to documents supporting the president's fiscal year 2003 budget request, about $19.5 billion in federal funding for homeland security was enacted in fiscal year 2002. The Congress added to this amount by passing an emergency supplemental appropriation of $40 billion. According to the budget request documents, about one-quarter of that amount, nearly $9.8 billion, was dedicated to strengthening our defenses at home, resulting in an increase in total federal funding on homeland security of about 50 percent, to $29.3 billion. Table 1 compares fiscal year 2002 funding for homeland security by major categories with the president's proposal for fiscal year 2003.

We have tracked and analyzed federal programs to combat terrorism for many years and have repeatedly called for the development of a national strategy for preparedness. We have not been alone in this message; for instance, national commissions, such as the Gilmore Commission, and other national associations, such as the National Emergency Management Association and the National Governors Association, have advocated the establishment of a national preparedness strategy. The attorney general's Five-Year Interagency Counterterrorism and Technology Crime Plan, issued in December 1998, represents one attempt to develop a national strategy on combating terrorism. This plan entailed a substantial interagency effort and could potentially serve as a basis for a national preparedness strategy. However, we found it lacking in two critical elements necessary for an effective strategy: (1) measurable outcomes and (2) identification of state and local government roles in responding to a terrorist attack.

In October 2001, the president established the Office of Homeland Security as a focal point with a mission to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks.
While this action represents a potentially significant step, the role and effectiveness of the Office of Homeland Security in setting priorities, interacting with agencies on program development and implementation, and developing and enforcing overall federal policy in terrorism-related activities are still being established. The emphasis needs to be on a national rather than a purely federal strategy. We have long advocated the involvement of state, local, and private-sector stakeholders in a collaborative effort to arrive at national goals. The success of a national preparedness strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. To develop this essential national strategy, the federal role needs to be considered in relation to other levels of government, the goals and objectives for preparedness, and the most appropriate tools to assist and enable other levels of government and the private sector to achieve these goals.

Although the federal government appears monolithic to many, in the area of terrorism prevention and response, it has been anything but. More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 federal entities have a role in bioterrorism alone. One of the areas that the Office of Homeland Security will be reviewing is the coordination among federal agencies and programs.

Concerns about coordination and fragmentation in federal preparedness efforts are well founded. Our past work, conducted prior to the creation of the Office of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. There had been no single leader in charge of the many terrorism-related functions conducted by different federal departments and agencies. In fact, several agencies had been assigned leadership and coordination functions, including the Department of Justice, the Federal Bureau of Investigation, FEMA, and the Office of Management and Budget. We previously reported that officials from a number of agencies that combat terrorism believe that the coordination roles of these various agencies are not always clear. The recent Gilmore Commission report expressed similar concerns, concluding that the current coordination structure does not provide the discipline necessary among the federal agencies involved.

In the past, the absence of a central focal point resulted in two major problems. The first is a lack of cohesive effort within the federal government. For example, the Department of Agriculture, the Food and Drug Administration, and the Department of Transportation have been overlooked in bioterrorism-related policy and planning, even though these organizations would play key roles in responding to terrorist acts. In this regard, the Department of Agriculture has been given key responsibilities to carry out in the event that terrorists were to target the nation's food supply, but the agency was not consulted in the development of the federal policy assigning it that role. Similarly, the Food and Drug Administration was involved with issues associated with the National Pharmaceutical Stockpile, but it was not involved in the selection of all items procured for the stockpile.
Further, the Department of Transportation has responsibility for delivering supplies under the Federal Response Plan, but it was not brought into the planning process and consequently did not learn the extent of its responsibilities until its involvement in subsequent exercises.

Second, the lack of leadership has resulted in the federal government's development of programs to assist state and local governments that were similar and potentially duplicative. After the terrorist attack on the federal building in Oklahoma City, the federal government created additional programs that were not well coordinated. For example, FEMA, the Department of Justice, the Centers for Disease Control and Prevention, and the Department of Health and Human Services all offer separate assistance to state and local governments in planning for emergencies. Additionally, a number of these agencies also condition receipt of funds on completion of distinct but overlapping plans. Although the many federal assistance programs vary somewhat in their target audiences, the potential redundancy of these federal efforts warrants scrutiny. In this regard, we recommended in September 2001 that the president work with the Congress to consolidate some of the activities of the Department of Justice's Office for State and Local Domestic Preparedness Support under FEMA.

State and local response organizations believe that federal programs designed to improve preparedness are not well synchronized or organized. They have repeatedly asked for a one-stop "clearinghouse" for federal assistance. As state and local officials have noted, the multiplicity of programs can lead to confusion at the state and local levels and can expend precious federal resources unnecessarily or make it difficult for them to identify available federal preparedness resources. As the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds and have argued that the application process is burdensome and inconsistent among federal agencies.

Although the federal government can assign roles to federal agencies under a national preparedness strategy, it will also need to reach consensus with other levels of government and with the private sector about their respective roles. Clearly defining the appropriate roles of government may be difficult because, depending upon the type of incident and the phase of a given event, the specific roles of local, state, and federal governments and of the private sector may not be separate and distinct.

A new warning system, the Homeland Security Advisory System, is intended to tailor notification of the appropriate level of vigilance, preparedness, and readiness in a series of graduated threat conditions. The Office of Homeland Security announced the new warning system on March 12, 2002. The new warning system includes five levels of alert for assessing the threat of possible terrorist attacks: low, guarded, elevated, high, and severe. These levels are also represented by five corresponding colors: green, blue, yellow, orange, and red. When the announcement was made, the nation stood in the yellow condition, signifying an elevated risk. The warning can be upgraded for the entire country or for specific regions and economic sectors, such as the nuclear industry. The system is intended to address a problem with the previous blanket warning system.
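Because the five threat conditions are ordered, they map naturally onto a simple ordered data structure. The sketch below is illustrative only; the comments paraphrase the levels and are not the system's official definitions or protective measures.

```python
from enum import IntEnum

class ThreatCondition(IntEnum):
    """Homeland Security Advisory System levels, lowest to highest."""
    GREEN = 1   # low
    BLUE = 2    # guarded
    YELLOW = 3  # elevated
    ORANGE = 4  # high
    RED = 5     # severe

# Ordering supports simple comparisons, e.g., whether a specific sector's
# condition has been raised above the national baseline.
national = ThreatCondition.YELLOW        # the nation stood at elevated risk
nuclear_sector = ThreatCondition.ORANGE  # warnings can target specific sectors
print(nuclear_sector > national)         # True
```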
After September 11th, the federal government issued four general warnings about possible terrorist attacks, directing federal and local law enforcement agencies to place themselves on the "highest alert." However, government and law enforcement officials, particularly at the state and local levels, complained that the general warnings were too vague and a drain on resources. To obtain views on the new warning system from all levels of government, law enforcement, and the public, the Attorney General, who will be responsible for the system, provided a 45-day comment period beginning with the announcement of the new system on March 12th. This provides an opportunity for state and local governments as well as the private sector to comment on the usefulness of the new warning system and the appropriateness of the five threat conditions with associated suggested protective measures.

Numerous discussions have been held about the need to enhance the nation's preparedness, but national preparedness goals and measurable performance indicators have not yet been developed. These are critical components for assessing program results. In addition, the capability of state and local governments to respond to catastrophic terrorist attacks is uncertain.

At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures; (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments; (3) ensuring that goals at lower organizational levels align with and support general goals; and (4) identifying the resources that will be required to achieve the goals.

A former assistant professor of public policy at the Kennedy School of Government, now the senior director for policy and plans with the Office of Homeland Security, noted in a December 2000 paper that a preparedness program lacking broad but measurable objectives is unsustainable, because it deprives policymakers of the information they need to make rational resource allocations and prevents program managers from measuring progress. He recommended that the government develop a new statistical index of preparedness, incorporating a range of different variables, such as quantitative measures for special equipment, training programs, and medicines, as well as professional subjective assessments of the quality of local response capabilities, infrastructure, plans, readiness, and performance in exercises. He advocated that the index go well beyond the current rudimentary milestones of program implementation, such as the amount of training and equipment provided to individual cities, and strive to capture indicators of how well a particular city or region could actually respond to a serious terrorist event.
This type of index, according to this expert, would then allow the government to measure the preparedness of different parts of the country in a consistent and comparable way, providing a reasonable baseline against which to measure progress. (A crude illustration of such a composite index appears at the end of this discussion.)

In October 2001, FEMA's director recognized that assessments of state and local capabilities have to be viewed in terms of the level of preparedness being sought and the measures used to gauge it. The director noted that the federal government should not provide funding without assessing what the funds will accomplish. Moreover, the president's fiscal year 2003 budget request for $3.5 billion through FEMA for first responders—local police, firefighters, and emergency medical professionals—provides that these funds be accompanied by a process for evaluating the effort to build response capabilities, in order to validate that effort and direct future resources.

FEMA has developed an assessment tool that could be used in developing performance and accountability measures for a national strategy. To ensure that states are adequately prepared for a terrorist attack, FEMA was directed by the Senate Committee on Appropriations to assess states' response capabilities. In response, FEMA developed a self-assessment tool—the Capability Assessment for Readiness (CAR)—that focuses on 13 key emergency management functions, including hazard identification and risk assessment, hazard mitigation, and resource management. However, these key emergency management functions do not specifically address public health issues. In its fiscal year 2001 CAR report, FEMA concluded that states were only marginally capable of responding to a terrorist event involving a weapon of mass destruction. Moreover, the president's fiscal year 2003 budget proposal acknowledges that our capabilities for responding to a terrorist attack vary widely across the country. Many areas have little or no capability to respond to a terrorist attack that uses weapons of mass destruction. The budget proposal adds that even the best-prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face.

Proposed standards for state and local emergency management programs have been developed by a consortium of emergency managers from all levels of government and are currently being pilot tested through the Emergency Management Accreditation Program at the state and local levels. The program's purpose is to establish minimum acceptable performance criteria by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. For example, one such standard requires that the program (1) develop the capability to direct, control, and coordinate response and recovery operations; (2) use an incident management system; and (3) identify organizational roles and responsibilities in its emergency operational plans.

Although FEMA has experience in working with others in the development of assessment tools, it has had difficulty in measuring program performance. As the president's fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. The agency performs less well in its oversight role of ensuring the effective use of such assistance. Further, the agency has not been effective in linking resources to performance information.
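A crude sketch of the kind of composite index the expert described might look like the following; the indicators, weights, and scores are all invented for illustration and do not reflect any actual methodology.

```python
# Hypothetical composite preparedness index: a weighted average of
# normalized indicators, each scored 0-100.
WEIGHTS = {
    "equipment": 0.25,          # quantitative: special equipment on hand
    "training": 0.25,           # quantitative: personnel trained
    "medicines": 0.20,          # quantitative: stockpiled medicines
    "expert_assessment": 0.30,  # subjective: quality of plans and exercises
}

def preparedness_index(scores: dict[str, float]) -> float:
    """Weighted average of indicator scores, comparable across cities."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

city_a = {"equipment": 70, "training": 55, "medicines": 40, "expert_assessment": 60}
print(preparedness_index(city_a))  # 57.25
```

Because every jurisdiction is scored on the same indicators and weights, the resulting numbers can be compared across cities and over time, which is the baseline-setting property the expert emphasized.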
FEMA's Office of Inspector General has found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs.

Since September 11th, many state and local governments have faced declining revenues and increased security costs. A survey of about 400 cities conducted by the National League of Cities reported that since September 11th, one in three American cities has seen its local economy, municipal revenues, and public confidence decline while public-safety spending has risen. Further, the National Governors Association estimates fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it increasingly difficult for the states to take on expensive, new homeland security initiatives without federal assistance. State and local revenue shortfalls coupled with increasing demands on resources make it more critical that federal programs be designed carefully to match the priorities and needs of all partners—federal, state, local, and private.

Our previous work on federal programs suggests that the choice and design of policy tools have important consequences for performance and accountability. Governments have at their disposal a variety of policy instruments, such as grants, regulations, tax incentives, and regional coordination and partnerships, that they can use to motivate or mandate other levels of government and private-sector entities to take actions to address security concerns. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. Key to the national effort will be determining the appropriate level of funding so that policies and tools can be designed and targeted to elicit a prompt, adequate, and sustainable response while also protecting against federal funds being used to substitute for spending that would have occurred anyway.

The federal government often uses grants to state and local governments as a means of delivering federal programs. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad national purpose and to provide a great deal of discretion to state and local officials. Either type of grant can be designed to (1) target the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as "supplantation," with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. More specifically:

Targeting: The formula for the distribution of any new grant could be based on several considerations, including the state or local government's capacity to respond to a disaster. This capacity depends on several factors, the most important of which perhaps is the underlying strength of the state's tax base and whether that base is expanding or in decline. In an August 2001 report on disaster assistance, we recommended that the director of FEMA consider replacing the per-capita measure of state capability with a more sensitive measure, such as the amount of a state's total taxable resources, to assess the capabilities of state and local governments to respond to a disaster.
Other key considerations include the level of need and the costs of preparedness.

Maintenance of effort: In our earlier work, we found that substitution is to be expected in any grant and that, on average, every additional federal grant dollar results in about 60 cents of supplantation. We found that supplantation is particularly likely for block grants supporting areas with prior state and local involvement. Our recent work on the Temporary Assistance for Needy Families block grant found that a strong maintenance-of-effort provision limits states’ ability to supplant. Recipients can be penalized for not meeting a maintenance-of-effort requirement.

Balance accountability and flexibility: Experience with block grants shows that such programs are sustainable if they are accompanied by sufficient information and accountability for national outcomes to enable them to compete for funding in the congressional appropriations process. Accountability can be established for measured results and outcomes, permitting greater flexibility in how funds are used while at the same time ensuring some national oversight.

Grants previously have been used to enhance preparedness, and recent proposals direct new funding to local governments. In recent discussions, local officials expressed their view that federal grants would be more effective if local officials were allowed more flexibility in the use of funds. They have suggested that some funding should be allocated directly to local governments. They have expressed a preference for block grants, which would distribute funds directly to local governments for a variety of security-related expenses. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president’s fiscal year 2003 budget, have included some of these provisions. This matching grant would be administered by FEMA, with 25 percent being distributed to the states based on population. The remainder would go to states for pass-through to local jurisdictions, also on a population basis, but states would be given the discretion to determine the boundaries of sub-state areas for such a pass-through—that is, a state could pass through the funds to a metropolitan area or to individual local governments within such an area. Although the state and local jurisdictions would have discretion to tailor the assistance to meet local needs, it is anticipated that more than one-third of the funds would be used to improve communications; an additional one-third would be used to equip state and local first responders; and the remainder would be used for training, planning, technical assistance, and administration.
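To make the proposed distribution mechanics concrete, the short Python sketch below illustrates the 25/75 split just described; the state populations are hypothetical, since the proposal specifies only the percentages and the population basis.

# Illustrative sketch of the proposed first-responder block grant split:
# 25 percent distributed to states by population, with the remaining
# 75 percent passed through to local jurisdictions, also by population.
# The state populations below are hypothetical.

TOTAL_GRANT = 3_500_000_000  # proposed fiscal year 2003 block grant, in dollars

state_populations = {
    "State A": 20_000_000,
    "State B": 10_000_000,
    "State C": 5_000_000,
}
national_population = sum(state_populations.values())

for state, population in state_populations.items():
    share = population / national_population
    direct_to_state = 0.25 * TOTAL_GRANT * share
    local_pass_through = 0.75 * TOTAL_GRANT * share
    print(f"{state}: ${direct_to_state:,.0f} to the state, "
          f"${local_pass_through:,.0f} passed through to local jurisdictions")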
Federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, highways, water systems, and public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors (for example, for chemical and nuclear facilities). In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Five models of shared regulatory authority are:

Fixed federal standards that preempt all state regulatory action in the subject area covered;

Federal minimum standards that preempt less stringent state laws but permit states to establish standards that are more stringent than the federal standards;

Inclusion of federal regulatory provisions, not established through preemption, in grants or other forms of assistance that states may choose to accept;

Cooperative programs in which voluntary national standards are formulated by federal and state officials working together; and

Widespread state adoption of voluntary standards formulated by quasi-official entities.

Any one of these shared regulatory approaches could be used in designing standards for preparedness. The first two of these mechanisms involve federal preemption. The other three represent alternatives to preemption. Each mechanism offers different advantages and limitations that reflect some of the key considerations in the federal-state balance.

To the extent that private entities will be called upon to improve security over dangerous materials or to protect vital assets, the federal government can use tax incentives to encourage those activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or preferential tax rates in the federal tax laws. Unlike grants, tax incentives generally do not permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria.

Promoting partnerships between critical actors (including different levels of government and the private sector) helps maximize resources and also supports coordination on a regional level. Partnerships could encompass federal, state, and local governments working together to share information, develop communications technology, and provide mutual aid. The federal government may be able to offer state and local governments assistance in certain areas, such as risk management and intelligence sharing. In turn, state and local governments have much to offer in terms of knowledge of local vulnerabilities and resources, such as local law enforcement personnel, available to respond to threats in their communities. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, if given the information needed to do so. As the United States Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation’s overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S. 1615, H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information sharing needs. For instance, this proposed legislation provides that the Attorney General expeditiously grant security clearances to Governors who apply for them and to state and local officials who participate in federal counter-terrorism working groups or regional terrorism task forces.
Local officials have emphasized the importance of regional coordination. Regional resources, such as equipment and expertise, are essential because of their proximity, which allows for quick deployment, and their experience in working within the region. Large-scale or labor-intensive incidents quickly deplete a given locality’s supply of trained responders. Some cities have spread training and equipment to neighboring municipal areas so that their mutual aid partners can help. These partnerships afford economies of scale across a region. In events that require a quick response, such as a chemical attack, regional agreements take on greater importance because many local officials do not think that federal and state resources can arrive in sufficient time to help.

Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in response to an emergency. Because individual jurisdictions may not have all the resources they need to respond to all types of emergencies, these agreements allow for resources to be deployed quickly within a region. The terms of mutual aid agreements vary for different services and different localities. These agreements may provide for the state to share services, personnel, supplies, and equipment with counties, towns, and municipalities within the state, with neighboring states, or, in the case of states bordering Canada, with jurisdictions in another country. Some of the agreements also provide for cooperative planning, training, and exercises in preparation for emergencies. Some of these agreements involve private companies and local military bases, as well as local government entities. Such agreements were in place for the three sites that were involved on September 11th—New York City, the Pentagon, and a rural area of Pennsylvania—and provide examples of some of the benefits of mutual aid agreements and of coordination within a region.

With regard to regional planning and coordination, there may be federal programs that could provide models for funding proposals. In the 1962 Federal-Aid Highway Act, the federal government established a comprehensive cooperative process for transportation planning. This model of regional planning continues today under the Transportation Equity Act for the 21st Century (TEA-21) program, the successor to the Intermodal Surface Transportation Efficiency Act (ISTEA). This model emphasizes the role of state and local officials in developing a plan to meet regional transportation needs. Metropolitan Planning Organizations (MPOs) coordinate the regional planning process and adopt a plan, which is then approved by the state.

Mr. Chairman, in conclusion, as increasing demands are placed on budgets at all levels of government, it will be necessary to make sound choices to maintain fiscal stability. All levels of government and the private sector will have to communicate and cooperate effectively with each other across a broad range of issues to develop a national strategy to better target available resources to address urgent national preparedness needs. Involving all levels of government and the private sector in developing the key aspects of a national strategy that I have discussed today—a definition and clarification of the appropriate roles and responsibilities, an establishment of goals and performance measures, and a selection of appropriate tools—is essential to the successful formulation of the national preparedness strategy and ultimately to preparing and defending our nation from terrorist attacks. This completes my prepared statement.
I would be pleased to respond to any questions you or other members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-6787, Paul Posner at (202) 512-9573, or JayEtta Hecker at (202) 512-2834. Other key contributors to this testimony include Jack Burriesci, Matthew Ebert, Colin J. Fallon, Thomas James, Kristen Sullivan Massey, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy.

Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001.
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-02-162T. Washington, D.C.: October 17, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001.
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001.
Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000.
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999.
Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999.
Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998.
Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
Federal, state, and local governments share responsibility in preparing for catastrophic terrorist attacks. Because the national security threat is diffuse and the challenge is highly intergovernmental, national policymakers must formulate strategies with a firm understanding of the interests, capacities, and challenges involved in addressing these issues. Key aspects of this strategy should include a definition and clarification of the appropriate roles and responsibilities of federal, state, and local entities. GAO has found fragmentation and overlap among federal assistance programs. More than 40 federal entities have roles in combating terrorism, and past federal efforts have resulted in a lack of accountability, a lack of cohesive effort, and program duplication. As state and local officials have noted, this situation has led to confusion, making it difficult to identify available federal preparedness resources and effectively partner with the federal government. Goals and performance measures should be established to guide the nation's preparedness efforts; however, outcomes for the nation's domestic preparedness programs have yet to be defined. Given the recent and proposed increases in preparedness funding, establishing clear goals and performance measures is critical to ensuring real and meaningful improvements in preparedness and a fiscally responsible effort. The strategy should also include a careful choice of the most appropriate tools of government to best achieve national goals. The choice and design of policy tools, such as grants, regulations, and partnerships, can enhance the government's capacity to (1) target areas of highest risk to better ensure that scarce federal resources address the most pressing needs, (2) promote shared responsibility by all parties, and (3) track and assess progress toward achieving national goals.
The traditional Medicare program does not have a comprehensive outpatient prescription drug benefit, but under part B (which covers physician and other outpatient services), it covers roughly 450 pharmaceutical products and biologicals. In 1999, spending for Medicare part B-covered prescription drugs totaled almost $4 billion. A small number of products accounts for the majority of Medicare spending and billing volume for part B drugs. In 1999, 35 drugs accounted for 82 percent of Medicare spending and 95 percent of the claims volume for these products. The 35 products included, among others, injectable drugs to treat cancer, inhalation therapy drugs, and oral immunosuppressive drugs (such as those used to treat organ transplant patients). The physician-billed drugs accounted for the largest share of program spending, while pharmacy supplier-billed drugs constituted the largest share of the billing volume. Three specialties—hematology oncology, medical oncology, and urology—submitted claims for 80 percent of total physician billings for part B drugs. Two inhalation therapy drugs accounted for 88 percent of the Medicare billing volume for pharmacy-supplied drugs administered in a patient’s residence.

Medicare’s payment for part B-covered drugs is based on the product’s AWP, which is a price assigned by the product’s manufacturer and may be neither “average” nor “wholesale.” Instead, the AWP is often described as a “list price,” “sticker price,” or “suggested retail price.” The term AWP is not defined in law or regulation, so the manufacturer is free to set an AWP at any level, regardless of the actual price paid by purchasers. Manufacturers periodically report AWPs to publishers of drug pricing data, such as the Medical Economics Company, Inc., which publishes the Red Book, and First Data Bank, which compiles the National Drug Data File. In paying claims, Medicare carriers use published AWPs to determine Medicare’s payment amount, which is 95 percent of AWP. Thus, given the latitude manufacturers have in setting AWP, these payments may be unrelated to market prices that physicians and suppliers actually pay for the products.

The actual price that providers pay for Medicare part B drugs is often not transparent. Physicians and suppliers may belong to group purchasing organizations (GPO) that pool the purchasing of multiple entities to negotiate prices with wholesalers or manufacturers. GPOs may negotiate different prices for different purchasers, such as physicians, suppliers, or hospitals. In addition, providers can purchase part B-covered drugs from general or specialty pharmaceutical wholesalers or can have direct purchase agreements with manufacturers. Certain practices involving these various entities can result in prices paid at the time of sale that do not reflect the final net cost to the purchaser. Manufacturers or wholesalers may offer purchasers rebates based on the volume of products purchased not in a single sale but over a period of time. Manufacturers may also establish “chargeback” arrangements for end purchasers, which result in wholesalers’ prices overstating what those purchasers pay. Under these arrangements, the purchaser negotiates a price with the manufacturer that is lower than the price the wholesaler charges for the product. The wholesaler provides the product to the purchaser for the lower negotiated price, and the manufacturer then pays the wholesaler the difference between the wholesale price and the negotiated price.
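A minimal sketch, with hypothetical dollar figures, of the pricing mechanics just described: Medicare's payment is derived mechanically as 95 percent of the manufacturer-reported AWP, while a chargeback arrangement can leave a purchaser's net cost well below the wholesaler's listed price.

# Hypothetical numbers for illustration only.
awp = 100.00                   # manufacturer-reported average wholesale price
medicare_payment = 0.95 * awp  # carriers pay 95 percent of the published AWP

wholesale_price = 80.00        # price the wholesaler lists for the drug
negotiated_price = 65.00       # lower price the purchaser negotiated with the manufacturer

purchaser_net_cost = negotiated_price
chargeback_to_wholesaler = wholesale_price - negotiated_price  # paid by the manufacturer

print(f"Medicare payment:        ${medicare_payment:.2f}")
print(f"Purchaser's net cost:    ${purchaser_net_cost:.2f}")
print(f"Payment above net cost:  ${medicare_payment - purchaser_net_cost:.2f}")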
For the part B-covered drugs accounting for the bulk of Medicare spending and claims, Medicare payments in 2001 were almost always considerably higher than wholesalers’ prices that were widely available to physicians and suppliers. This was true regardless of whether the drugs had competing products or were available from a single manufacturer. Physicians who billed Medicare for relatively small quantities of these drugs also obtained similar prices. Our study shows that there can be wide disparities between a drug’s estimated acquisition cost and Medicare’s payment for that drug.

Physician-billed drugs account for the bulk of Medicare spending on part B drugs. Of those billed by physicians, drugs used to treat cancer accounted for most of Medicare’s expenditures. Specifically:

Widely available discounts for 17 of the physician-billed drugs we examined averaged between 13 percent and 34 percent less than AWP. For two other physician-billed drugs, dolasetron mesylate and leucovorin calcium, average discounts were considerably larger—65 percent and 86 percent less than AWP.

These discounts, based on wholesaler and GPO catalogue prices, are notably deeper than the discount reflected in Medicare’s payment, which is only 5 percent below AWP. They indicate that Medicare’s payments for these drugs were at least $532 million higher than providers’ acquisition costs in 2000. Further, the discounts we report may only be the starting point for additional discounts provided to certain purchasers, as chargebacks, rebates, and other discounts may drive down the final sale price.

Concerns have been expressed that small providers either could not or do not obtain such favorable prices. Therefore, we surveyed a sample of physicians who billed Medicare for low volumes of chemotherapy drugs to see if they were able to obtain similar discounts. All of the low-volume purchasers who responded to our survey reported obtaining similar or better discounts than the widely available prices we had documented. More than one-third of these physicians reported belonging to GPOs and obtained the GPOs’ substantial discounts, while others said they had contracts with manufacturers and wholesalers.

As with physician-billed drugs, Medicare’s payments for pharmacy supplier-billed drugs generally far exceeded the prices available to these suppliers. For the drugs we examined, Medicare’s payments were at least $483 million more than what the suppliers paid in 2000. Further, the discounts we report were largest for products that could be obtained from more than one source. Inhalation therapy drugs administered through DME and oral immunosuppressive drugs represent most of the high-expenditure, high-volume drugs billed to Medicare by suppliers. Specifically:

Two drugs, albuterol and ipratropium bromide, used with DME for respiratory conditions, account for most of the pharmacy-supplied drugs paid for by Medicare. In 2001, they were available to pharmacy suppliers at prices that averaged, respectively, 85 percent and 78 percent less than AWP. Other high-volume DME-administered drugs had prices averaging 69 percent and 72 percent less than AWP. These findings are consistent with prior studies of the prices of similar drugs.

Two of the four high-volume oral immunosuppressives were available from wholesalers with average discounts of 14 percent and 77 percent. Wholesale price information on the other two was not available, but retail prices from online pharmacies were as much as 13 percent and 8 percent below AWP.
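As a worked example of these spreads, the short sketch below applies the 85 percent average discount cited for albuterol to a hypothetical AWP; the dollar amounts are illustrative only.

# Hypothetical AWP; the 85 percent average discount is the figure cited above.
awp = 40.00
medicare_payment = 0.95 * awp      # Medicare pays AWP less 5 percent
supplier_price = (1 - 0.85) * awp  # widely available price: 85 percent below AWP

excess = medicare_payment - supplier_price
print(f"Medicare pays ${medicare_payment:.2f}; the supplier pays ${supplier_price:.2f}")
print(f"Excess: ${excess:.2f}, or {excess / medicare_payment:.0%} of Medicare's payment")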
Medicare payment policies for administering or delivering a drug vary, depending on who provides the drug to the patient. Physicians are compensated directly for drug administration through the physician fee schedule. Pharmacy suppliers are compensated for dispensing inhalation therapy drugs used with a nebulizer, which make up the majority of their part B drug claims. No explicit payments are made to pharmacy suppliers for dispensing other drugs, but they may receive payments for equipment and supplies associated with DME-administered drugs. Both physicians and pharmacy suppliers contend that the excess in Medicare’s payments for part B-covered drugs compensates for related service costs that are inadequately reimbursed or not explicitly covered at all.

In prior work on the Medicare physician fee schedule, we concluded that the agency’s basic method of computing practice expense payments to physicians was sound. The implementation of this fee schedule, however, has been controversial. The Congress required that payments be budget neutral relative to prior spending. Medicare’s physician payments were, in the aggregate, seemingly adequate, as most physicians were participating in Medicare and accepting the program’s fees as payment in full. Because of the budget neutrality requirement, if one specialty’s fees increased on average, some others would have to decline. Such redistributions have occurred, and some are significant.

Oncologists, who represent the majority of physicians billing for drugs, argue that Medicare’s payments for administering chemotherapy are inappropriately low and that the excess Medicare drug payments are needed to offset their losses. Yet oncology is one of the specialties to gain under the resource-based physician fee schedule. In our separate study on physicians’ practice expenses under Medicare’s fee schedule, we will show that payments to oncologists were 8 percent higher than they would have been if the prior charge-based payment method had been maintained; the study will also show that oncologists’ payments relative to their estimated practice expenses, which include chemotherapy administration, were close to the average for all specialties.

While oncologists do not appear disadvantaged overall under the fee schedule, adjustments HCFA made to the basic method of computing payments reduced fees for some oncologists’ services. In those adjustments, HCFA modified the basic method of computing payments for services delivered without direct physician involvement, such as much of chemotherapy administration. The modifications were intended to correct for perceived low payments for these services. While they increased payments for some of these services, they lowered them for many others. Moreover, they increased payments on average for services involving physicians. Oncology payments were particularly affected, as services without physician involvement constitute about one-third of oncologists’ Medicare-billed services, compared to about 5 percent of all physician-billed services. Because of the modifications to the basic method, oncology practice expense payments for nonphysician chemotherapy administration were on average 15 percent lower, while payments for physician-administered services were 1 percent higher, than if HCFA had used the basic method.
Across all services, the modifications resulted in oncology practice expense payments that were 6 percent lower. Using the basic method for all services would eliminate these reductions and add about $31 million to oncology payments. Our study will recommend that CMS revert to the use of the basic methodology to determine practice expense payments for all services.

We will also recommend that CMS address a data adjustment it made that affects oncology payments under the new fee schedule. The agency reduced oncology’s reported supply expenses to keep from paying twice for drugs that are reimbursed separately by Medicare. Oncologists acknowledge that the supply expense estimate needed to be reduced but argue that the reduction was too large. We have recommended that the agency develop the appropriate data to more accurately estimate oncology supply expenses. Substituting a supply expense estimate based on a methodology developed by the American Society of Clinical Oncology would raise practice expense payments an additional $20 million, if done in conjunction with our recommendation to use the basic method to calculate payments for all services.

Oncologists have raised concerns about whether the data used to estimate their practice expenses constituted a representative sample of practices surveyed and whether these data reflect current practices in delivering services. How improvements in the data used to estimate practice expenses may affect payment levels is uncertain. Payments are based on the differences between the expenses of one specialty’s services and those of others. Some of the data concerns raised by oncologists may apply to other specialties as well, so additional and more current data may reveal that the relative cost of one service compared to others has changed only modestly. We are conducting a separate study to determine how CMS can improve and update the information used to estimate specialties’ practice expenses.

Similar to the physicians who bill for part B drugs, pharmacy suppliers and their representatives contend that the margin on the Medicare drug payment is needed to compensate them for costs not covered by Medicare—that is, the clinical, administrative, and other labor costs associated with delivering the drug. These include costs for billing and collection; facility and employee accreditation; licensing and certifications; and providing printed patient education materials. Medicare pays a dispensing fee of $5.00 for inhalation therapy drugs used with a nebulizer, which are the vast majority of the pharmacy-supplied drugs. This fee was instituted in 1994. It is higher than the dispensing fees paid by pharmacy benefit managers, which average around $2.00, and is comparable to those of many state Medicaid programs, which range from $2.00 to over $6.00. For other pharmacy-supplied drugs, Medicare makes no explicit payment for dispensing the drug.

Besides the profits on the DME-related drugs, pharmacy suppliers may receive additional compensation through the payment for DME and related supplies. Our prior work suggests that, for two reasons, Medicare DME and supply payments may exceed market prices. First, because of an imprecise coding system, Medicare carriers cannot determine from the DME claims they process which specific products the program is paying for. Medicare pays one fee for all products classified under a single billing code, regardless of whether their market prices are greatly below or above that fee.
Second, DME fees are often out of line with current market prices. Until recently, DME fees had generally been adjusted only for inflation because the process required to change the fees was lengthy and cumbersome. As a result, payment levels may not reflect changes in technology and other factors that could significantly change market prices.

Private insurers and federal agencies, such as VA, employ different approaches in paying for or purchasing drugs that may provide useful lessons for Medicare. In general, these payers make use of the leverage of their volume and of competition to secure better prices. The federal purchasers, furthermore, use that leverage to secure verifiable data on actual market transactions to establish their price schedules. Private payers can negotiate with some suppliers to the exclusion of others and arrive at terms without clear criteria or a transparent process. This practice would not be easily adaptable to Medicare, given the program’s size and need to ensure access for providers and beneficiaries. How other federal agencies have exercised their leverage may be more instructive and readily adaptable for Medicare.

VA and certain other government purchasers buy drugs based on actual prices paid by private purchasers—specifically, on the prices that drug manufacturers charge their “most-favored” private customers. In exchange for being able to sell their drugs to state Medicaid programs, manufacturers agree to offer VA and other government purchasers drugs at favorable prices, known as Federal Supply Schedule (FSS) prices. So that VA can determine the most-favored customer price, manufacturers provide information on price discounts and rebates offered to domestic customers and the terms and conditions involved, such as the length of contract periods and ordering and delivery practices. (Manufacturers must also be willing to supply similar information to CMS to support the data on the average manufacturer’s price, known as AMP, and the best price they report for computing any rebates required by the Medicaid program.)

VA has been successful in using competitive bidding to obtain even more favorable prices for certain drugs. Through these competitive bids, VA has obtained national contracts for selected drugs at prices that are even lower than FSS prices. These contracts seek to concentrate the agency’s purchases on one drug within therapeutically equivalent categories for the agency’s national formulary. In 2000, VA contract prices averaged 33 percent lower than corresponding FSS prices.

Medicare’s use of competition has been restricted to several limited-scale demonstration projects authorized by the Balanced Budget Act of 1997. In one of these demonstrations, under way in San Antonio, Texas, suppliers bid to provide nebulizer drugs, such as albuterol, to Medicare beneficiaries. While Medicare normally allows any qualified provider to participate in the program, under the demonstration only 11 bidders for nebulizer drugs were selected to participate. In exchange for restricting their choice of providers to the 11 selected, beneficiaries are not liable for any differences between what suppliers charge and what Medicare allows. Preliminary CMS information on the San Antonio competitive bidding demonstration suggests no reported problems with access and savings of about 26 percent for the inhalation drugs.
Our study on Medicare payments for part B drugs shows that Medicare pays providers much more for these drugs than necessary, given what the providers likely paid to purchase these drugs from manufacturers, wholesalers, or other suppliers. Unlike the market-based prices paid by VA and other federal agencies, Medicare’s fees are based on AWP, which is a manufacturer-reported price that is not based on actual transactions between seller and purchaser. Physicians contend that the profits they receive from Medicare’s payments for part B drugs are needed to compensate for inappropriately low Medicare fees for most drug administration services. Similarly, some pharmacy suppliers argue that Medicare’s high drug payments are needed because not all of their costs of providing the drugs are covered.
The pricing of Medicare's part B-covered prescription drugs--largely drugs that cannot be administered by patients themselves--has been under scrutiny for years. Most of the part B drugs with the highest Medicare payments and billing volume fall into three categories: those that are billed for by physicians and typically provided in a physician office setting, those that are billed for by pharmacy suppliers and administered through a durable medical equipment (DME) item, and those that are also billed by pharmacy suppliers but are patient-administered and covered explicitly by statute. Studies show that Medicare sometimes pays physicians and other providers significantly more than their actual costs for the drugs. In September 2000, the Health Care Financing Administration (HCFA)--now the Centers for Medicare and Medicaid Services--took steps to reduce Medicare's payment for part B-covered drugs by authorizing Medicare carriers, the contractors that pay part B claims, to use prices obtained in the Justice Department investigations of providers' drug acquisition costs. HCFA retracted this authority in November 2000 after providers raised concerns. GAO found that Medicare's method for establishing drug payments is flawed. Medicare pays 95 percent of the average wholesale price (AWP), which, despite its name, may be neither an average nor what wholesalers charge. It is a price that manufacturers derive using their own criteria; there are no requirements or conventions that AWP reflect the price of any actual sale of drugs by a manufacturer. Manufacturers report AWPs to organizations that publish them in drug price compendia, and Medicare carriers that pay claims for part B drugs base providers' payments on the published AWPs. In 2001, widely available prices at which providers could purchase drugs were substantially below the AWPs on which Medicare payments are based. For both physician-billed drugs and pharmacy supplier-billed drugs, Medicare payments often far exceeded widely available prices. Physicians and pharmacy suppliers contend that the excess payments for covered drugs are necessary to offset what they claim are inappropriately low or nonexistent Medicare payments for services related to these drugs. For delivering pharmacy supplier-billed drugs, Medicare's payment policies are uneven: pharmacy suppliers billing Medicare receive a dispensing fee for one drug type--inhalation therapy drugs--but there are no similar payments for other DME-administered or oral drugs. Other payers and purchasers, such as health plans and the Department of Veterans Affairs, use different approaches to pay for or purchase drugs that may be instructive for Medicare. In general, they make use of the leverage from their volume and competition to secure better prices.
As we reported in 2009, more than 5 million third parties submitted more than 82 million miscellaneous income information forms (Form 1099-MISC) to the IRS reporting more than $6 trillion in payments for tax year 2006. Third-party payers are businesses, governmental units, and other organizations that make payments to other businesses or individuals. Payers must submit payment information on 1099-MISCs to IRS when they make a variety of payments labeled miscellaneous income. Payees, or those being compensated, are required to report the payments on their income tax returns.

The types of payments reportable on a Form 1099-MISC—shown in figure 1—and their reporting thresholds vary widely. Under existing law, information reporting is required for payments by persons engaged in a trade or business to nonemployees for services of $600 or more (called nonemployee compensation), royalty payments of $10 or more, and medical and health care payments made to physicians or other suppliers (including payments by insurers) of $600 or more. However, personal payments, such as a payment by a homeowner to a contractor to paint his or her personal residence, are not required to be reported because these payments are not made in the course of a payer’s trade or business. Existing regulations also exempt certain payments to a corporation, payments for merchandise, wages paid to employees, and payments of rent to real estate agents. The expansion of information reporting to payments to corporations and for merchandise will apply to payments made after December 31, 2011.

Payers must provide 1099-MISC statements to payees by the end of January. Payers submitting fewer than 250 1099-MISCs may submit paper forms, which are due to IRS by the end of February. Payers submitting paper 1099-MISCs are required to use IRS’s official forms or substitute forms with special red ink readable by IRS’s scanning equipment. Photocopies and copies of the 1099-MISC form downloaded from the Internet or generated from software packages in black ink do not conform to IRS processing specifications. Payers submitting 250 or more 1099-MISCs are required by IRS to submit the forms electronically. Most 1099-MISCs for tax year 2006 were submitted electronically. However, most payers submitted small numbers of 1099-MISCs, and most payers submitted paper 1099-MISCs.

By matching 1099-MISCs received from payers with what payees report on their tax returns, IRS can detect underreporting of income, including failure to file a tax return. Figure 2 shows the automated process IRS uses to detect mismatches between nonemployee compensation and other payments reported on 1099-MISCs and payees’ income tax returns. The Nonfiler program handles cases where no income tax return was filed by a 1099-MISC payee. The Automated Underreporter (AUR) program handles cases where a payee filed a tax return but underreported 1099-MISC payments. AUR’s case inventory includes payee mismatches over a certain threshold, and IRS has a methodology using historical data to select cases for review. AUR reviewers manually screen the selected cases to determine whether the discrepancy can be resolved without taxpayer contact. For the remaining cases selected, IRS sends notices asking the payee to explain discrepancies or pay any additional taxes assessed.
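The Python sketch below illustrates the matching logic just described: third-party payments are aggregated by payee TIN and compared with filed returns, with nonfilers and over-threshold mismatches routed to the two programs. The data, TINs, and screening threshold are hypothetical, and the actual IRS programs apply more elaborate selection criteria.

from collections import defaultdict

# Hypothetical 1099-MISC submissions: (payee TIN, amount reported by the payer).
forms_1099_misc = [
    ("111-11-1111", 12_000.00),
    ("111-11-1111", 3_500.00),
    ("222-22-2222", 9_000.00),
]

# Income each payee reported on a filed return; TIN 222-22-2222 filed no return.
filed_returns = {"111-11-1111": 10_000.00}

SCREENING_THRESHOLD = 1_000.00  # hypothetical mismatch threshold for case selection

# Aggregate third-party payments by payee, then compare with filed returns.
payments_by_tin = defaultdict(float)
for tin, amount in forms_1099_misc:
    payments_by_tin[tin] += amount

for tin, total_paid in payments_by_tin.items():
    if tin not in filed_returns:
        print(f"{tin}: no return filed -> Nonfiler program")
    elif total_paid - filed_returns[tin] > SCREENING_THRESHOLD:
        gap = total_paid - filed_returns[tin]
        print(f"{tin}: ${gap:,.2f} apparent underreporting -> AUR screening")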
Third-party information reporting is widely acknowledged to increase voluntary tax compliance, in part because taxpayers know that IRS is aware of their income. As shown in figure 3, voluntary reporting compliance is substantially higher for income subject to withholding or information reporting than for other income. For example, for wages and salaries, which are subject to withholding and substantial information reporting, taxpayers have consistently misreported an estimated 1 percent of their income. For income with little or no information reporting, the tax year 2001 estimated percentage was about 54 percent. IRS has long recognized that if payments made to businesses are not reported on 1099-MISCs, it is less likely that they will be reported on payee tax returns.

In a 2007 report we highlighted the connection between a lack of information reporting and the contribution of sole proprietors, a significant portion of the small business community, to the tax gap. IRS estimated the gross tax gap—the difference between what taxpayers actually paid and what they should have paid on a timely basis—to be $345 billion for tax year 2001, the most recent estimate made. IRS also estimated that it will collect $55 billion, leaving a net tax gap of $290 billion. IRS estimated that a large portion of the gross tax gap, $197 billion, was caused by the underreporting of income on individual tax returns. Of this, IRS estimated that $68 billion was caused by sole proprietors underreporting their net business income. The $68 billion does not include other sole proprietor contributions to the tax gap, including taxes not paid because of failure to file a tax return, underpayment of the tax due on income that was correctly reported, and underpayment of employment taxes. Nor does it include tax noncompliance by other types of businesses, such as partnerships and S corporations. In the report, we noted that a key reason for this noncompliance was that sole proprietors were not subject to tax withholding, and only a portion of their net business income was reported to IRS by third parties. Tax noncompliance by some small businesses is unfair to businesses and other taxpayers that pay their taxes—tax rates must be higher to collect the same amount of revenue.

1099-MISCs are a powerful tool through which IRS can encourage voluntary compliance by payees and detect underreported income of payees that do not voluntarily comply. Increasing the number of 1099-MISCs IRS receives from payers would in turn increase the information available for use in IRS’s automated matching programs to detect tax underreporting, including failure to file a tax return. For tax year 2004 (the last full year available for our 2009 report), the AUR program assessed $972 million in additional taxes for payee underreporting detected using 1099-MISC information. To help IRS improve its use of 1099-MISC information, we recommended in 2009 that IRS collect data to help refine its matching process and select the most productive cases for review. In response to our recommendation, IRS reviewed a sample of AUR cases and plans to modify its tax year 2010 matching criteria for 1099-MISC information.

Information reporting has allowed IRS to use its computerized matching programs as an alternative to audits to address some issues. The matching programs generally require less contact with taxpayers and thus are less intrusive and involve less taxpayer time. In addition, information reporting may reduce taxpayers’ costs of preparing their tax returns.
In a 2006 report we described how additional information reporting on the cost basis of securities transactions could reduce taxpayers’ need to track the basis of securities they sold. The extent to which 1099-MISC reporting reduces taxpayer recordkeeping costs is not known, but to the extent it reduces the need to track receipts by year from each payer, it could have some effect on those costs.

IRS does not know the magnitude of 1099-MISC payer noncompliance or the characteristics of payers that fail to comply with the reporting requirements. Without an estimate of payer noncompliance, IRS has no way of determining to what extent 1099-MISC payer noncompliance creates a window of opportunity for payees to underreport their business income and go undetected by IRS. Research would be key for IRS in developing a cost-effective strategy to identify payers that never submit 1099-MISCs. In 2009, we recommended that IRS study the extent of 1099-MISC payer noncompliance and its contribution to the tax gap, as well as the nature and characteristics of those payers who do not comply. In response to our recommendations, IRS plans to study payer noncompliance through its National Research Program studies, with results estimated to be available in December 2015.

Existing information reporting requirements impose costs on the third-party businesses required to file Form 1099-MISC, and the expanded reporting requirements will impose new costs. To comply with information reporting requirements, third parties incur costs internally or pay external parties. In-house costs may involve additional recordkeeping costs beyond the normal recordkeeping costs of running a business, as well as the costs of preparing and filing the information returns themselves. If the third parties go outside their organizations for help, they would incur out-of-pocket costs to buy software or pay for others to prepare and file their returns.

Data on the magnitude of these information reporting costs are not readily available because taxpayers generally do not keep records of the time and money spent complying with the tax system. A major difficulty in measuring tax compliance costs, including the costs of filing information returns, is disentangling accounting and recordkeeping costs due to taxes from the costs that would have been incurred in the absence of the federal tax system. Data on compliance costs are typically collected by contacting a sample of taxpayers, through surveys or interviews, and asking them for their best recollection of the total time and money they spent on particular compliance activities. The quality of the resulting data depends on the ability of taxpayers to accurately recall the amount of time and money they spent.

In the nine case studies we conducted in 2007, filers of information returns told us that existing information return costs, both in-house and for external payments, were relatively low. While these nine case studies cannot be generalized to the entire population, they do provide examples of costs and insights from the perspective of organizations of different sizes and from different industries, and of organizations filing their own information returns and those filing on behalf of others. In-house compliance costs include the costs of getting taxpayer identification numbers (TIN), buying software, tracking reportable payments, filing returns with IRS, and mailing copies to taxpayers.
One organization with employees numbering in the low thousands estimated that its costs of preparing and filing a couple hundred Forms 1099, which include recordkeeping and distinguishing goods from services, were a minimal addition to its normal business costs. One small business employing under five people told us of possibly spending 3 to 5 hours per year filing Form 1099 information returns manually, using an accounting package to gather the information. An organization with more than 10,000 employees estimated spending less than 0.005 percent of its yearly staff time on preparing and filing Forms 1099, including recordkeeping.

Unit prices for services provided to payers by selected software vendors, service bureaus, and return preparers decreased as the number of forms handled increased. Two external parties selling services reported prices for preparing and filing Forms 1099 with IRS of about $10 per form for 5 forms to about $2 per form for 100 forms, with one of them charging about $0.80 per form for 100,000 forms. These prices do not include the payers’ recordkeeping costs. This relationship of price to volume for the entities we studied is consistent with what the studies we have seen show about the role of fixed costs and economies of scale in complying with the tax code; we know of no similar studies of information returns.
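The brief sketch below tabulates those reported per-form prices to show the economies of scale at work; note that the figures came from different vendors and are combined here purely for illustration.

# Per-form prices reported in our case studies; combined from different
# vendors for illustration only.
reported_prices = {5: 10.00, 100: 2.00, 100_000: 0.80}  # forms filed -> price per form

for form_count, price_per_form in sorted(reported_prices.items()):
    total_cost = form_count * price_per_form
    print(f"{form_count:>7,} forms at ${price_per_form:.2f} each = ${total_cost:,.2f} total")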
Although our case study organizations indicated that 1099 recordkeeping and reporting costs are relatively low, costs may not be as low as they could be. According to IRS, advisory group members, and others we interviewed for our 2009 report, payers are confronted with a variety of impediments to preparing and submitting 1099-MISC forms. Some payers that do not submit their 1099-MISCs as required may be unaware of their 1099-MISC reporting responsibilities. Other payers may be confused about whether payments are reportable because of the different dollar reporting thresholds and the general exemption for payments to corporations under current law. Some payers misreport or neglect to report payee TINs and could be subject to penalty and required to do backup withholding on 1099-MISC payments to payees with bad TINs. For the large number of payers each submitting a few 1099-MISCs, IRS does not offer a fillable form on its Web site and requires payers to submit scannable red ink forms, but some payers submit black and white 1099-MISCs anyway.

Although businesses will face additional costs for each additional Form 1099, some options for modifying the 1099-MISC reporting requirements could help mitigate the burden and promote payer reporting compliance. Table 1 highlights options we previously reported, noting those that were proposed by IRS, IRS advisory groups, and the National Taxpayer Advocate. Our list of 1099-MISC impediments and options is not exhaustive, nor is the list of pros and cons associated with the options. Improved IRS guidance and education are relatively low-cost options, but most taxpayers use either tax preparers or tax software to prepare their tax returns and may not read IRS instructions and guidance. While taxpayer service options may improve compliance for those that are inadvertently noncompliant, they are not likely to affect those that are intentionally noncompliant. Some options to change 1099-MISC reporting requirements require congressional action, and other options would be costly for IRS to implement. Where an option involves particular issues, such as cost or taxpayer burden, we note them in our table.

As we reported in 2009, multiple approaches could help IRS mitigate reporting costs and promote payer compliance with 1099-MISC reporting requirements. For example, the evidence shows that the benefits outweigh the costs for information reporting on payments to corporations. For other options, it is not clear whether the benefits outweigh the associated costs, and additional research by IRS could help to evaluate the feasibility of more costly options, such as allowing black and white paper 1099-MISCs. Action to move forward on options to target outreach to specific payer groups or clarify guidance to reduce common reporting mistakes would hinge on IRS first conducting research to understand the magnitude of and reasons for payer noncompliance.

In 2009, we recommended two actions that IRS could take to help payers understand their 1099-MISC reporting responsibilities:

Provide payers with a chart to identify reportable payments. IRS disagreed with our recommendation and stated that the Form 1099-MISC instructions already list which payments are reportable and explain the rules for specific payment types. We believe that a chart would provide taxpayers with a quick guide for navigating the Form 1099-MISC instructions, which are already eight pages long under the current reporting requirements.

Evaluate adding a new checkbox on business tax returns for payers to attest to whether they submitted their 1099-MISCs as required. IRS also disagreed with this recommendation and stated that a similar question was removed from the corporate tax return after the Paperwork Reduction Act of 1980 was enacted. We believe results from the evaluation we recommend would be useful in weighing the benefits and burdens associated with a checkbox option.

To reduce the submission burden facing the many payers submitting small numbers of 1099-MISCs, we also recommended that IRS evaluate the cost-effectiveness of eliminating or relaxing the red ink requirement to allow payers to submit computer-generated black and white 1099-MISCs. In April 2009, IRS conducted a test to determine the labor required to process a sample of 4,027 red-ink 1099-MISCs versus the same documents photocopied. IRS told us that, using the same scanning equipment and employees, the red-ink sample took 2 hours and 9 minutes to process, versus 28 hours and 44 minutes to process and manually key the photocopy sample. Based on the test results, IRS decided to maintain the red ink requirement to minimize labor costs. We have not reviewed the results of the IRS test.

Our prior work did not assess requiring 1099-MISC reporting on payments for goods. Some of our findings and recommendations may be relevant, but we do not know the extent of their relevance.

Madam Chair, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For questions about this statement, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Amy Bowser, Bertha Dong, Lawrence Korb, MaryLynn Sergent, and Cheri Truett.

Tax Gap: IRS Could Do More to Promote Compliance by Third Parties with Miscellaneous Income Reporting Requirements. GAO-09-238. Washington, D.C.: January 28, 2009.
Tax Gap: Actions That Could Improve Rental Real Estate Reporting Compliance. GAO-08-956. Washington, D.C.: August 28, 2008.
Highlights of the Joint Forum on Tax Compliance: Options for Improvement and Their Budgetary Potential. GAO-08-703SP. Washington, D.C.: June 2008.
Tax Administration: Costs and Uses of Third-Party Information Returns. GAO-08-266. Washington, D.C.: November 20, 2007.
Business Tax Reform: Simplification and Increased Uniformity of Taxation Would Yield Benefits. GAO-06-1113T. Washington, D.C.: September 20, 2006.
Capital Gains Tax Gap: Requiring Brokers to Report Securities Cost Basis Would Improve Compliance if Related Challenges Are Addressed. GAO-06-603. Washington, D.C.: June 13, 2006.
Tax Policy: Summary of Estimates of the Costs of the Federal Tax System. GAO-05-878. Washington, D.C.: August 26, 2005.
Tax Administration: IRS Should Continue to Expand Reporting on Its Enforcement Efforts. GAO-03-378. Washington, D.C.: January 31, 2003.
Tax Administration: Benefits of a Corporate Document Matching Program Exceed the Costs. GAO/GGD-91-118. Washington, D.C.: September 27, 1991.
Third parties, often businesses, reported more than $6 trillion in miscellaneous income payments to the Internal Revenue Service (IRS) in tax year 2006 on Form 1099-MISC. Payees are to report this income on their tax returns. It has long been known that payments not reported on 1099-MISCs are less likely to be reported on payee tax returns. In 2010, the reporting requirements were expanded, effective in 2012, to cover payments for goods and payments to corporations, both previously exempt. This testimony summarizes recent GAO reports and provides information on (1) benefits of the current requirements in terms of improved compliance by taxpayers and reduced taxpayer recordkeeping, (2) costs to the third-party businesses of the current 1099-MISC reporting requirement, and (3) options for mitigating the reporting burden for third-party businesses. GAO has not assessed the expansion of 1099-MISC reporting to payments for goods. Information reporting is a powerful tool for encouraging voluntary compliance by payees and helping IRS detect underreported income. Information reporting may also reduce taxpayers' costs of preparing their tax returns, although by how much is not known. IRS estimated that $68 billion of the annual $345 billion gross tax gap for 2001, the most current estimate available, was caused by sole proprietors underreporting their net business income. A key reason for this noncompliance was that sole proprietors were not subject to tax withholding and only a portion of their net business income was reported to IRS by third parties. The benefits from information reporting are affected by payers' compliance with reporting requirements and IRS's ability to use the information in its process that matches third-party data with tax returns. However, IRS does not have estimates of the number or characteristics of payers that fail to submit 1099-MISCs as required. To improve its use of 1099-MISC information, IRS has collected data to help identify ways to refine its matching process and select the most productive cases for review, as GAO recommended in 2009. Current 1099-MISC requirements impose costs on the third parties required to file them. The magnitude of these costs is not easily estimated because payers generally do not track them separately from other accounting costs. In nongeneralizable case studies conducted in 2007 with four payers and five vendors that file information returns on behalf of their clients, GAO was told that existing information return costs were relatively low. One small business employing fewer than five people told GAO it spent perhaps 3 to 5 hours per year filing Form 1099 information returns manually, using an accounting package to gather the information. Two vendors reported prices for preparing and filing Forms 1099 ranging from about $10 per form for 5 forms to about $2 per form for 100 forms, with one charging about $0.80 per form for 100,000 forms. These prices, however, did not include clients' recordkeeping costs. Payers face a variety of impediments in preparing and submitting 1099-MISC forms, including complex rules and an inconvenient submission process. For example, payers must determine whether payees are incorporated, must obtain the payees' taxpayer identification numbers, and must use special forms if filing on paper. A variety of options exist for mitigating the costs of filing Form 1099-MISC. Most have pros and cons.
IRS has already exempted payments that will be reported to IRS by other means, such as those made by credit card. Other options include improving IRS guidance and education; adding a check-the-box question to business tax forms that would prompt return preparers to ask their clients whether they have complied with 1099-MISC reporting requirements; waiving late-submission penalties for first-time payers; raising the payment reporting threshold; initially limiting the types of payments covered; having IRS develop an online filing capability; and allowing paper filers to submit computer-generated black-and-white 1099-MISCs rather than IRS's printed forms. GAO is not making new recommendations in this testimony. In 2009, GAO suggested that Congress consider requiring payers to report service payments to corporations. GAO did not study reporting of payments for goods. Other prior GAO recommendations included ways for IRS to improve its use of the 1099-MISC information it receives. IRS agreed with six of eight recommendations and is taking action to address them.
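The processing test and the vendor prices described above reduce to simple per-form arithmetic. The Python sketch below merely restates the reported figures on a per-form basis; it is illustrative only and assumes nothing about IRS's or the vendors' actual cost structures.

```python
# Per-form restatement of figures reported above; not an IRS or vendor
# cost model.

SAMPLE_SIZE = 4_027  # 1099-MISCs in the April 2009 IRS scanning test

red_ink_seconds = (2 * 60 + 9) * 60       # 2 hours 9 minutes
photocopy_seconds = (28 * 60 + 44) * 60   # 28 hours 44 minutes, manually keyed

per_form_red = red_ink_seconds / SAMPLE_SIZE
per_form_copy = photocopy_seconds / SAMPLE_SIZE
print(f"Red-ink forms: {per_form_red:.1f} seconds per form")
print(f"Photocopies:   {per_form_copy:.1f} seconds per form "
      f"({per_form_copy / per_form_red:.0f}x slower)")

# Vendor prices from the 2007 case studies (price per form at three
# volume points); total preparation-and-filing cost at each volume:
vendor_prices = {5: 10.00, 100: 2.00, 100_000: 0.80}
for volume, price in vendor_prices.items():
    print(f"{volume:>7} forms at ${price:.2f}/form = ${volume * price:,.2f}")
```

On these figures, manual keying took roughly 13 times as long per form, which helps explain IRS's decision to retain the red-ink requirement.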
Inland ports along the section of the Mississippi River between St. Louis, Missouri, and Baton Rouge, Louisiana, provide "on and off ramps" for shippers using the river, such as agricultural or chemical-processing companies, that need to move large amounts of bulk commodities. The 13 ports selected for our review vary in size, ranging from the Port of Memphis, Tennessee, the fifth-largest inland port in the United States, to small ports, such as the Port of Osceola, Arkansas, that may serve one or two companies. Inland ports may be located on the banks of the river or in harbors off the main channel of the river. See figure 1 for the locations of inland ports on the Mississippi River, including the 13 ports we selected for this review (the starred ports in the figure). As shown in figure 2, a number of entities are involved in moving commodities through ports. Shippers may have facilities, such as grain silos, inside the port, or they may be located offsite and simply use the port to receive or ship commodities. Shippers enter into contracts with barge companies to move commodities along the river. If the shipper is sending cargo, then a barge company will drop off empty barges that the shipper loads. The barge company then picks up the loaded barges, lashes them together into a flotilla or "tow" (a number of barges or vessels), and transports the barges along the river to their destination. Within the port, a harbor services/fleeting company will move individual barges to docks within the port for loading or unloading, use "fleeting areas" along the sides of the harbor to store barges waiting to be moved, and take barges back out to the river when they are ready to be added to a tow. On the landside of the port, trucks and trains deliver or pick up commodities, and a variety of port tenants, such as grain and fertilizer companies, have on-site facilities to store and move freight. The Mississippi River carries a large amount of sediment, which travels downstream and can accumulate in various spots (shoaling) within the river's main channel and harbors. If shoaling builds too high or the river level drops, these spots can become impassable for fully loaded barges. See figure 3 for an example of shoaling at the mouth of a harbor. To help maintain navigable waters, some inland harbors require dredging. A vessel called a "dredge" removes sediment from the bottom of the harbor and deposits it elsewhere. Dredging needs vary among ports. For example, industry and port officials told us that harbors located off the main stem of the river provide port and tenant infrastructure some protection from the river's current and large debris in the river, but these harbors also tend to accumulate more sediment, particularly at the mouth, or entrance, of the harbor. Finally, flooding events can deposit large amounts of sediment in the channels and harbors, which becomes more problematic as water levels fall. The Corps is responsible for dredging the nation's federally authorized inland waterways, harbors, and channels, which are those that Congress has defined in statute as federal projects, approving their construction and maintenance by the Corps to certain dimensions (depth, width, and length). To maintain the harbors and channels, the Corps may hire contractors or use its own vessels to dredge the harbors. The Corps does not dredge outside of the federally authorized areas, but ports and their tenants may dredge around their private docks and in other areas not maintained by the Corps.
Dredging is part of the Corps’ Civil Works navigation mission, which includes the provision of safe, reliable, efficient, effective, and environmentally sustainable waterborne transportation systems for the movement of commerce, national security needs, and recreation in the United States. The Corps is also responsible for the operation and maintenance of locks and dams, as well as a number of other missions, such as flood risk management and hydropower. The Corps is organized into three tiers: a national headquarters in Washington, D.C.; 8 regional divisions; and 38 Civil Works districts nationwide. District offices are generally responsible for managing dredging projects located within their district boundaries, including planning, awarding, and administering maintenance-dredging contracts with industry. Regional oversight is provided through the division. All three tiers are involved in the budget development process. For example, districts will compile a list of funding requirements for work packages in the districts (for example, dredging an inland harbor). These work packages are ranked and reviewed by the division and headquarters, and the approved packages become the basis for the President’s Budget proposal for the Corps’ Civil Works program. The Corp’s fiscal year appropriation, as passed by Congress, may provide more or less funding than what was requested in the President’s Budget proposal. The federal government uses a variety of methods to fund transportation networks. The Corps pays the dredging costs for federally authorized harbors and channels with funds appropriated by Congress and generally reimbursed from the Harbor Maintenance Trust Fund. The trust fund is supported through collections of the Harbor Maintenance Tax, which is a tax collected on imports, domestic shipments, Foreign-Trade Zone admissions, and passengers primarily at coastal ports. The annual cost to fully dredge the harbors at each of the 13 selected inland ports varies by harbor, with one harbor requiring about $300,000 to be fully dredged, and another requiring over $3 million (although this could also change each year, based on flows from the Mississippi River and the conditions of each harbor). Prior to 2010, Congress used line-item appropriations to provide dredging funds for the harbors of specific ports. In contrast to the Harbor Maintenance Tax—which is paid by shippers primarily using coastal ports (and thus, is not directly linked to use of the inland ports)—the maintenance of other transportation networks, such as highways, is paid by users through a fee or tax. In addition, state and local governments are required to match federal funds for transportation infrastructure, such as highways and landside infrastructure at ports. From 2010 through 2015, the 13 selected ports we reviewed moved the following types of freight: agricultural commodities (primarily soybeans, corn, and rice); petroleum products; crude materials (sand, gravel, and similar materials); chemicals; coal; and primary manufactured goods (such as lime and concrete). As shown in figure 4, the bulk of the freight tonnage moved through these ports was composed of agricultural commodities, petroleum products, and crude materials. 
With respect to the contribution of the selected ports to the total tonnage moved on the Mississippi River, the 13 ports included in our review represented 15 percent of all agricultural freight, 9 percent of all crude materials, 8 percent of all primary manufactured goods, and 9 percent of all tonnage moved on the river from 2010 through 2015. The vast majority (99 percent) of the agricultural freight departing from the selected ports went downriver to deep-draft coastal ports primarily used for export purposes, such as Baton Rouge, South Louisiana, New Orleans, and Houston. However, individual ports varied with respect to the type of freight moved through the port, with some ports specializing in certain commodities. For example, as shown in table 1, eight ports primarily transported agricultural commodities from 2010 through 2015, and the remaining five ports transported a range of commodity types. The industries located in a port's geographic area tend to influence the products handled by that port. For example, port stakeholders told us, and our review of Corps data confirmed, that a number of ports primarily serve the local farming industry by shipping out agricultural commodities and bringing in fertilizer through the port. In addition, through site visits and document reviews, we found that the Port of Southeast Missouri has a lead facility in its region, and a substantial amount of the freight moved through that port is lead concentrate, classified as crude materials. As shown in table 2, the amount of tonnage transported through individual ports can fluctuate significantly from year to year; however, the 13 selected ports fell into three broad groups. For the purposes of this report, we describe these groups in relation to the Corps' definition of low-, moderate-, and high-use ports, which is based on the 5-year average of annual tonnage transported through the port (illustrated in the sketch below). While tonnage fluctuated annually, based on the 5-year averages, 6 ports transported less than 1 million tons; 6 ports transported 1 million to less than 10 million tons; and 1 port consistently transported over 10 million tons. The total annual amount of freight transported fluctuated substantially at individual ports, making it difficult to identify a consistent trend (see table 2). None of the 13 selected ports consistently experienced a year-to-year increase or decrease throughout the 6-year period. Stakeholders we interviewed told us that some of the fluctuations in the total tonnage moved at individual ports are due to increases and declines in specific commodities handled by the port. For example, at the Port of Pemiscot County, total tons of freight increased by about 465 percent from 2012 to 2013 because of a large spike in crude petroleum shipments. The port's total tonnage declined in later years as those shipments declined. According to stakeholders, changes in individual commodities are also sometimes related to changes in export market demand or crop yields, or situations in which freight movement is impeded by harbor conditions (discussed later in the report). For example, stakeholders told us that if export market conditions improve for American agricultural commodities due to a drought in another country, farmers may sell more of their product, as opposed to storing it when prices are low. In addition, crop yield per acre determines the amount of crop harvested and can be affected by weather, seed quality, and other factors.
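The Corps' usage groupings reduce to simple thresholds on a port's 5-year average annual tonnage. The following minimal sketch, in Python, is our illustration of that definition, not a Corps tool; the tonnage figures are hypothetical.

```python
# Illustrative sketch of the Corps' low-/moderate-/high-use groupings,
# which are based on the 5-year average of annual tonnage moved.

def usage_group(annual_tons: list[float]) -> str:
    """Classify a port by its 5-year average annual tonnage."""
    average = sum(annual_tons) / len(annual_tons)
    if average < 1_000_000:
        return "low use"
    if average < 10_000_000:
        return "moderate use"
    return "high use"

# Hypothetical figures: annual tonnage can swing widely (for example, a
# 465 percent jump in a single year), which the 5-year average smooths.
example_port = [150_000, 850_000, 4_800_000, 900_000, 700_000]
print(usage_group(example_port))  # "moderate use" despite four low years
```

A port with one anomalous year, such as the crude petroleum spike at the Port of Pemiscot County, can therefore land in a higher group than its typical traffic alone would suggest.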
Stakeholders also told us that individual businesses decide where and by which mode to transport their commodities based on many different long-term and short-term factors, such as transportation time and cost, and market demand, among others. A majority of the stakeholders we interviewed, as well as officials from USDA and the Corps, cited funding constraints as a challenge that prevents the Corps from fully dredging all inland harbors, including the harbors at the selected ports. Port stakeholders told us that their harbors generally need annual dredging, particularly at the entrance to the harbor, where sediment flowing down the river tends to accumulate. Corps officials in one district agreed that the dredging needs for the ports are fairly consistent, although weather events and river levels can affect the amount of dredging needed. According to Corps officials, the Corps has dredged most of the 13 selected ports' harbors in most years from 2010 through 2016 (see table 3). However, port authorities we interviewed and Corps officials noted that the Corps does not dredge all of the harbors to their authorized dimensions (length, width, or depth), primarily due to funding constraints. According to local Corps officials, the Corps needed approximately $20.6 million to dredge all of the harbors and channels associated with the 13 selected ports to their full dimensions in fiscal year 2016, but received approximately $13.1 million. While Congress provided much more than this for the Corps to address operations and maintenance needs, the Corps must allocate operation and maintenance funds among hundreds of harbors and waterway projects. Nonetheless, according to Corps officials in one district, the Corps has been able to distribute the funds so that it can dredge enough of each harbor in that district to keep barges moving. Some stakeholders echoed this sentiment, stating that the Corps does a good job working with the funds that it receives. Even so, port stakeholders provided some examples of how unmet dredging needs have negatively affected freight movement at these ports, particularly by limiting the amount of freight moved per barge or creating temporary harbor closures. Some of these stakeholders noted that these situations can in turn lead to increased transportation costs and freight congestion, which can have negative consequences for the industries reliant on these ports, particularly agricultural industries. Light-loading: Light-loading refers to situations in which shippers cannot load a barge to its full capacity (see figure 5). Shippers have to light-load a barge when a harbor is experiencing shoaling because a fully loaded barge would not have enough clearance to pass over the shoaled areas of the harbor. Light-loading may involve loading a barge so that it sits anywhere from several inches to a few feet higher in the water, and stakeholders explained that every inch taken off the barge's draft corresponds to about 15 to 18 fewer tons of cargo on the barge. Light-loading was the negative effect most commonly cited by port stakeholders when discussing the effects of unmet dredging needs. Since light-loaded barges carry less cargo than fully loaded barges, shippers must use more barges to move the same amount of product, an outcome that may lead to increased transportation costs (the sketch below works through the arithmetic).
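To make the stakeholders' rule of thumb concrete, the Python sketch below works through the light-loading arithmetic. The 15-to-18-tons-per-inch range comes from the stakeholders quoted above; the barge capacity, draft reduction, and shipment size are illustrative assumptions, not data from the ports we reviewed.

```python
# Illustrative light-loading arithmetic based on the stakeholder-reported
# rule of thumb that each inch of reduced draft removes about 15-18 tons.
import math

TONS_PER_INCH_RANGE = (15, 18)  # reported range
FULL_LOAD_TONS = 1_500          # assumed capacity of a typical dry-cargo barge

def light_load_impact(inches_reduced: int, shipment_tons: int) -> None:
    barges_full = math.ceil(shipment_tons / FULL_LOAD_TONS)
    for tons_per_inch in TONS_PER_INCH_RANGE:
        lost = inches_reduced * tons_per_inch
        per_barge = FULL_LOAD_TONS - lost
        barges_light = math.ceil(shipment_tons / per_barge)
        print(f"{tons_per_inch} tons/inch: {lost} tons lost per barge; "
              f"{barges_light} barges needed instead of {barges_full}")

# A 12-inch (1-foot) light load on a 9,000-ton shipment:
light_load_impact(12, 9_000)
# 15 tons/inch: 180 tons lost per barge; 7 barges needed instead of 6
# 18 tons/inch: 216 tons lost per barge; 8 barges needed instead of 6
```

Under these assumptions, even a 1-foot reduction in draft produces the seven-barges-instead-of-six outcome that one shipper describes below.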
For example, one shipper explained that its agreement with a barge company requires the shipper to pay as if each barge is carrying a certain amount of tonnage, regardless of the load size. Thus, during a period of light-loading, the shipper would have to pay the barge company to move seven barges instead of six, and would be required to pay as if the barges were carrying full loads. In addition to increased transportation costs, stakeholders said that since light-loading requires the use of more barges to move the same amount of freight, barge shortages can occur if light-loading is widespread. One port stakeholder told us that during periods of light-loading, it takes more time to load the same amount of product onto barges (because of the need to use multiple barges and the time it takes to switch each barge out), which can lead to long lines of trucks waiting to unload their cargo at the port. Harbor closures: Port stakeholders provided examples in which their ports were shut down due to unmet dredging needs. Several stakeholders, as well as the Corps, cited the 2012 drought as particularly problematic. Over the course of 15 months, the Mississippi River fluctuated from historic flood stages in 2011 to record lows in 2012, dropping over 50 feet in some places. A significant amount of sediment from the 2011 flood settled along the river and in harbors, and as the water level fell, numerous harbors along the river were shoaled in and needed dredging. In this case, the shoaling occurred during the harvest season, which is the busiest time of year for the agricultural ports. Representatives from two ports told us they were shut down for 2 to 3 months, with barges full of grain stuck inside the harbor. Stakeholders said that grain silos at ports filled because barges could not get out of the harbors, and farmers were at risk of losing grain to spoilage in the field. Corps officials told us that four of the ports' harbors in one district were initially closed during the 2012 drought, so the Corps worked to dredge two of those harbors so that agricultural shippers could move cargo. Harbor shutdowns led to increased costs as companies began trucking product to other ports. (One company estimated it lost $5 million.) Some companies said that the increased costs led to downward pressure on the prices paid to farmers for grain. Port shutdowns affected non-agricultural stakeholders as well. For example, one port representative said that the area surrounding the port ran out of gas and diesel fuel three times because temporary harbor closures made it difficult to bring in fuel by barge. The 2012 drought was an unusual event, but the experiences at ports during that time provide useful insight into the critical nature of dredging at inland harbors. Moreover, port and tenant representatives provided other examples in which ports were temporarily shut down for a few weeks to as much as a month in more typical years. For example, a tenant told us that due to harbor shoaling at its port, the company spent $98,000 to reroute 14 incoming barges to another location on the river, unload the cargo at that location, and truck the cargo into its port. Port and industry representatives explained that the increased transportation costs created by light-loading and harbor closures are of particular concern because the affected industries operate on very small profit margins.
In particular, agricultural companies and trade associations noted that one of the main reasons their exports can compete in the global market is their low transportation costs. Some industry representatives raised concerns about their ability to switch to shipping cargo by truck or rail, explaining that shipping by barge is far more economical. In addition, port stakeholders noted that funding constraints that limit the Corps' ability to fully dredge their ports have led to increased costs for them. Although port stakeholders varied in their financial ability to pay for dredging, some stakeholders reported that they took their own steps to open their harbor by hiring a dredge or excavating part of the harbor themselves. For example, one port representative said that his port recently spent an additional $75,000 to further dredge the harbor because the Corps did not have enough funding to fully dredge it. In addition, another port used funds from other sources to pay to dredge its harbor in 2010, 2011, and 2013. Port and industry stakeholders also told us that uncertainty about the annual decisions made in the federal budget process, and about whether the Corps will have enough funding to dredge their harbors, creates challenges in attracting tenants. The Corps and the ports are not sure how much funding will be provided in a given year until Congress passes the Corps' annual appropriation, as is the case with any activity funded through the annual federal budget process. Once the funding amounts and allocations are known, the Corps releases a work plan outlining which harbors will be dredged and the amount of funding allocated to each harbor. Port stakeholders stated that the funding uncertainty can affect their ability to attract tenants, which need clarity about the reliability of dredging when determining whether to spend millions of dollars to build facilities, such as grain silos, that will last decades at the port. One port provided an example in which a new tenant faced significantly increased costs because the harbor was not dredged and the tenant had to light-load its cargo. Corps officials and researchers echoed these concerns, noting the importance of reliable dredging when ports are attempting to attract new tenants. According to Corps officials, within current funding levels, the Corps must make decisions about which harbors to dredge and the amount of dredging each harbor should receive. Based on interviews with Corps officials and our reviews of budget guidance documents, when assessing which projects to fund during the annual budget process, the Corps uses a risk-based matrix that considers condition versus consequence and, based on each value, assigns the project an overall risk score. With respect to dredging inland harbors, the anticipated condition is based on the expected level of shoaling, and the consequence is based on the average annual tonnage moved by the port over the past 5 years, along with other factors, such as imminent life-safety impacts. Consequence is rated on a scale of 1 to 5, with 1 being the most severe. For example, ports that move less than 1 million tons of freight are ranked as "4," or "low economic impact." In interviews, Corps officials identified other factors that they consider when allocating funds for dredging, such as whether nearby ports will be dredged, but several Corps officials pointed to tonnage shipped as the primary factor they use to make dredging decisions (a simplified sketch of this scoring follows).
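The sketch below renders this condition-versus-consequence logic in Python. It is a simplified, hypothetical reading of the matrix: only the rule that ports moving under 1 million tons receive a consequence rating of "4" comes from the report, and the other cutoffs and the way the two ratings combine are our assumptions for illustration.

```python
# Hypothetical sketch of the Corps' risk-based prioritization matrix:
# condition (expected shoaling) versus consequence (driven largely by
# 5-year average tonnage). Ratings run 1-5, with 1 the most severe.

def consequence_rating(avg_annual_tons: float) -> int:
    if avg_annual_tons < 1_000_000:
        return 4          # "low economic impact" (from the report)
    if avg_annual_tons < 10_000_000:
        return 3          # assumed cutoff, for illustration only
    return 2              # assumed cutoff, for illustration only

def risk_score(shoaling_rating: int, avg_annual_tons: float) -> int:
    # Assumed combination: lower total = higher dredging priority. The
    # actual matrix may weight the values differently and considers
    # other factors, such as imminent life-safety impacts.
    return shoaling_rating + consequence_rating(avg_annual_tons)

# A severely shoaled low-use harbor can tie with a moderately shoaled
# high-use harbor, illustrating how tonnage drives the outcome:
print(risk_score(1, 400_000))     # 1 + 4 = 5
print(risk_score(3, 12_000_000))  # 3 + 2 = 5
```

Under this reading, a small port must be in far worse condition than a large one to compete for the same priority, which is the concern smaller ports raise below.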
In addition, Corps officials noted that funding for low-commercial-use ports (ports that on average ship less than 1 million tons per year) was reduced in the fiscal year 2012 budget and subsequent budgets, in response to a 2010 memorandum from the Office of Management and Budget. Stakeholders, including representatives of smaller ports as well as a barge operator and industry associations that we spoke to, raised concerns about the Corps' emphasis on tonnage and its effects on which ports are selected for dredging, with some stating that other factors should be considered (such as economic impact or cargo value). Some stakeholders and an expert stated that if ports do not receive dredging and barges moving through a harbor have to light-load or temporarily cannot move through the port, then industries may leave the port, the cost of dredging may increase as sediment builds, and the port may face more difficulty meeting the 1-million-ton threshold. Corps officials acknowledged stakeholders' concerns about low-commercial-use ports' ability to compete in the prioritization process and stated that they have worked to request more funding for these ports since funding was cut in fiscal year 2012. However, a senior Corps official also noted that the inland ports make up a very small percentage of the Corps' overall national navigation project portfolio, and therefore competition for constrained resources is very keen. Congress has taken steps to address this issue, such as requiring the Corps to allocate a minimum amount of the funds to be reimbursed from the Harbor Maintenance Trust Fund to low-tonnage ports. In addition, Congress has emphasized the importance of considering factors beyond tonnage. For example, when allocating funds from the Harbor Maintenance Trust Fund among eligible harbors and channels, the Corps is directed by statute not to base its allocation of funds solely on the amount of tonnage transiting through the harbors. In addition, in determining an equitable allocation, the Secretary of the Army is required to consider: the national and regional significance of harbor operations and maintenance; and a biennial assessment of the needs and uses of the harbors, which should include, to the extent practicable, the national, regional, and local benefits of such uses, including the use of harbors for commercial navigation and the movement of goods; domestic and international trade; commercial fishing; subsistence; harbors of refuge; transportation of persons; domestic energy production; use by the Coast Guard or Navy; emergency response; recreation purposes; and other authorized uses. When providing appropriations for the Corps, Congress has also suggested that the Corps consider issues beyond tonnage when allocating funds for dredging. Based on our reviews of budget guidance documents and interviews with Corps officials, the Corps does collect data on many of the factors identified by Congress in law and in the language accompanying the appropriations act. For example, the Corps collects information on whether a harbor is used for some of the purposes outlined in the statute (for example, commercial fishing, transportation of persons, or use by the Coast Guard); however, a senior Corps official noted that many of these factors are more applicable to coastal harbors and channels and are less applicable to inland harbors.
In addition, Corps officials told us they may note specific circumstances about the regional importance of a port when submitting a budget package to dredge its harbor (for example, if industries in the port lack access to other modes of transportation). However, Corps officials told us that due to funding limitations, they have not conducted the statutorily required assessments of the national and regional significance of harbor operations and maintenance, or of the local, regional, and national benefits from the use of the harbors. A senior Corps official noted that the cost of an in-depth economic analysis of a port may be equivalent to the cost of dredging some of these harbors, and the results of the economic analyses may not change which harbors are ultimately prioritized for dredging. However, the Corps has developed some internal tools that might help it assess data related to some of the factors that Congress has required the Corps to consider when allocating funds from the Harbor Maintenance Trust Fund, such as the national and regional significance of harbor operations and maintenance, and the use and benefit of the harbor for domestic trade. For example, a Corps official from the Corps' Engineer Research and Development Center (ERDC) explained that ERDC developed a web-based "channel portfolio tool" that collates, summarizes, and visualizes detailed data from the Corps' Waterborne Commerce Statistics Center to help district officials understand the direct role of dredging in the movement of cargo through coastal ports and the inland waterway system. The tool is scalable, meaning that users can view the data for the entire river system, for specific combinations of harbors, or for individual harbors. Corps officials using the tool can select specific harbors and quickly access annualized data on how many tons of various commodities moved through the location, the depths of loaded barges (which can be compared to present shoaling conditions), and the origin and destination of the cargo. Further, the official explained that the Corps has used the tool to generate metrics on the amount and dollar value of cargo at risk when harbors lose 5 feet of depth. The official added that these metrics capture the cargo most at risk during periods of shoaling or low water conditions, thereby enabling objective comparisons across harbors (a hypothetical sketch of such a metric follows). In addition, according to a Corps official, the Corps' Institute for Water Resources has a tool that provides a detailed model using a variety of data about coastal harbors, including ship depths and cargo value, to better inform budgetary decisions. The official added that this tool potentially could be expanded to inland harbors. The Corps thus already has tools available that may help it better assess the additional factors that Congress, by statute, required it to consider when allocating dredging funds. For example, information about vessel depths, barge traffic, cargo value, and destinations used in the channel portfolio tool could help the Corps assess the needs, use, and significance of harbor operations and maintenance by demonstrating the effects of unmet dredging needs (e.g., the frequency and duration of light-loading and the estimated impact on shipping costs) and comparing the relative effects among inland ports.
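The cargo-at-risk idea lends itself to a compact computation over barge-trip records. The Python sketch below is our hypothetical rendering of that metric, not the channel portfolio tool's actual implementation: the record structure, clearance requirement, and figures are all assumptions.

```python
# Hypothetical cargo-at-risk metric in the spirit of the channel
# portfolio tool: tally the tonnage and dollar value aboard barges whose
# loaded draft would no longer clear the channel after a loss of depth.
from dataclasses import dataclass

@dataclass
class BargeTrip:
    tons: float
    value_usd: float
    loaded_draft_ft: float  # draft of the barge as loaded

def cargo_at_risk(trips: list[BargeTrip], channel_depth_ft: float,
                  depth_loss_ft: float, clearance_ft: float = 1.0):
    """Sum tonnage and value on trips that could not transit once the
    usable depth falls below draft plus an assumed safety clearance."""
    usable = channel_depth_ft - depth_loss_ft - clearance_ft
    at_risk = [t for t in trips if t.loaded_draft_ft > usable]
    return sum(t.tons for t in at_risk), sum(t.value_usd for t in at_risk)

# Hypothetical trips through a 12-foot harbor, testing a 2-foot loss
# (the metric described above used a 5-foot loss at deep-draft ports):
trips = [BargeTrip(1_500, 450_000, 9.5), BargeTrip(1_200, 300_000, 8.0)]
print(cargo_at_risk(trips, channel_depth_ft=12.0, depth_loss_ft=2.0))
# -> (1500, 450000): only the deeper-draft trip is at risk
```

Scaled to inland drafts, such a metric could support the kind of cross-harbor comparison that the statutory factors call for.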
However, Corps officials told us that additional work would be needed to develop useful metrics for inland ports, since the existing analyses have focused on coastal ports. For example, as previously noted, one tool estimates impacts from a loss of 5 feet of draft at a deep-draft coastal harbor, but an official stated it would be rare for an inland harbor to lose that much depth. Furthermore, the value of using the existing tools in this new context would depend on the reliability and the costs of the new approach, which are currently unknown. As noted above, Corps officials stated that funding constraints have prevented them from conducting the statutorily required assessments of the significance of harbor operations and maintenance. However, we developed a framework for examining agencies' efforts to manage declining resources, and a key sub-theme within that framework is the importance of consulting with Congress to consider how budget decisions align with congressional goals, constituent needs, and industry concerns. A senior Corps official agreed that it may be beneficial for the Corps to provide Congress with information on the extent to which the Corps' existing tools could be adapted to allow it to consider factors beyond tonnage when allocating dredging funds, the limitations of using these tools, and the amount of additional resources that may be needed to pursue such an approach. Many stakeholders and experts we interviewed said that the federal government should make more use of the current mechanism for funding dredging, the Harbor Maintenance Trust Fund, before considering alternative funding options. Stakeholders representing shippers, as well as a state department of transportation official, stated that dredging inland harbors is in the national interest because it promotes U.S. exports and moves freight through coastal ports such as New Orleans and Baton Rouge. Stakeholders also noted that the fund has a balance that is available for such projects. However, the money from this fund is available for these purposes only if Congress makes an appropriation out of the Harbor Maintenance Trust Fund. Congress has taken steps to increase spending from the Harbor Maintenance Trust Fund; however, other factors may affect the use of the trust fund. For example, to balance competing priorities among government programs and meet budgetary spending caps, Congress may choose to appropriate more or less funding from a trust fund than requested by an agency. In addition, as we have previously reported, due to fiscal pressures imposed by the nation's budget deficit, any decisions about the Harbor Maintenance Trust Fund would need to be considered within the context of all major federal spending and tax programs. We asked selected stakeholders and experts about three options for funding inland harbor dredging: contributions from state and local governments; expanding the use of the Inland Waterways Trust Fund (currently used for new construction and major rehabilitation of locks and dams, as well as other channel and waterway improvements) to include maintenance dredging; and a new user fee or tax. Stakeholders and experts identified challenges, some of which apply to multiple options and some of which apply to specific options. Additional details on the challenges are below. Financial effects on users and local governments: Stakeholders raised concerns that a user fee or tax, or a state or local contribution, would negatively affect users and those governments.
For example, stakeholders representing port tenants, shippers, and trade associations stated that a user fee could raise waterborne transportation costs and negatively affect shippers. State department of transportation officials stated that there could be a shift to alternative transportation modes if barge rates increased, which could lead to more congestion and surface degradation on roads. However, experts noted that alternative modes are more costly than water transportation, so any diversion to these modes would depend on the extent of the increase in water transportation costs. Stakeholders such as ports, port tenants, and state department of transportation officials also stated that many of the selected ports in our review do not have the financial resources to provide a funding contribution, and that it may be difficult to secure state or local funds from the rural, low-income counties and states where a number of the inland ports are located. More generally, we have reported that state and local governments face long-term fiscal pressures, which may limit their ability to contribute to dredging costs for harbors in their jurisdiction. Impact on Inland Waterways Trust Fund: Stakeholders representing ports and port tenants, as well as state department of transportation officials, stated that the Inland Waterways Trust Fund has a backlog of lock and dam projects that need funding, and any expanded use of the fund's revenues for maintenance dredging (absent an increase in the fuel tax) would reduce the funds available for locks and dams. In addition, port tenants, ports, and a state department of transportation official noted that directing funds to locks and dams, many of which are decades old and in need of repairs, may be a better use of funds than dredging. For example, a port tenant noted that a lock failure would have more significant effects on more users than shoaling at one harbor. Alternative funding options may not result in more predictable funding for dredging: Stakeholders and a state department of transportation official stated that requiring a state or local contribution may not result in more consistent funding, given state and local budget processes and priorities. A state department of transportation official noted that funds for dredging could compete with other local needs, such as schools. When discussing alternative options generally, a Corps official said that since there is an existing mechanism, the Harbor Maintenance Tax, that collects funds that can be appropriated for dredging, other options may not be feasible. This official noted that any new funding option may impose administrative burdens that could outweigh the additional revenue collected. The majority of experts and a number of stakeholders we spoke to identified potential benefits related to users directly paying for their infrastructure use and to using state and local revenues for dredging, rather than devising an entirely new funding mechanism. Users pay for infrastructure use: Some of the experts noted that many benefits of dredging inland harbors are local, and that a state or local contribution from their budgets, or a fee or tax paid by port users, may be more appropriate than other funding options, as those who benefit most from a project would pay for it. The Congressional Budget Office, Congressional Research Service, and the Transportation Research Board have also noted the benefits of maritime users paying more for their infrastructure use.
State department of transportation officials, as well as experts, noted that a new user fee may be more appropriate than a tax because users would be paying for their own use rather than paying a general tax. However, experts and some of the state department of transportation officials cautioned that any alternative funding option imposed on just one section of the inland waterways would likely raise equity concerns and could put those inland ports at a competitive disadvantage. Thus, they emphasized that any alternative funding option should be applied to all U.S. inland waterways, not just those in the scope of this report. Use of state and local revenues for dredging: Some stakeholders, including port tenants and shippers, believe that they already pay for dredging through state, local, and port taxes and fees. Some of these stakeholders gave reasons why a state or local contribution could be warranted. First, one port tenant, one state department of transportation official, and two experts noted that state and local governments benefit from operating ports, which contribute to their economies. In addition, stakeholders representing port tenants and a state department of transportation agency, as well as one expert, noted that state and local governments and ports have provided funding for landside investments at ports, and it is therefore in their interest to maintain port access to the river. We have previously reported that investments being made in maritime infrastructure should be considered as part of state and national freight planning. In addition, some stakeholders noted that using state and local revenues to fund dredging could be an option if those revenues could be used to match federal funds. A Corps official noted that it is currently possible for non-federal entities to provide "contributed funds" to the Corps for dredging, but none has done so yet for this particular segment of the Mississippi River. However, Corps officials stated that they have received contributed funds for dredging in other regions; as previously noted, some ports have paid for their own dredging in certain cases, and port tenants are already financially responsible for dredging around their docks. Stakeholders representing shippers said that they might be more inclined to consider an alternative funding option if the benefits of a particular option outweighed the costs. In addition, some stakeholders said that they would be more willing to consider a funding option for emergency dredging, as opposed to routine dredging, and would be more willing if there were a cost-share with the federal government. The Mississippi River and its inland ports are important to the movement of freight, particularly agricultural goods destined for export. However, natural shoaling in many of these ports' harbors negatively affects vessel operations, potentially resulting in freight congestion and increased shipper costs. The Corps, responsible for dredging these particular harbors as well as hundreds of other harbors and channels around the country, operates in a constrained federal budgetary environment and will likely continue to do so. It therefore must choose which harbors to dredge, with what frequency, and to what depth and width. In making these decisions, the Corps relies primarily on tonnage data, a potentially reasonable approach. The Corps is, however, statutorily required to consider other factors, such as the harbors' national, regional, and local benefits, when allocating funding for dredging inland harbors.
Although the Corps has cited funding constraints as the reason it has been unable to fulfill the statutory requirements, it has tools available that could potentially be adapted to help it consider all of the factors Congress identified in statute and better inform its decisions regarding inland harbor dredging. However, some of these tools were developed for coastal harbors, and the feasibility, potential limitations, and costs of adapting the Corps' existing analytical tools and capabilities would need to be assessed before these tools could be successfully used. We recommend that the Assistant Secretary of the Army for Civil Works direct the Director of Civil Works to determine whether existing tools and capabilities (such as the Corps' analyses and models related to inland harbors' conditions and freight traffic, as well as shoaling effects at coastal ports) can be adapted to help evaluate other factors when allocating funds from the Harbor Maintenance Trust Fund. The Corps should report to Congress on the feasibility, limitations, and potential costs of such an approach, including an estimate of any additional funds needed to use it to meet the statutory requirements. We provided a draft of this report to the Department of Defense for review and comment. In comments reproduced in appendix II, the Department of the Army, Office of the Assistant Secretary (Civil Works), stated that it concurred with the recommendation, with comment, and that it would work with the Corps to address the recommendation. The office also provided comments that focused on the recommendation in the broader context of the development of the Corps' overall Civil Works budget, which we considered and incorporated as appropriate. In addition, the Department of the Army, Office of the Assistant Secretary (Civil Works), and the Corps of Engineers provided technical comments, which we considered and incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Flemings@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Joint Explanatory Statement accompanying the Consolidated Appropriations Act, 2016, contained a provision for us to study freight flows, dredging, and funding of dredging with respect to the harbors of inland shallow-draft ports on the Mississippi River between St. Louis and Baton Rouge. This report addresses three objectives: (1) what is known about the freight traffic (including types of freight and trends in traffic) of selected inland ports on the Mississippi River between St. Louis and Baton Rouge since 2010; (2) stakeholders' views on any challenges that the current federal approach to funding dredging presents for inland ports and the reported effects on the movement of freight at these ports; and (3) stakeholders' views on the potential benefits and challenges of using alternative options for funding dredging of inland harbors.
The 13 selected ports included in this review are (listed in geographic order, southbound): Southeast Missouri Regional Port Authority, Missouri; Hickman-Fulton County Riverport Authority, Kentucky; New Madrid County Port Authority, Missouri; Pemiscot County Port Authority, Missouri; Osceola Port Terminal, Arkansas; International Port of Memphis, Tennessee; Helena-West Helena/Phillips County Port Authority, Arkansas; Port of Rosedale, Mississippi; Yellow Bend Port, Arkansas; Port of Greenville, Mississippi; Port of Lake Providence, Louisiana; Madison Parish Port, Louisiana; and Port of Vicksburg, Mississippi. To determine what is known about the freight traffic of selected inland ports between St. Louis and Baton Rouge since 2010, we reviewed and analyzed data from the U.S. Army Corps of Engineers' (Corps) Waterborne Commerce Statistics Center for the 13 inland ports included in our review. Specifically, we analyzed the types and amount of freight transported through these ports annually, as measured by weight, from 2010 through 2015. These data are referred to as annual tonnage data and include total waterborne tonnage, whether the tonnage was moving into or out of the port, and the amount and types of commodities moved through the port. We analyzed these data to determine whether we could identify any trends in the movement of freight through these ports. To assess the reliability of the data, we reviewed a 2009 GAO report that discussed the reliability of the Corps' tonnage data and then interviewed Corps officials at the Waterborne Commerce Statistics Center about any changes that had occurred in the data collection, receipt, handling, and storage processes since that review, as well as their current processes for ensuring the reliability of the data. We also interviewed port officials to discuss any concerns they had about the data, as well as the companies responsible for filing the reports that the Corps uses to assess port tonnage, to discuss their methods for ensuring the accuracy of the data. We found the data to be reliable for the purposes of our review. To determine stakeholders' opinions on whether the current federal funding approach for dredging presents any challenges for inland ports and on the reported effects on freight movement at these ports, we interviewed port directors and, in some cases, port tenants at 11 of the 13 inland ports. We also conducted site visits at 7 of the 13 selected ports to interview port directors, harbor services companies, and tenants in person, and to gain an in-depth understanding of how shoaling can affect their harbors. To select the ports we interviewed and visited, we used information provided by the Corps on relevant federal dredging projects and the corresponding inland shallow-draft ports in this section of the river. Through initial research and interviews, we determined which factors may contribute to variations in ports' dredging needs, the extent of dredging received, and the effects of unmet dredging needs. Based on those factors, we selected ports for site visits and interviews to ensure diversity in total tonnage, the percentage of inbound and outbound freight traffic at the port, the types of commodities most frequently handled, geographic location (including the Corps district in which they were located), the funding source for dredging, and prior dredging history, based on information provided by the Corps.
In addition to interviewing port directors and tenants, we conducted interviews with industry stakeholders, such as barge companies, trade associations, and shippers, as well as academic experts. We also interviewed officials at the United States Department of Agriculture (USDA) Agricultural Marketing Service's Transportation Services Division to discuss their research on agricultural transportation. See tables 4 and 5 for a list of the stakeholders and experts we interviewed. We selected industry and academic stakeholders based on a review of our prior reports on waterway transportation, as well as through recommendations from other interviewees. In addition, to understand how the Corps budgets for and implements dredging activities and the role of the federal budget process, we reviewed relevant statutes and the Corps' budget guidance documents, as well as prior President's Budget requests and congressional appropriations, and we interviewed Corps officials from the headquarters, division, and district offices. We reviewed statutes, regulations, and legislation to understand what factors Congress has directed the Corps to consider when allocating funds for dredging harbors. We also used prior frameworks developed by GAO to assess the Corps' actions with respect to collecting and analyzing data to help inform its budgeting decisions. We received data from the Corps on the prior dredging history of each port for 2010 through 2016. To determine the reliability of the dredging history data, we compared these data to publicly available documentation, such as the Corps work plans that outline the dredging plan for each year, and cross-checked the data against what port stakeholders told us about prior dredging activities. We followed up with Corps officials to discuss the data and obtain supplementary information as necessary to get the most complete, reliable information possible. Except where otherwise noted, we found the data sufficiently reliable for our purposes. To determine stakeholders' opinions about the potential benefits and challenges of using alternative funding methods for dredging inland harbors, we identified funding options through a literature search and conducted 14 initial interviews with 11 stakeholders representing industry, including representatives of some of the ports previously described, and 4 experts. We used these initial interviews to collect the stakeholders' general views on potential alternative funding options, as well as the benefits and challenges of those options. From these interviews and literature searches, we identified the three types of options that were most commonly discussed: a new user fee or tax, a state or local contribution, and expanding the use of the Inland Waterways Trust Fund for dredging. We then interviewed 33 stakeholders representing ports, tenants, shippers, barge companies, and state transportation agencies to collect their opinions on the benefits and challenges of each of the three types of options. We selected stakeholders to interview based on a review of related reports and suggestions from other interviewees, and we included port tenants and representatives from the ports we interviewed. In addition to these stakeholders, we interviewed five experts on their views of the benefits and challenges of the alternative funding options. The experts were identified through a literature search and our prior related reports on inland waterways and surface transportation funding and financing.
We selected these experts based on their knowledge of the inland waterways and/or infrastructure funding and judgmentally chose at least two individuals from academia and consulting firms. See tables 4 and 5 for a list of the stakeholders and experts we interviewed. With respect to research objectives 2 and 3, because we asked stakeholders for their opinions and did not conduct a survey in which every stakeholder could indicate whether a certain issue was relevant for them, we do not enumerate responses in the report. Instead, we analyzed the responses and reported on common themes that arose in multiple interviews. In addition, given the number of inland ports outside this section of the river and the fact that we selected a nongeneralizable sample of stakeholders, ports, tenants, and experts to discuss dredging issues and funding options related to the selected ports in this section of the river, the information cannot be used to make inferences about a population. However, the description of the Corps' budget development process is representative of its process for all dredging projects. We conducted this performance audit from July 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Susan A. Fleming, (202) 512-2834, or Flemings@gao.gov. In addition to the contact named above, Sharon Silas (Assistant Director); Crystal Huggins (Analyst in Charge); Amy Abramowitz; Alexandra Edwards; Carol Henn; Alyssa Hundrup; Delwen Jones; Elke Kolodinski; Hannah Laufe; Maureen Luna-Long; SaraAnn Moessbauer; Joshua Ormond; and Cheryl Peterson made key contributions to this report.
Inland ports on the Mississippi River between St. Louis and Baton Rouge enable shippers to move millions of tons of agricultural and other bulk commodities. However, these ports' harbors can accumulate sediment that reduces their depth, width, and length, making it difficult for vessels to move. To address this, the Corps routinely dredges the harbors. Congress included a provision in statute for GAO to review dredging issues for ports in this region. This report addresses (1) freight traffic of selected ports since 2010; (2) stakeholders' views on any challenges presented by the current federal funding approach to dredging inland harbors; and (3) the benefits and challenges of alternative options to fund dredging. GAO reviewed the Corps' 2010–2015 port traffic data for 13 of the 18 inland ports in the region. Data for 2015 were the most recent available. GAO also interviewed Corps officials, industry stakeholders, and officials from 11 of the 13 ports, which were selected to include a range of cargo handled, locations, and dredging histories. GAO conducted a literature search and interviewed 52 industry, port, and other stakeholders and experts about alternative options to fund dredging. From 2010 through 2015, the 13 Mississippi River ports that GAO selected for review varied individually in the amount and type of traffic handled and in traffic trends. As a group, these ports primarily moved a mix of agricultural commodities (corn, soybeans, and rice), petroleum products, and crude materials (such as sand and gravel, among others). However, the ports varied individually, with some primarily moving agricultural commodities and others moving a variety of commodities. These ports also varied in the quantity of goods transported through them, ranging from less than 1 million tons to more than 10 million tons per year. The amount of freight moved through each port tended to fluctuate each year due to various factors, such as weather, crop yields, and export markets. A majority of the stakeholders GAO interviewed, as well as U.S. Army Corps of Engineers (Corps) officials, stated that funding constraints limit the Corps' ability to fully dredge the 13 ports' harbors, which can affect freight movement. According to local Corps officials, they received about $13.1 million of the $20.6 million needed to fully dredge the 13 ports' harbors in fiscal year 2016. Some stakeholders told GAO that smaller ports are negatively affected by the Corps' emphasis on the amount of cargo moved (measured in tons) when making decisions about which harbors to dredge. Congress has directed the Corps to consider harbors' significance and to conduct an assessment of harbors' use and benefits, considering factors beyond tonnage, to inform its allocation of dredging funds. Corps officials said they have not conducted such an assessment due to funding constraints and raised concerns about the cost-effectiveness of conducting such assessments. However, the Corps has developed some tools that may help it assess inland harbors' significance, use, and benefits. For example, Corps officials explained that they have a tool that allows them to track the amount and type of cargo moving through harbors and to estimate the value of cargo at risk if a harbor loses depth. However, a Corps official noted that the cargo-at-risk metric was based on deep-draft coastal harbors and would need to be adapted for inland harbors.
A senior Corps official agreed that it could be useful to inform Congress of the Corps' existing tools and capabilities and the resources needed to adapt them to address the statutory requirements related to allocating dredging funds. Many of the stakeholders GAO interviewed said that before considering alternative funding options, the federal government should make more use of the current mechanism for funding dredging: the Harbor Maintenance Trust Fund. With regard to three other potential options for funding dredging—user fees, state and local contributions, and use of the Inland Waterways Trust Fund (which currently funds new construction and major rehabilitation of locks and dams as well as other channel and waterway improvements)—stakeholders identified challenges to their use. In particular, they noted the financial effects of these options on users, state and local governments, and the Inland Waterways Trust Fund. However, some stakeholders identified benefits related to these options, such as industry paying user fees for its infrastructure use, and state and local governments contributing funds to meet the dredging needs of harbors in their jurisdictions. The Corps should inform Congress whether it can adapt its existing tools to address the factors for allocating funds from the Harbor Maintenance Trust Fund and what resources it would need to do so. The agency concurred with the recommendation, with comment, and provided technical comments that were incorporated as appropriate.
NMB is headed by a three-member board, with each member appointed by the President and confirmed by the Senate for a term of 3 years. Day-to-day administration of the agency is provided by NMB's General Counsel within the Office of Legal Affairs and the Chief of Staff (see fig. 1). NMB does not have an office of inspector general to provide independent audit and investigative oversight. According to NMB, its overall mission is to provide for the independence of air and rail carriers and employees in matters of self-organization, avoid interruption to commerce conducted through the operation of those carriers, and administer statutory adjustment boards as well as develop complementary strategies to resolve disputes. To fulfill its mission, NMB has three program areas: Representation: Unions are selected for the purposes of collective bargaining through secret-ballot elections conducted by NMB. If there is a question concerning representation of a specific craft or class, NMB is charged with resolving the representation dispute through its Office of Legal Affairs and has sole jurisdiction to decide these disputes. Mediation and Alternative Dispute Resolution: The RLA provides mediation to help resolve disputes that can occur between management and labor during collective bargaining negotiations. When rail or air carriers and unions cannot reach agreement on the terms of a new or revised collective bargaining agreement—such as working conditions or rates of pay—either party can apply for NMB's mediation services to resolve their differences, or NMB may impose mediation if it finds that resolving the dispute is in the public's interest. NMB also offers grievance mediation to parties as an alternative way to resolve disputes filed for grievance arbitration. Arbitration: The RLA also offers grievance arbitration to help resolve disagreements between carriers and unions over how to interpret and apply provisions of existing collective bargaining agreements. For example, employees may file grievances if they believe they were wrongfully fired or disciplined in violation of the agreement. If the carrier and the employee cannot resolve the grievance, the RLA permits either of these parties to refer the dispute to arbitration before an adjustment board. The adjustment board consists of a carrier representative, a union representative, and a neutral arbitrator provided by NMB. In this capacity, the arbitrator is called upon to break a tie. NMB does not directly provide arbitration services, but rather maintains a list of registered arbitrators from which the parties can select someone to review and decide their case. In the airline industry, the parties pay the costs of arbitration. In the railroad industry, however, consistent with the requirements of the RLA, NMB pays the fee and travel expenses of the arbitrator. NMB has made some progress in implementing each of the seven recommendations we made in December 2013. In our December 2013 report, we found that NMB lacked a formal strategic planning process, and officials confirmed that they did not have a systematic mechanism for involving congressional and other stakeholders in this process. We concluded that without a robust process, NMB lacked assurance that its limited resources were effectively targeted toward its highest priorities. In this review, we found that NMB has implemented a strategic planning process but has not formalized it through written policies and procedures.
In fiscal year 2014, NMB developed and published a strategic plan covering fiscal years 2014 through 2019, which we determined was largely consistent with OMB guidance on implementing the GPRA Modernization Act of 2010 (GPRAMA). NMB officials told us they used a strategic planning process that solicited input from staff in NMB program areas as well as from external stakeholders and Congress. Five of the seven external stakeholder groups that we interviewed said they commented on a draft of the strategic plan or discussed aspects of it during regular meetings with the agency, and all reported being satisfied with their overall communication with NMB. However, the agency has not developed a written policy or set of procedures outlining its strategic planning process. Federal internal control standards call for agencies to document the policies and procedures necessary to achieve their objectives, including strategic planning. Specifically, agencies should (1) establish policies and procedures to ensure that management directives are carried out and (2) appropriately document transactions and other significant events, and ensure that those records are properly managed, maintained, and available for examination. Further, through these policies and procedures, agency management can define responsibilities, assign key roles, and delegate authority. Meeting these requirements may be particularly important for NMB. NMB officials said there was little need to prepare written documentation of the strategic planning process, such as a standard operating procedure, because the process was simple and would be easy to replicate in the future. Officials also said that because the agency is small, with 51 full-time positions, its staff frequently communicate informally, limiting the need for written procedures. However, three of NMB's five senior managers, including the Chief of Staff and General Counsel, are eligible to retire, as are many other employees, increasing the risk that the agency will lose institutional knowledge should they do so. Moreover, because the agency is small, some staff members have multiple responsibilities, increasing the magnitude of knowledge loss when an individual staff member leaves the agency. In our December 2013 report, we found that NMB was not meeting OMB guidance to implement GPRAMA requirements for annual performance planning and reporting. Specifically, the agency's performance goals were not objective, quantifiable, and measurable—as required by GPRAMA—and did not have targets and a time period over which to measure performance—as recommended by OMB guidance implementing GPRAMA. Without meeting this federal guidance, we concluded that the agency was not positioned to track and publicly report progress or results in its program areas. In this review, we found that NMB has developed new performance goals. However, of the 19 goals in its fiscal year 2016 performance plan, one goal specified a target and another specified a timeframe, but none followed all elements of OMB guidance for implementing GPRAMA. Several NMB officials told us that it is difficult for the agency to design performance goals because some outcomes are out of its control, such as how long it takes parties to reach agreement through mediation. However, many federal agencies set measurable performance goals for outcomes that involve external factors outside of their direct control.
Our prior work has shown that there are a number of strategies federal agencies can use to reduce the influence of external factors on their performance measures. OMB officials told us NMB could seek assistance from them to refine its performance goals or could partner with another agency that has a strong performance management department. GAO 2013 Recommendation: NMB should develop and implement a formal mechanism to ensure the prompt resolution of findings and recommendations by independent auditors, including clearly assigning responsibility for this follow-up to agency management. In our December 2013 report, we found that NMB was following most key practices for financial accountability and control but had an outstanding significant deficiency from its fiscal year 2012 financial statement audit and did not have a mechanism for ensuring prompt resolution of audit findings. As a result, we concluded that some recommendations made by auditors to improve the agency's internal controls or operations may not have been addressed. In this review, we found that while NMB in fiscal year 2015 resolved the significant deficiency identified in its 2012 financial statement audit, it does not have a formal mechanism to promptly resolve all audit findings consistent with federal internal control standards. Specifically, while NMB drafted a financial audit standard operating procedure in 2014, that procedure does not cover the agency's response to findings from non-financial audits. The agency also has not addressed two recommendations made in previous auditors' management letters that accompanied NMB's financial audit reports. One of those recommendations resulted from the fiscal year 2014 audit, and the other related to a discrepancy that has been unresolved since 1995. In its fiscal year 2015 response, NMB indicated that the only management official with knowledge of the long-term discrepancy had retired and that the agency would work to resolve the discrepancy through its financial management system. In our December 2013 report, we found that NMB had not fully implemented key practices for information security and privacy. Without implementation of these key practices, we concluded that NMB faced increased risks that the confidentiality, integrity, and availability of its information would be compromised, and it had limited assurance that the personal information it collected and used was adequately protected. In this review, we found that NMB has fully transitioned its network infrastructure and records management system into a cloud computing environment as a result of federal initiatives aimed at improving, among other things, the federal government's operational efficiencies and overall IT security posture. NMB also fully transitioned its financial systems to third-party service providers. Specifically, NMB relies on other agencies' systems, such as the Department of the Interior's for payroll, personnel, and human resources services, and the Department of the Treasury's Bureau of the Fiscal Service for a full range of accounting services, including hosting its financial management system. In addition, NMB has begun to take steps to improve its information security program. Specifically, NMB developed a policy for managing agency information, documents, and records. The agency has also drafted procedures for its new enterprise network that include provisions for access and identity control, configuration management, planning, contingency monitoring, and audits.
Further, it has developed a procedure for handling cyber incidents. Finally, it has an agreement in place with the Bureau of the Fiscal Service to, among other things, conduct a security assessment of its enterprise network. However, NMB has not fully implemented most key information security and privacy practices. (For additional details, see appendix II.) For example, the agency does not have policies and procedures in place for its information security program, including those for the oversight of third-party providers—entities that provide or manage information systems that support NMB operations. In addition, NMB has not conducted the required assessments of its third-party providers to ensure their systems comply with the Federal Information Security Modernization Act (FISMA) of 2014. FISMA requires federal agencies to develop, document, and implement an agency-wide information security program to protect the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Moreover, NMB has not assessed whether its third-party providers' systems comply with the Privacy Act of 1974 and the E-Government Act of 2002, which describe, among other things, agency responsibilities with regard to protecting personally identifiable information. NMB officials said the agency is taking steps to address its remaining information security and privacy issues. For example, because the agency's new enterprise network is now a cloud-based system, NMB plans to use the Federal Risk and Authorization Management Program (FedRAMP), to the extent possible, to guide the development of its agency-wide policies and procedures, including how it will oversee its third-party providers and ensure they comply with FISMA. In addition, NMB reached out to OMB in September 2015, and NMB officials said they have tried reaching out to the Department of Homeland Security (DHS) to ensure NMB is doing what is required to meet annual FISMA reporting requirements. NMB officials said they have been receiving information on FISMA from OMB, but not from DHS. Further, NMB officials said they are drafting information security program and privacy policies and procedures. They said that finalizing the information security policies and procedures will assist the agency in completing all of its required reviews in the future. In our December 2013 report, we found that NMB's human capital program was not guided by a strategic workforce plan. Without workforce planning, a key internal control, we concluded that agency management could not ensure that skill needs would be continually assessed and that the agency would be able to obtain and maintain a workforce with the skills necessary to achieve organizational goals. Without a plan, the agency could not monitor and evaluate the results of its workforce planning efforts, including whether those efforts contributed to the agency accomplishing its strategic goals. In this review, we found that NMB completed a strategic workforce plan in October 2014 that at least partially addressed four of the five practices from our December 2013 recommendation (see table 2).
While members of one of NMB's advisory groups said they provided input on some of the agency's workforce decisions, our prior work suggests that formally including stakeholders in the workforce planning process can help the agency develop ways to streamline processes and improve human capital strategies. In addition, NMB's performance goals, including those related to human capital, do not meet federal guidance. Without performance goals that meet guidance, or the inclusion of other monitoring and evaluation efforts in its workforce and succession plan, the agency is not positioned to measure the outcomes of its human capital strategies or evaluate whether those strategies helped it accomplish its goals. In our December 2013 report, we found that NMB was struggling to efficiently manage grievance cases in the rail industry and lacked data on the types of grievances filed that would allow it to manage the process more efficiently. As a result, we concluded that if NMB did not address this demand on its limited resources, it could face a growing backlog of arbitration cases. In this review, we found that NMB is collecting data on the type of grievances filed for arbitration in some, but not all, cases. An NMB official said that part of the reason the agency is not collecting complete data on grievance types is that it does not have access to all cases. NMB reviews all grievances filed for arbitration by either a railroad or union and then forwards them to one of three types of adjustment boards—the National Railroad Adjustment Board (NRAB), a Public Law Board, or a Special Board of Adjustment. NMB is able to track information on the type of grievances filed for arbitration with the Public Law Boards and Special Boards of Adjustment because NMB requires parties to code their grievance type when they file their request for these boards. Parties filing grievances with NRAB, however, are not required by NMB to code their grievance type because, an NMB official said, NRAB is an independent organization that sets its own procedures, and NMB cannot require that grievance codes be included in requests for arbitration filed with NRAB. However, NMB may be able to obtain that information because NRAB officials told us they track information on grievance type and are willing to share it with NMB. Even with data on the types of grievances filed for all cases, it is not clear to what extent NMB would analyze them to address the arbitration backlog. One program official said that NMB does not have a systematic way to identify cases that may be good candidates for some type of alternative to arbitration, such as grievance mediation, because staff cannot easily access case information in a way that makes it readily available for analysis. Analysis of these cases continues to be largely a manual process that takes significant staff resources, he said. According to an NMB information technologist, the agency has in the past primarily relied on staff to sort and analyze these data. However, the agency's new arbitration case management system, which it upgraded in November 2015, should be able to produce standard electronic data reports by spring 2016 that would facilitate this analysis, as illustrated in the sketch that follows. Until those reports are available, it appears NMB will have limited ability to analyze data that might help it reduce its arbitration backlog, which, according to NMB, continues to grow.
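The kind of tabulation such reports could enable is straightforward once grievance types are captured electronically. The following is a minimal sketch with invented grievance codes and case records; nothing in it reflects NMB's actual systems or data:

```python
from collections import Counter

# Hypothetical grievance records: (case_id, adjustment_board, grievance_code).
# All identifiers and codes are invented for illustration.
cases = [
    ("PLB-001", "Public Law Board", "discipline"),
    ("SBA-014", "Special Board of Adjustment", "pay"),
    ("PLB-002", "Public Law Board", "discipline"),
    ("NRAB-103", "NRAB", "discipline"),
    ("NRAB-104", "NRAB", "seniority"),
]

# Tally grievance types across all boards. Concentrations of similar,
# routine grievances could flag groups of cases as candidates for
# grievance mediation rather than arbitration.
by_type = Counter(code for _, _, code in cases)
for code, count in by_type.most_common():
    print(f"{code}: {count}")
```

With complete coding across all three board types, this kind of summary could replace the manual, case-by-case review that officials described.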
NMB is following key procurement practices in two of three areas that our prior work on assessing the acquisition function at federal agencies identified as promoting efficient, effective, and accountable procurement: organizational alignment and leadership, and knowledge and information management. NMB, however, has not developed policies and processes—a third area our prior work identified—that reflect its new procurement interagency agreement. After the retirement of its only contracting officer in January 2014, NMB entered into an interagency agreement with the Department of the Treasury's Bureau of the Fiscal Service (Fiscal Service) for provision of certain procurement functions that NMB had previously handled in-house. In this new environment, NMB is continuing to align its procurement function with its mission and ensure adequate resources to meet its procurement needs, a key practice to facilitate efficient and effective management of acquisition activities. Its Office of Administration is at an organizational level comparable to other key mission offices, such as the Office of Mediation and Alternative Dispute Resolution and the Office of Arbitration. An NMB official told us that the agency also involves internal stakeholders in acquisition decisions, including determining procurement needs, reviewing existing contracts before automatically renewing them, and justifying purchase requests. In addition, the agency entered into the interagency agreement with the Fiscal Service in response to changes in its workforce (i.e., the retirement of its contracting officer). Further, to ensure NMB has a procurement workforce adequate to support the organization's needs, two staff are being trained as contracting officers because, the NMB official said, not having a contracting officer is a risk to the agency. In addition, six NMB staff were trained and certified as contracting officer representatives, who assist the contracting officer in administering contract actions under the interagency agreement and evaluating performance. NMB is also following a second key practice by establishing knowledge and information tools to help it make well-informed procurement decisions. NMB now has access to electronic data on purchase requests from the procurement and financial management systems administered by the Fiscal Service. The Fiscal Service also provides NMB with monthly billing reconciliations and weekly updates on the status of its contracts. An NMB official said that the information received from the Fiscal Service has helped the agency analyze and adjust its spending and, as a result, NMB has eliminated contracts for items it no longer needs, such as storage space, copiers, and periodical subscriptions. However, NMB is not following an element of a third key practice—having policies and processes in place consistent with internal control standards and best practice. NMB has not developed written internal policies and processes that reflect its new interagency agreement procurement environment. The NMB procurement official said the agency does not have current policies and processes because some of the previous policies and procedures were lost in the transition that occurred when the previous contracting officer retired. As a result, the agency has had to recreate them, the official said.
The agency's fiscal year 2014 to 2019 strategic plan (as amended in fiscal year 2015) called for procurement processes to be updated by the end of fiscal year 2014 in light of the new procurement environment. Developing and implementing written procurement policies and processes that reflect NMB's current procurement environment could help ensure its staff use consistent processes under this new environment. Since we made our recommendations in December 2013, NMB has taken several positive steps in response, such as developing strategic and workforce plans and closing a long-standing deficiency in a financial statement audit. However, additional actions are needed to fully respond to those recommendations. For example, NMB's performance goals do not yet meet all federal guidance. As a result, the agency is not positioned to track and publicly report progress or results in its program areas. In the areas of strategic planning and information security and privacy, officials were unable to provide the written policies and procedures, consistent with standards for internal control, that guide their actions. Without fully implementing these recommendations, NMB cannot ensure that its limited resources are effectively targeted toward its highest priorities. Moreover, it may be missing opportunities to improve performance and mitigate risks in its program and management areas. In addition, NMB has not developed written policies and procedures that reflect its new procurement environment under its interagency agreement. Without written policies and processes—as called for by internal control standards and best practice—NMB cannot ensure the use of consistent procurement processes. We continue to believe, as suggested in our December 2013 report, that Congress should consider authorizing an appropriate federal agency's Office of Inspector General to provide independent audit and investigative oversight of NMB. We recommend that the Chairman of the National Mediation Board develop and implement written policies and processes to reflect the agency's current procurement environment. We provided a draft of this report to the National Mediation Board (NMB) for comment. The agency provided written comments, which are reproduced in their entirety in appendix III. We also shared a draft with the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM). Neither agency provided comments. NMB commented that many of the areas about which we had concerns are not under its direct control, considering that NMB has entered into interagency agreements for certain services with the Department of the Interior and the Department of the Treasury's Bureau of the Fiscal Service (Fiscal Service). We continue to believe, however, that NMB must retain ultimate control and responsibility for all its programs and data, regardless of which agencies perform the services. For example, with regard to our findings related to information security, NMB commented that it will develop standard operating procedures for reviewing audits conducted by its third-party providers, but that it does not have the resources to conduct its own audits of those contracted agencies. Under FISMA, however, NMB is responsible for developing, documenting, and implementing a security program to protect its information systems and data, including those managed by another agency, contractor, or other source.
We believe that developing procedures for reviewing audits conducted by third-party providers will be a positive step toward ensuring that NMB is conducting this required oversight. NMB agreed with our recommendation to develop and implement policies and processes to reflect the agency’s current procurement environment and indicated it is taking steps to do so. NMB also commented that because it does not manage all of its own procurements, its policies and processes would be largely subordinate to those of the Fiscal Service. However, because NMB’s interagency agreement with the Fiscal Service for performance of certain procurement functions does not absolve NMB of its responsibility to develop policies and processes as called for by internal control standards and best practice, NMB must develop its own set of complementary policies and processes to ensure the agency meets its needs through efficient, effective, and accountable procurement functions. In terms of its response to audits, NMB commented that all program audit findings have been addressed and that there are no outstanding issues related to any audits. While we recognized in the report that NMB resolved a long-standing, significant deficiency in its fiscal year 2015 financial statement audit, we disagree that all audit issues have been resolved. NMB needs to develop policies and procedures to address findings from all audits, not solely those reported in the financial statement audit report. NMB also commented that some of our concerns appeared to be related to the agency’s failure to sufficiently document its processes and that this was not necessarily an indication of noncompliance with any particular requirement. We agree that NMB has made strides in certain areas, such as developing strategic and workforce plans, but its actions are not in accordance with federal guidance or federal internal control standards in some areas. For example, in the areas of strategic planning, information security and privacy, and procurement, officials were not able to provide the written policies and procedures that guide their actions. We continue to believe it is important for NMB to create this documentation to ensure future consistency and success in its management and program areas. Finally, NMB commented it had concerns that we continued to assert the agency did not adequately consult with its stakeholders even though we noted several times in the report that stakeholders told us they had input and were satisfied with their overall communication with the agency. Stakeholder groups did tell us they had good communication with NMB, particularly with regard to the agency’s strategic planning. However, in the development of its workforce plan, NMB officials said that they did not specifically solicit stakeholder input, and the agency’s workforce and succession plan does not address collecting and incorporating feedback from stakeholders. As our prior work suggests, it will be important for NMB to formally include stakeholders in this process in the future because involving stakeholders can help the agency develop ways to streamline processes and improve human capital strategies. We are sending copies of this report to the Chairman of NMB and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The FAA Modernization and Reform Act of 2012 included a provision for us to evaluate and audit the programs, operations, and activities of the National Mediation Board (NMB) every 2 years. Our first report was issued in December 2013. This is the second review of NMB, and this report examines the extent to which NMB (1) has implemented each of our December 2013 recommendations and (2) has incorporated key procurement practices. To address our research objectives, we reviewed key NMB documents and compared those documents with relevant federal laws, regulations, guidance, and related leading practices identified in our previous work (see table 3). We interviewed NMB officials and current board members. In addition, we interviewed key stakeholders who were interviewed for the December 2013 report, among others. Specifically, we interviewed representatives from key rail and air management and labor groups, including Airlines for America, the National Railway Labor Conference, the AFL-CIO Transportation Trades Department and affiliated rail and air unions, and the International Brotherhood of Teamsters. Further, we interviewed representatives from the National Association of Railroad Referees, an association representing railroad arbitrators; the Dunlop II Committee, an informal NMB advisory group; and the National Railroad Adjustment Board, which hears rail grievance arbitration cases. The results of these interviews are not generalizable to all NMB stakeholders. Finally, we interviewed officials at the Office of Management and Budget and the Office of Personnel Management to determine how these agencies provide oversight and guidance to NMB. In addition, we reviewed NMB procurement data. Specifically, we reviewed data provided by the Department of the Treasury's Bureau of the Fiscal Service on NMB's fiscal year 2014 and 2015 contract actions. We assessed the reliability of data from the Bureau of the Fiscal Service and NMB's fiscal year 2013 through 2015 financial statement audit reports by interviewing knowledgeable officials and reviewing relevant documents. We determined that these data were sufficiently reliable for our purposes. We conducted this performance audit from April 2015 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Examples of NMB's status for each key information security and privacy practice follow:
- NMB has not conducted risk assessments of its new enterprise network and financial management systems.
- Partially following: NMB developed a policy for managing agency information, documents, and records in May 2013. In addition, it drafted procedures for its new enterprise network that include provisions for access and identity control, configuration management, planning, contingency monitoring, and audits. However, NMB has not developed agency-wide policies and procedures that govern its information security program, including policies and procedures for the oversight of third-party providers.
- Partially following: NMB has drafted a security plan for its new enterprise network dated May 2014. However, NMB has not developed a security plan for its new financial management systems.
- NMB stated its staff was provided security awareness training during 2015. However, NMB did not provide evidence to support that all employees and contractors had received the training.
- Partially following: NMB conducted an initial review of its new enterprise network in May 2014. NMB stated its new financial management systems were reviewed in September 2014. However, NMB was unable to provide evidence to support the review.
- Partially following: NMB has not established and documented a remedial action process for its information security control weaknesses. NMB has not formally documented and tracked its preliminary plan of actions for its new enterprise network and has not included required attributes, such as milestones and scheduled completion dates. In addition, NMB has not yet begun formally documenting and tracking any information security control weaknesses that have been identified through other reviews (e.g., GAO).
- Partially following: NMB has developed a procedure for handling cyber incidents. However, there are no indicators of date, review, and approval. In addition, the procedure does not include required actions, such as mitigating risks associated with incidents before substantial damage is done, and notifying and consulting with the federal information security incident center.
- NMB has not established and maintained up-to-date continuity of operations plans and procedures for its information systems. Specifically, its continuity of operations plan has not been updated since June 2011 and does not reflect the current information technology environment.
- NMB has designated its Assistant Chief of Staff as its senior agency official for privacy.
- NMB does not have policies and procedures for privacy protections.
- NMB has not conducted a privacy impact assessment for its financial management systems, which contain personally identifiable information.
- NMB did not issue a system of records notice for its financial management systems.
Cindy Brown Barnes, (202) 512-7215, brownbarnesc@gao.gov. In addition to the contact named above, Clarita Mrena (Assistant Director), Amy Anderson (Analyst in Charge), Benjamin L. Sponholtz, Shirley Abel, Marie Ahearn, James Rebbe, Shaunyce Wallace, and Candice Wright made significant contributions to this report. In addition, key support was provided by James Bennett, Rachael Chamberlin, Susan Chin, David Chrisinger, Larry Crosland, Karin Fangman, Maria Gaona, Gretta Goodwin, Christopher Jones, Julia Kennon, Jason Kirwan, Kathy Leslie, Benjamin Licht, Steven Lozano, Monica Perez-Nelson, and Walter Vance.
NMB was established under the Railway Labor Act to facilitate labor relations for railroads and airlines by mediating and arbitrating labor disputes and overseeing union elections. The FAA Modernization and Reform Act of 2012 included a provision for GAO to evaluate NMB programs and activities every 2 years. GAO's first report under this provision, issued in December 2013, included seven recommendations for NMB based on assessments of policies and processes in several management and program areas. This second report examines the extent to which NMB has (1) implemented recommendations made by GAO in December 2013 and (2) incorporated key procurement practices. GAO reviewed relevant federal laws, regulations, and NMB documents, such as its strategic and workforce plans, as well as contracting data for fiscal years 2014–2015, and interviewed NMB officials. The National Mediation Board (NMB) has made some progress in addressing the seven recommendations GAO made in December 2013; however, additional actions are needed to fully implement those recommendations and strengthen operations (see table). Without full implementation, NMB lacks reasonable assurance that its limited resources are effectively targeted and may be missing opportunities to improve performance and mitigate risks in program and management areas. NMB is following some key procurement practices that GAO has identified in prior work. However, NMB has not developed and implemented written policies and processes—consistent with internal control standards and best practice—that reflect its new interagency agreement with the Department of the Treasury for the performance of certain procurement functions. Without this documentation, NMB cannot ensure the use of consistent processes in its new procurement environment. GAO recommends that NMB develop and implement written policies and processes to reflect its current procurement environment. NMB agreed with the recommendation and indicated it would take steps to implement it.
The National Defense Authorization Act for Fiscal Year 2002 extended the authority of the 1990 BRAC legislation, with some modifications, to authorize an additional BRAC round in 2005. Under section 2912 of the 1990 Act and as part of its fiscal year 2005 budget submission, DOD was required to submit a 20-year force structure plan, an infrastructure inventory, and a certification that additional closures and realignments were needed and that annual net savings would be achieved for each military department by fiscal year 2011. The force structure plan was to be based on assessments by the Secretary of Defense of the probable threats to national security between fiscal years 2005 and 2025. Furthermore, the plan was to be based on the probable end strengths and major military force units (land divisions, carrier and other major combatant vessels, and air wings) needed to meet these threats. DOD was also required to prepare a comprehensive inventory of military installations worldwide that indicated the number and type of facilities in the active and reserve forces of each military department. Using the force structure plan and the infrastructure inventory, the Secretary of Defense's submission to Congress was required to address (1) the inventory necessary to support the force structure, (2) the categories of excess infrastructure and infrastructure capacity, and (3) an economic analysis of the effect of the closure or realignment of military installations to reduce excess capacity. In analyzing the infrastructure requirements, DOD was to consider the continuing need for and availability of military installations outside the United States and any efficiency that might be gained from joint tenancy by more than one branch of the Armed Forces on military bases. On the basis of the force structure plan, the infrastructure inventory, and the economic analysis, the Secretary was required to certify whether the need existed for further closures and realignments and, if so, that an additional round would result in annual net savings for each military department beginning not later than 2011. Collectively, these requirements were to be addressed in a report to Congress at the time DOD submitted its fiscal year 2005 budget justification documentation. The legislation also stipulated that if the certifications were provided in DOD's report to Congress, we were to evaluate the force structure plan, infrastructure inventory, and final selection criteria, as well as the need for an additional BRAC round. We were required to issue a report not later than 60 days after DOD submitted its report to Congress. Section 2913 of the 1990 Act, as amended, also required the Secretary of Defense to publish in the Federal Register the selection criteria for use in the BRAC 2005 round and to provide an opportunity for public comment. The legislation required that military value be the primary consideration in making recommendations to close or realign military installations and directed the inclusion of a number of considerations in formulating the selection criteria. The proposed selection criteria were published on December 23, 2003, with a public comment period ending January 30, 2004. The final criteria were published on February 12, 2004. We were also required by the legislation to evaluate the final selection criteria as part of our overall assessment of DOD's reporting on BRAC issues in 2004. This is in keeping with GAO's longstanding role as an independent, objective observer of the BRAC process.
Legislation authorizing the 2005 round continued the previous legislative requirement, applicable to earlier BRAC rounds, that we review the Secretary's recommendations and selection process; it requires us to report to the congressional defense committees no later than July 1, 2005, 45 days after the last date by which the Secretary must transmit to the congressional defense committees and the BRAC Commission his recommendations for closures and realignments. To make an informed and timely assessment, we have consistently operated in a real-time setting and have had access to significant portions of the process as it has evolved, thus affording the department an opportunity to address any concerns we raised on a timely basis. From our vantage point, we are looking to see to what extent DOD follows a clear, transparent, consistently applied process, where we can see a logical flow between DOD's analysis and its decision making. DOD's report to Congress generally addressed all of the requirements in section 2912 of the Defense Base Closure and Realignment Act of 1990, as amended, and separately complied with the requirements in section 2913 for adopting selection criteria to guide BRAC decision making. In some instances, according to DOD officials, there were limitations in the data provided in DOD's section 2912 report in order to avoid preempting or prejudging the ongoing analytical process for the 2005 BRAC round. Table 1 details the legislative requirements for DOD's section 2912 report, indicates the pages in DOD's report where the issues are addressed, and provides our observations on the extent to which DOD provided the information required by each subsection in the legislation. Likewise, as discussed in a subsequent section, DOD also complied with the requirements of section 2913 in adopting its selection criteria for the 2005 BRAC round. While DOD's worldwide military installation inventory, 20-year force structure plan, and selection criteria are all important in setting a framework for the BRAC process, the latter two figure prominently in guiding BRAC analyses for the 2005 round. Although DOD provided a worldwide inventory of installations and facilities for each military department as required by the legislation, the inventory exceeds the needs of the 2005 BRAC process, which focuses on domestic bases. Further, to the extent one looks to the inventory as providing a total accounting of DOD facilities worldwide, it should be noted that the inventory is not complete: not all overseas installations and associated facilities where U.S. forces are deployed are included, primarily because some are considered temporary in nature. The unclassified portion of the force structure plan, extending through 2009, has more of a macro-level focus reflecting limited change across the military services, even though the services have a number of initiatives under way that could affect force structure and infrastructure requirements. Nevertheless, DOD's ongoing BRAC analysis will need to consider the impact of such changes on infrastructure requirements. The department's final selection criteria, although incorporating legislatively directed language, essentially follow a framework similar to that employed in prior BRAC rounds. The full analytical sufficiency of the criteria will best be assessed through their application, as DOD completes its data collection and analysis for the 2005 round.
As required by the legislation, DOD provided a worldwide inventory of installations, which included the number and type of facilities in the active and reserve forces. While the inventory provides a detailed listing of facilities, it extends beyond the needs of the 2005 BRAC round, which focuses on domestic installations. At the same time, it has some limitations as a complete inventory for use beyond BRAC because it does not include all overseas installations. For example, the inventory omits various installations and associated facilities located in parts of the Middle East, such as Iraq, Afghanistan, and Kuwait. DOD and military service officials told us that these installations are considered temporary or classified in support of contingency operations and are not included in the database used to generate the inventory. This limitation should not affect the conduct of the 2005 BRAC round since the focus is on domestic bases, and DOD has identified the domestic bases in the database to assess in the BRAC 2005 round. The inventory of installations and facilities was derived from DOD's Facilities Assessment Database, which is updated annually from the military services' real property databases. Because of time constraints, we performed only limited work on the accuracy of the inventory. Contractors who maintain the Facilities Assessment Database told us that since 1998 they have validated and verified facility data annually by performing data queries—such as verifying the size of buildings or the year a facility was acquired or built—to identify anomalies in the data. Contractor officials stated the queries have been successful in correcting erroneous data reported by the services and that the quality of the data has improved since 1998. As with prior BRAC rounds, DOD has provided Congress with a force structure plan that will guide or inform BRAC decisions in 2005, except that legislation authorizing the 2005 BRAC round required development of a 20-year plan instead of the 6-year plan required in prior rounds. DOD's section 2912 report contains the unclassified portion of DOD's 20-year plan extending through fiscal year 2009; the remaining years of the plan are addressed in a classified annex to the report. The unclassified report provides more of a macro-level focus (e.g., number of Army divisions) reflecting limited changes across the military services, even though the services have a number of initiatives under way that could affect force structure and infrastructure requirements and that will need to be considered by DOD as it performs its 2005 round analyses. DOD has the option of modifying its force structure plan, as needed, with its fiscal year 2006 budget submission, which would be expected before DOD issues its BRAC recommendations. Table 2 summarizes DOD's force structure plans at the macro level through 2009 by service force units and by end strength. It depicts limited changes in force units and end strength for active and reserve components of most services. Exceptions include the Navy, which expects to reduce personnel but increase the number of ships in its inventory, and the Air Force, which plans a slight increase in reserve personnel end strength. While the Army showed no force structure changes through 2009, Army officials told us that they have a number of initiatives under way that may affect the force structure and related infrastructure requirements.
Specifically, the Army is restructuring the way it organizes its forces to achieve greater flexibility by increasing the number of brigade combat teams from 33 to 43 or more. To achieve these goals while maintaining global commitments, the Army has been authorized by the Secretary of Defense to temporarily increase its end strength by 30,000 personnel through fiscal year 2007. Congress is considering legislation to permanently authorize this increase. In addition, the Army is in the process of rebalancing capabilities between the active and reserve components by moving certain early-deploying and high-demand capabilities, such as military police and civil affairs, from the reserve components into the active force. Although the BRAC statute allows DOD to submit a revised force structure plan with the fiscal year 2006 budget submission, Army officials told us that many of the details about this restructuring would not be completed by that time. Navy officials told us that their plans include the commissioning of 17 new ships (13 Arleigh Burke destroyers, 2 submarines, 1 amphibious ship, and 1 littoral combat ship) while decommissioning 2 older ships. Navy officials indicated that the projected reductions in the number of active personnel result primarily from decommissioning ships and air squadrons and changes to crew requirements on some ships, and that the projected reduction in reserve personnel is caused primarily by plans to deactivate 7 maritime patrol squadrons. Navy officials also noted plans to increase the number of ships in the Navy's inventory in future years, but said the Navy also has efforts under way to reduce average crew size per ship. Although the force structure plan shows a planned increase in the number of ships, available information indicates some uncertainty over the total number of ships the Navy may expect for its future force structure. Air Force end strength levels shown in the force plan reflect authorized levels and not the current over-strength levels, consistent with Air Force expectations of reducing current levels to those authorized. While the Air Force showed minimal force structure changes through 2009, an Air Force official stated that the service plans to increase the number of aircraft per squadron as well as increase crew ratios to make more effective use of fewer but more capable aircraft, which would most likely reduce future infrastructure requirements. We have previously reported that the Air Force could not only reduce infrastructure by increasing the number of aircraft per fighter squadron but could also save millions of dollars annually by doing so. We recognize that developing a 20-year force structure plan is a challenging task for the department, given a host of uncertainties about the future security environment, potential technology advances and their application to the future force, and ongoing departmental transformation efforts. The uncertainties are evident in various ongoing defense programs. While increased use of unmanned aerial vehicles, for example, could have far-reaching effects on future defense force structure, we noted in a recent report that DOD's approach to planning for developing and fielding this capability does not provide reasonable assurance that its investment will facilitate the efficient integration of these vehicles into the force structure. Further, DOD officials told us that another challenging aspect of its force structure planning lies in the longer term of the plan (the years beyond 2009).
In addition to the uncertainties cited above, these longer-term years are characterized by additional unknowns regarding future funding levels that could affect the future force structure and associated requirements, such as the total number of ships for the Navy. Despite these inherent uncertainties, however, the department must factor relevant assumptions about potential future force structure changes and surge requirements into its analyses for the upcoming BRAC round. The department's final selection criteria essentially follow a framework similar to that employed in prior BRAC rounds, with specificity added in selected areas in response to requirements contained in legislation authorizing the 2005 round. The Defense Base Closure and Realignment Act of 1990, as amended in 2002, required DOD to give priority to selection criteria dealing with military value, including (1) the impact on joint war fighting, training, and readiness; (2) the availability and condition of training areas suitable for maneuver by ground, naval, or air forces throughout diverse climates and terrains and staging areas for use by the Armed Forces in homeland defense missions; and (3) the ability to accommodate contingency, mobilization, and future force requirements. The legislation also required DOD to give special consideration to other criteria, many of which parallel those used in prior BRAC rounds. Furthermore, the legislation required DOD to consider cost impacts to other federal entities as well as to DOD in its BRAC decision making. Additionally, the National Defense Authorization Act for Fiscal Year 2004 requires DOD to consider surge requirements in the 2005 BRAC process. Table 3 compares the 1995 BRAC criteria with those adopted for 2005, with changes highlighted in bold. Our analysis of lessons learned from prior BRAC rounds affirmed the soundness of these basic criteria and generally endorsed their retention for the future, while recognizing the potential for improving the process by which the criteria are used in decision making. Notwithstanding our endorsement of the criteria framework, in a January 27, 2004, letter to DOD, we identified two areas in which we believed the draft selection criteria needed greater clarification to fully address special considerations called for in the legislation (see app. III). Specifically, we noted that the criterion related to cost and savings does not indicate the department's intention to consider potential costs to other DOD activities or federal agencies that may be affected by a proposed closure or realignment recommendation. Also, we pointed out that the criterion on environmental impact does not clearly identify the extent to which costs related to potential environmental restoration, waste management, and environmental compliance activities would be included in cost and savings analyses of individual BRAC recommendations. We suggested that DOD could address our concerns by incorporating these considerations either directly, in its final criteria, or through later explanatory guidance. DOD indicated it would address our concerns through clarifying guidance rather than a change to the criteria. We have not yet seen that guidance. DOD also received a variety of other comments on the draft criteria from members of Congress, other elected representatives, and the general public but did not make any changes before issuing the final criteria.
Most of these comments involved the military value criteria (criteria 1-4 in table 3) and centered on the maintenance of adequate surge capacity; the roles military installations fulfill in homeland defense missions; the unique features of research, development, test, and evaluation facilities; and the preservation of vital human capital in various support functions. In responding to those comments, DOD expressed the view that the draft criteria adequately addressed these issues and saw no need to change them. For example, DOD said that surge requirements will be addressed under criterion one, which requires the department to consider "current and future mission capabilities," and criterion three, which requires DOD to consider an installation's ability to "accommodate contingency, mobilization, and future total force requirements" to support operations and training. Furthermore, DOD noted that the National Defense Authorization Act for Fiscal Year 2004 requires the Secretary of Defense to "assess the probable threats to national security" and determine "potential, prudent, surge requirements" as part of BRAC 2005. DOD also noted that criterion two recognizes the role of military installations as staging areas for forces conducting homeland defense missions. Collectively, in our view, many of the public comments on DOD's criteria expressed concern that the criteria for the 2005 BRAC round focused on assessing military value based on military missions and operational capabilities without recognizing important support capabilities such as research, development, test, and evaluation. Although modifications to the criteria might have been made to address some of these concerns, the absence of such changes does not necessarily mean that these issues will not be considered in applying the criteria during the BRAC process. For example, the department has established a variety of joint cross-service groups to analyze various support functions during the upcoming round, and each group will have to adapt the final criteria for its particular support area to assess military value related to that functional area. While our monitoring of the ongoing BRAC process indicates this is occurring, the effectiveness of these efforts will best be assessed as the groups complete their work. Other BRAC-related issues included in DOD's report—excess infrastructure capacity, estimated savings for the 2005 round, and the economic impact of prior BRAC actions on communities—are of widespread interest to Congress and the public and important to DOD's certification regarding the need for a BRAC round. Although the methodology DOD employed to identify excess capacity has some limitations, DOD's report does provide a rough indication that excess base capacity exists. Further, historical financial data would suggest that, assuming conditions similar to those in the 1993 and 1995 rounds, each of the military departments could achieve annual net savings by 2011. As to economic impact, our work has shown that many communities surrounding bases closed in the previous rounds have fared better than the national average in terms of changes in unemployment rates and per capita income, with more mixed results recently, allowing for some negative effect from the economic downturn in recent years.
While DOD’s analysis of its infrastructure capacity for the 2004 report, which was completed outside the 2005 BRAC process, gives some indication of excess capacity across certain functional areas through fiscal year 2009, the methodology for that analysis has some limitations that could cause the results to be either overstated or understated and that raise questions about using the methodology to project a total amount of excess capacity across DOD. At the same time, DOD’s methodology did not consider any additional excess capacity that might be identified by analyzing facilities or functions on a joint or cross-service basis, a priority for the 2005 round. A more complete assessment of capacity and the potential to reduce it must await the results of the current BRAC analyses being conducted by DOD. To estimate excess capacity, the military services and the Defense Logistics Agency (DLA) compared the capacity for a sample of bases in 1989 with the projected capacity of a sample of bases in 2009. The services and DLA categorized the bases according to their primary function, and they identified a variety of indicators, or metrics, to measure capacity for each functional category. For example, they used total maneuver acres per brigade to establish capacity for Army training bases, total square feet of parking apron space to establish capacity for active and reserve Air Force bases, and total direct labor hours (versus budget or programmed direct labor hours) to establish capacity for Navy aviation depots. See app. IV for additional information on how DOD computed excess capacity. This methodology has some limitations, as we reported in 1998 when DOD used it to project excess capacity in supporting the need for a future BRAC round. DOD’s use of 1989 as a baseline did not take into account the excess base capacity that existed in that year prior to base closures in the 1988, 1991, 1993, and 1995 BRAC rounds. As a result, the percentage of increased excess capacity reported understated actual excess capacity by an unknown amount for some functional categories and may have overstated excess capacity for other categories. The Congressional Budget Office (CBO) also reported that the department’s use of 1989 as a baseline did not take into account the excess capacity that might have existed in 1989. Furthermore, CBO reported that the approach could understate the capacity required if some types of base support are truly a fixed cost, regardless of the size of the force. As noted above, the methodology also did not consider any additional excess capacity that might be identified by analyzing facilities or functions on a cross-service basis. In addition, capacity for some functions was measured differently for each service. For example, the Army and Air Force measured capacity for test and evaluation facilities in terms of total square feet of space, while the Navy measured its capacity for these facilities in terms of work years. Finally, as we recently noted, the variety of metrics and the differences across the military services make it difficult to be precise when trying to project a total amount of excess capacity across DOD. Military service officials told us that they typically use most of the capacity metrics included in DOD’s report, along with other measures, to assess excess capacity. For example, these officials stated that the metrics for the depot, industrial, shipyard, logistics base, and supply functional areas are used, along with other measures, as indicators of excess capacity.
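To illustrate the general form of this ratio comparison, the following sketch computes an excess capacity percentage for a single functional area. The metric and all figures shown are hypothetical examples rather than values from DOD's report.

    # Illustrative sketch of the capacity ratio comparison described above.
    # All figures are hypothetical, not values from DOD's 2004 report.
    def excess_capacity_percent(metric_1989, force_1989, metric_2009, force_2009):
        # Capacity ratio: the capacity metric divided by a measure of force
        # structure (e.g., maneuver acres per brigade for Army training bases).
        ratio_1989 = metric_1989 / force_1989
        ratio_2009 = metric_2009 / force_2009
        # Excess capacity: the extent to which the projected 2009 ratio
        # exceeds the 1989 baseline ratio.
        return (ratio_2009 / ratio_1989 - 1.0) * 100.0

    # Hypothetical example: acreage falls 10 percent while the number of
    # brigades falls 25 percent, leaving the 2009 ratio about 20 percent
    # above the 1989 baseline.
    print(excess_capacity_percent(1_000_000, 40, 900_000, 30))  # about 20.0

As the example suggests, excess capacity under this methodology can grow even when the capacity metric itself shrinks, so long as the force structure shrinks faster.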
However, we found that some of the metrics used in DOD’s report were less reliable than others as indicators of excess capacity. For example, the metric for Marine Corps bases compared the acres at five Marine Corps bases to the total authorized military personnel for the Marine Corps, not just the authorized personnel at the five bases. Marine Corps officials acknowledged that this was not a requirements-based metric for measuring excess capacity at Marine Corps bases. Likewise, the metric for administrative space in the Air Force was based on the administrative space at only one Air Force base. Air Force officials stated that this occurred because, under the methodology, each Air Force base could be considered in only one functional area. While prior BRAC rounds have focused primarily on reducing excess capacity, DOD officials have stated that this is not the sole focus of the 2005 BRAC round. These officials noted that the 2005 round aims to further transform the military by rationalizing base infrastructure to the force structure, enhance joint capabilities by improving joint utilization, and convert waste to war-fighting capability by eliminating excess capacity. This approach has the potential to identify greater excess capacity than previously identified. However, a true assessment of excess capacity must, of necessity, await the completion of DOD’s ongoing official analyses under BRAC 2005. DOD’s financial data would suggest that, assuming conditions similar to those of the 1993 and 1995 rounds, the net annual savings for each of the military departments for the 2005 round could be achieved by 2011, as certified by the Secretary in DOD’s report. DOD estimated that it would accrue net annual savings of $3 billion to $5 billion departmentwide by 2011. While we believe that the potential exists for significant savings to result from the 2005 BRAC round, it is difficult to conclusively project the expected magnitude of the savings because there simply are too many unknowns, such as the specific timing of individual closure or realignment actions and the extent to which DOD’s efforts to maximize joint utilization and further its transformation goals would affect savings. Finally, the extent to which forces currently based overseas may be redeployed to the United States, and the effect that redeployment may have on BRAC and subsequent savings, also remains unknown. The Secretary’s estimate of $3 billion to $5 billion in net annual savings by 2011 was based in part on savings achieved from the 1993 and 1995 BRAC rounds. The lower estimate assumes that the actions in the 2005 round would reduce infrastructure by about 12 percent, comparable to the reduction that occurred in the 1993 and 1995 rounds combined. The higher estimate assumes that infrastructure would be reduced by 20 percent, which is about 67 percent higher than the previous two rounds combined. While we believe the potential for significant savings exists, a more reliable estimate of savings is not practical until the department has developed actual closure and realignment proposals. While DOD’s report estimated that net annual savings of $3 billion to $5 billion could be achieved departmentwide, it did not explicitly indicate the amount of savings that each service would achieve by 2011. Our analysis of the savings from the 1993 and 1995 BRAC rounds, however, indicates that each department accrued net annual savings by the sixth year of implementation, as seen in table 4.
Another way of looking at net savings is to consider the point at which cumulative savings exceed the cumulative costs of implementing BRAC decisions over a period of years. Experience has shown that the department incurs significant upfront investment costs in the early years of a BRAC round, and it takes several years to fully offset those cumulative costs and begin to realize cumulative net savings. The difference in the terminology is important to understand because it has a direct bearing on the magnitude and assessment of the savings at any given time. As previously discussed, each military department achieved net annual savings during the 1993 and 1995 rounds by the sixth year of implementation. However, with the exception of the Navy in 1995, the military departments did not achieve cumulative net savings for both the 1993 and 1995 rounds until after the sixth year of implementation. Notwithstanding the issues we raise that could affect savings, we continue to believe that it is vitally important for DOD to improve its mechanisms for tracking and updating its savings estimates. We have previously noted that DOD’s BRAC savings estimates have been imprecise for a variety of reasons, such as weaknesses in DOD’s financial management systems that limit the ability to fully account for the cost of its operations; the fact that DOD’s accounting systems, like other accounting systems, are oriented toward tracking expenses and disbursements, not savings; the exclusion of BRAC-related costs incurred by other government agencies; and inadequate updating of the savings estimates that are developed. Improvements can and should be made to address this issue. In its 1998 report to the Congress on BRAC issues, DOD proposed efforts that, if adopted, could provide for greater accuracy in the estimates. Specifically, DOD proposed developing a questionnaire that would be completed annually by each base affected by BRAC rounds during the 6-year implementation period. The questionnaire would request information on costs, personnel reductions, and changes in operating and military construction costs in order to provide greater insight into the savings created by each BRAC action. DOD suggested that developing such a questionnaire would be a cooperative effort involving the Office of the Secretary of Defense, the military services, the defense agencies, the Office of the DOD Inspector General, and the service audit agencies. This proposal recognizes that better documentation and updating of savings will require special efforts parallel to the normal budget process. DOD has not yet initiated actions to implement this proposal. We strongly endorse such action. If DOD does not take steps to improve its estimation of savings in the future, then previous questions about the reliability, accuracy, and completeness of DOD’s savings estimates will likely continue. We intend to examine DOD’s progress in instituting its proposed improvements during our review of the 2005 BRAC process. The department’s report recognized that BRAC actions can affect the local economies of the surrounding communities. It noted that from 1988 through 1995, realignment or closure actions were approved at 387 locations and that, in implementing the actions, the department had sought to minimize any adverse local impacts with a coordinated program of federal assistance from both DOD and domestic agencies.
Our own work has shown that while the short-term impact can be very traumatic, several factors, such as the strength of the national and regional economies, play a role in determining the long-term economic impact of the base realignment or closure process on communities. Our work has also shown that many communities surrounding closed bases from the previous rounds have fared better than the national average in terms of changes in unemployment rates and per capita income, with more mixed results recently, reflecting some negative effects of the economic downturn of recent years. Our analysis of selected economic indicators has shown that over time the economies of BRAC-affected communities compare favorably with the overall U.S. economy. We used unemployment rates and real per capita income growth rates as broad indicators of the economic health of those communities where base closures occurred during the prior BRAC rounds. Our analysis included 62 communities surrounding base realignment and closure sites from all four BRAC rounds for which government and contractor civilian job losses were estimated to be 300 or more. We previously reported that as of September 2001, of the 62 communities surrounding these major base closures, 44 (71 percent) had average unemployment rates lower than the (then) average 9-month national rate of 4.58 percent. We are currently updating this analysis and attempting to assess the impact of the recent economic downturn on these communities. Our preliminary results indicate that, in keeping with the economic downturn in recent years, the average unemployment rate in 2003 had increased for 60 of the 62 communities since 2001. However, the 2003 unemployment figures indicated that the rates for these 62 communities continue to compare favorably with the overall U.S. rate of 6.1 percent; that is, 43 (or 69 percent) of the communities had unemployment rates at or below the U.S. rate. In our previous work, we had also reported that the annual per capita income growth rates of these 62 BRAC-affected communities compared favorably with national averages. We found that from 1996 through 1999, 33 (or 53 percent) of the 62 communities had an estimated annual real per capita income growth rate that was at or above the average of 3.03 percent for the nation at that time. Our recent analysis has also noted that changes in the average per capita income growth rate of these communities over time compared favorably with corresponding changes at the national level. This analysis indicates that 30 (48 percent) of the 62 areas examined had average income growth rates higher than the average U.S. rate of 2.2 percent, a drop from the rate during the previous time period. In our previous report, we identified a number of factors that affected economic recovery, based on our discussions with various community leaders. These factors included the robustness of the national economy, the diversity of the local economy, regional economic trends, natural and labor resources, leadership and teamwork, public confidence, government assistance, and reuse of base property. If history is any indication, these factors are likely to be equally applicable in dealing with the effects of closures and realignments under BRAC 2005. In transmitting the 2004 report to Congress, the Secretary of Defense certified the need for an additional BRAC round.
The certification was predicated on the force structure plan and infrastructure inventory included with the report and was reinforced by the department’s assessment of excess capacity, economic impact, and a certification that net annual savings from a 2005 round could be achieved by 2011. The Secretary’s certification of need for the 2005 BRAC round was echoed in a separate March 22, 2004, memorandum to the Secretary from the Chairman of the Joint Chiefs of Staff. It stated that the Joint Chiefs unanimously agree that additional base realignments and closures are necessary if DOD is to transform the armed forces to meet the threats to national security and execute national strategy. The Chairman also noted that “(d)uring this period of transition, we are fundamentally reconfiguring our forces to meet new security challenges. The military value requirements that flow from future force structure and future strategy needs will differ in character and shape from those of today. BRAC offers a critical tool to turn transformational goals into reality.” We found no basis to question DOD’s certification of the need for an additional BRAC round. The need for an additional BRAC round has long been recognized by various defense officials and studies—and noted in various GAO products since the time of the 1995 BRAC round. (See app. V for a summary of key points from selected GAO products.) The Secretary’s certification of the need for a 2005 BRAC round is underscored by the department’s desire to realize broader objectives in the 2005 round, including fostering jointness and transformation, assessing common business-oriented functions on a cross-service basis, and accommodating the potential redeployment of some forces from overseas bases back to the United States. Analyses conducted in these areas could identify opportunities to achieve consolidations and reduce capacity not previously identified. Having said that, we believe the efficacy and sufficiency of DOD’s BRAC analyses now under way—considering the force structure plan, inventory, and selection criteria—can best be assessed as the BRAC process unfolds. While we found no basis to question the Secretary’s certification of the need for an additional BRAC round, we identified some limitations with the department’s assessment of excess capacity, which was completed outside the BRAC process to meet the 2004 reporting requirement. While clear limitations exist in DOD’s assessment of excess capacity, it does nonetheless point to some areas that warrant additional analysis—and the current BRAC process is an appropriate forum for doing so. Today’s security environment is evolving, as are force structure requirements, technology advancements, and defense transformation efforts. The department must consider ongoing force transformation initiatives in its BRAC analyses as well as factor in relevant assumptions about the potential for future force structure changes—changes that likely will occur long after the time frames for the 2005 BRAC round. This includes consideration of future surge requirements. Assuring Congress and the public that this analysis has been done and that appropriate allowances for future force structure changes have been incorporated into the process will be key to building public confidence in the soundness of the 2005 closure and realignment recommendations. A full discussion of these issues by the department in its report accompanying its BRAC recommendations in 2005 is warranted.
At the same time, consideration of these longer-term issues should not detract from opportunities available to DOD in the upcoming BRAC round to achieve greater economies and efficiencies in support capabilities and use of infrastructure through cross-servicing and joint utilization of bases. Finally, many questions have previously existed about the accuracy and precision of DOD’s estimates of savings from prior BRAC rounds. Weaknesses in DOD’s financial management systems have contributed to this problem and are not likely to be resolved in the near term. At the same time, we have previously recommended, and DOD has agreed, that improvements can and should be made to the accounting for and periodic updating of BRAC savings. That notwithstanding, DOD has not made sufficient efforts to address this issue. DOD needs to provide assurance that it has plans in place for improvements in this area before it begins implementing any closure and realignment decisions from the upcoming BRAC round. We recommend that the Secretary of Defense include in his May 2005 report on recommendations for base closures and realignments a full discussion of relevant assumptions and of the allowances made for potential future force structure requirements and changes, including the potential for future surge requirements. To ensure that the Department of Defense and the military services improve their tracking and updating of BRAC savings estimates associated with implementing closure and realignment decisions for the upcoming BRAC round, Congress may want to consider requiring DOD and the military services to certify that actions have been taken to implement previously planned improvements for tracking and updating their BRAC savings estimates. This certification should be submitted with DOD’s fiscal year 2006 budget request documentation. In commenting on a draft of this report, the Deputy Under Secretary of Defense (Installations and Environment) agreed with our report. DOD’s comments are included in appendix VI of this report. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force and to the Director, Office of Management and Budget. The report will also be available to others upon request and can be accessed at no charge on GAO’s Web site at http://www.gao.gov. In addition, a list of our key prior reports on base realignments and closures is included in appendix VII, and these reports can be accessed on our Web site as well. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Additional contacts and staff acknowledgments are provided in appendix VIII. The scope of this report was determined by the legislative requirements imposed on us and included in sections 2912 and 2913 of the Defense Base Closure and Realignment Act of 1990, as amended. Our focus was to assess the Department of Defense’s (DOD) March 24, 2004, report to Congress regarding issues associated with the need for an additional BRAC round as well as the final selection criteria for the upcoming 2005 BRAC round as published in the Federal Register on February 12, 2004. Because of time constraints, we could not fully assess the accuracy of all data used in the report, but we performed limited reliability assessments of key data contained in DOD’s report and determined that the data were sufficiently reliable for the purposes of this report, with relevant limitations noted.
We evaluated DOD’s responsiveness to the legislative reporting requirements by comparing individual requirements as presented in the legislation with DOD’s presentation of information in its report and final selection criteria. Where appropriate, we made judgments as to the extent to which DOD addressed the requirements and discussed with DOD officials those areas where we believed the requirements were not fully addressed. In some cases, DOD officials from the BRAC Office within the Office of the Secretary of Defense (OSD) told us that the information provided was somewhat limited in order to avoid preempting or prejudging the ongoing analytical process for the 2005 BRAC round. To address the importance of the worldwide installation inventory, force structure plan, and selection criteria, and to evaluate, where appropriate, the analytical sufficiency and accuracy of each, we interviewed DOD officials to obtain their views on the relative importance and applicability of each to the BRAC 2005 process and analyzed the corresponding documentation for analytical sufficiency and accuracy where it was reasonable to do so. More specifically, to evaluate the worldwide installation inventory, we interviewed officials from the contracting firm responsible to DOD for managing its Facilities Assessment Database, the DOD-wide database that was used to compile the worldwide inventory. Our interest was in documenting the contractor’s process for validating the real property data in the database. Because the DOD-wide database draws from the services’ real property databases, we reviewed the contractor’s analysis of anomalies identified in the services’ real property databases (i.e., the Army’s Integrated Facilities System, the Navy’s and Marine Corps’ Navy Facility Assets database, and the Air Force’s Automated Civil Engineer System) to gain a sense of the relative accuracy of the data. We also compared the list of Army, Navy, Marine Corps, and Air Force installations receiving the recent data capacity call for the 2005 BRAC round to the installation inventory to assure ourselves that these installations were a subset of the worldwide inventory. Furthermore, to determine if the inventory included all overseas installations, we compared the listed installations by country to a list of countries where U.S. forces are currently deployed. We then interviewed a DOD official to verify the absence of some overseas installations from the inventory and to obtain the rationale for it. To evaluate the unclassified portion (fiscal years 2005 through 2009) of DOD’s 20-year force structure plan as presented in DOD’s 2004 report, we identified major force unit and personnel end strength changes by service over the specified time frame and sought out the rationale for the increases or decreases. We discussed with service officials the nature of these changes and how these revisions would be considered in the BRAC process. We also interviewed service officials regarding a number of initiatives under way, such as the Army’s efforts to increase the number of brigades in its force, that have implications for the future sizing and composition of the force structure and associated infrastructure for those respective services. We inquired as to when planned force structure changes stemming from these initiatives would be incorporated into DOD’s force structure plan.
To evaluate the final selection criteria for the upcoming 2005 round, we compared the criteria as published in the Federal Register on February 12, 2004, with those used in the 1995 BRAC round. In so doing, we noted the differences and evaluated whether the legislatively directed language regarding selection criteria was incorporated into the revised criteria for the upcoming round. In addition to discussing with DOD officials the use of these criteria as part of a framework for conducting its base analyses for the 2005 round, we relied on our prior work that reported on lessons learned from previous base closure rounds, which covered, among other topics, the analytical sufficiency of the selection criteria. We also referred to a January 27, 2004, letter we sent to the Acting Under Secretary of Defense (Acquisition, Technology & Logistics) commenting on our analysis of the draft criteria that were out for public comment at that time. Finally, we reviewed the public comments received on the draft selection criteria and discussed with DOD officials their rationale for not incorporating any of the suggested changes into the final selection criteria. While the mandate did not specifically require us to address excess defense infrastructure capacity, estimated BRAC savings from the 2005 round, or the economic impact on communities surrounding base closures in prior rounds, as discussed in DOD’s 2004 report, we chose to do so because of widespread interest in Congress and the public and the importance of these issues to DOD’s certification of the need for a BRAC round. In addition to an analysis of these topics as presented in DOD’s 2004 report, we relied on prior and ongoing work related to these areas of interest. More specifically, to evaluate the analytical sufficiency of DOD’s excess capacity analysis, we interviewed DOD and service officials and reviewed documentation describing DOD’s methodology. We inquired about the reasonableness of the various metrics used to develop the capacity measures for the various functional support areas, such as depots, identified in the analysis in DOD’s report. We verified the calculations of increases in each of the functional areas and on an aggregate basis, and we partially verified the data reported by the services in making the comparisons of capacity between the 1989 baseline year and 2009. DOD’s BRAC Office provided the services with the 1989 baseline numbers for the various metrics used to measure capacity. We were unable to verify the 1989 baseline data in DOD’s report for the Army and the Department of the Navy, which had accepted the numbers, because supporting documentation from DOD’s development of that data had not been retained from the time the data were first developed in 1998 for an earlier DOD report. However, we did verify the Air Force’s 1989 baseline numbers because the Air Force had revised the DOD-provided numbers using available data. We also selectively verified the projected 2009 data in the analysis. To evaluate whether DOD’s estimates of expected savings from the upcoming 2005 round were reasonable, we interviewed a DOD official in the OSD BRAC Office and examined the methodology, including the assumptions and the underlying basis employed by DOD in deriving the estimates.
Because a key assumption for building the estimates focused on the probable range of aggregate plant replacement value reductions (i.e., the scope of the infrastructure reduction) that had occurred across a combination of the 1993 and 1995 rounds, we were not in a position to question whether this assumption would be valid for the 2005 round, given that the analysis for the 2005 round has not yet been completed. As to whether DOD can achieve the net annual savings for each military department by 2011, we reviewed DOD’s historical financial data for the 1993 and 1995 rounds to ascertain if the military departments achieved net annual savings by the final, or sixth, year of implementation for these rounds. This would correspond to the year 2011 for the 2005 round and again would assume that the 2005 round would be similar to the 1993 and 1995 rounds. To evaluate the economic recovery of communities affected by the BRAC process in the prior rounds, we first performed a broad-based economic assessment of 62 communities where more than 300 civilian jobs were eliminated during the prior closure rounds. This work was essentially an update of similar work we had performed and reported on in April 2002. We used two key economic indicators—unemployment and real per capita income growth rates—as measures to analyze changes in the economic condition of communities over time in relation to the national averages. We chose unemployment and real per capita income as key performance indicators because (1) DOD used these measures in its community economic impact analysis during the BRAC location selection process and (2) economists commonly use these measures in assessing the economic health of an area over time. While our assessment does provide an overall picture of how these communities compare with the national averages, it does not necessarily isolate the condition, or the changes to the condition, that may be attributed to the BRAC action. We conducted our work from March to May 2004 in accordance with generally accepted government auditing standards. To perform the capacity analysis, the services and the Defense Logistics Agency (DLA) compared the capacity for a sample of bases in 1989 to the capacity for a sample of bases in 2009. The services then categorized the bases according to their primary missions and defined indicators of capacity, or metrics, for each category. DOD divided the metric by measures of force structure to determine a ratio and calculated the extent to which the ratio of capacity in 2009 exceeded the ratio in 1989. As an example, table 5 shows the results for the Army as shown in DOD’s report. Similar tables appear for the Navy, Air Force, and DLA in DOD’s report. DOD then took a weighted average of all functional areas to determine the overall excess capacity for each department. The weights were computed by dividing the number of bases in a functional area by the total number of bases in all functional areas. Table 6 shows the overall estimated percentage of excess capacity for each military department and DLA. Likewise, DOD computed a weighted average to estimate an overall percentage of excess capacity for DOD. The weights were computed by dividing the number of bases per department by the total number of bases included in the analysis.
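The weighted-average computation described above can be expressed compactly, as in the sketch below; the functional areas, base counts, and excess capacity percentages shown are hypothetical examples, not figures from DOD's report. The DOD-wide average follows the same pattern, with the military departments as the units and the number of bases per department as the weights.

    # Illustrative sketch of the weighted-average computation; the areas,
    # base counts, and excess percentages below are hypothetical.
    areas = {
        # functional area: (number of bases, excess capacity percent)
        "training": (10, 25.0),
        "depots": (4, 40.0),
        "administration": (6, 10.0),
    }

    # Weight each functional area by its share of the total number of bases.
    total_bases = sum(bases for bases, _ in areas.values())
    overall = sum((bases / total_bases) * pct for bases, pct in areas.values())
    print(f"Overall excess capacity: {overall:.1f} percent")  # 23.5 percent

Note that because the weights reflect base counts rather than, say, plant replacement value, a functional area with a few large bases carries no more weight than one with a few small bases, which is one reason aggregate projections built this way should be treated as rough indicators.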
At the time the 1995 BRAC round was being completed and subsequently, DOD officials, including the Secretary and the Chairman, Joint Chiefs of Staff, recognized that additional excess capacity would remain following that round and that future base realignments and closures would be needed. Various GAO products have noted that issue in subsequent years. The following are selected excerpts from key GAO products. “Despite these recent BRAC rounds, DOD continues to maintain large amounts of excess infrastructure, especially in its support functions, such as maintenance depots, research and development laboratories, and test and evaluation centers. Each service maintains its own facilities and capabilities for performing many common support functions and, as a result DOD has overlapping, redundant, and underutilized infrastructure. DOD has taken some steps to demolish unneeded buildings on various operational and support bases; consolidate certain functions; privatize, outsource, and reengineer certain workloads; and encourage interservicing agreements—however, these are not expected to offset the need for additional actions. At the same time, DOD officials recognize that significant additional reductions in excess infrastructure requirements in common support areas could come from consolidating workloads and restructuring functions on a cross-service basis, something that has not been accomplished to any great extent in prior BRAC rounds.” U.S. General Accounting Office, Military Bases: Lessons Learned From Prior Base Closure Rounds, GAO/NSIAD-97-151 (Washington, D.C.: July 25, 1997, p. 3). “Notwithstanding the results of the four recent BRAC rounds, DOD officials recognized, even while they were finishing the 1995 round, that they had missed OSD’s goal in terms of reductions needed through base closures. DOD calculated that the first three BRAC rounds reduced the plant replacement value (PRV) of DOD’s domestic facilities by 15 percent. It established a goal for the fourth round of reducing PRV by an additional 15 percent, for a total of 30 percent. When the Secretary announced his recommendations for base closures and realignments in 1995, OSD projected that if all of the Secretary’s recommendations were adopted, the total PRV would be reduced by 21 percent, nearly a third less than OSD’s goal.” GAO/NSIAD-97-151, p. 17. “The Secretary of Defense’s 1997 Quadrennial Defense Review, which assessed defense strategy, programs, and policies, included the issue of future base closures in the infrastructure portion of the review. In his May 19, 1997, report to Congress on the results of this review, the Secretary asked Congress to authorize domestic base closure rounds in 1999 and 2001. That recommendation was endorsed by the National Defense Panel, the independent, congressionally mandated board that is reviewing the work of the Quadrennial Defense Review and completing its own review of defense issues.” GAO/NSIAD-97-151, p. 3. DOD’s Support Infrastructure Management has been designated as High-Risk by GAO since 1997. GAO’s January 2003 update noted that “DOD plans an additional base closure round in 2005; this could enable it to devote its facility resources on fewer, more enduring facilities. With or without base closures, DOD faces the challenge of adequately maintaining and revitalizing the facilities it expects to retain for future use.
Available information indicates that DOD’s facilities continue to deteriorate because of insufficient funding for their sustainment, restoration, and modernization.” U.S. General Accounting Office, High-Risk Series: An Update, GAO-03-119 (Washington, D.C.: Jan. 2003). In commenting on DOD’s investment plans for reversing the aging of its facilities, we noted that “…because of competing priorities, DOD is not likely to realize its investment objectives for facilities in the near term. More specifically, the services do not propose to fully fund all of OSD’s objectives for improving facilities or, in some instances, the services have developed funding plans that have unrealistically high rates of increase in the out-years compared with previous funding trends and other defense priorities. The base realignment and closure round authorized for fiscal year 2005, while it carries with it a significant up-front investment cost to implement realignment and closure decisions, offers an important opportunity to reduce excess facilities and achieve greater efficiencies in sustaining and recapitalizing the remaining facilities if sufficient funding levels are maintained into the future. Additionally, DOD is reexamining its worldwide basing requirements, which could potentially lead to significant changes in facility requirements over the next several years. As these decisions are implemented over the next several years, this should permit DOD and the services to increasingly concentrate future resources on enduring facilities.” U.S. General Accounting Office, Defense Infrastructure: Long-term Challenges in Managing the Military Construction Program, GAO-04-288 (Washington, D.C.: Feb. 24, 2004). Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004. Military Base Closures: Better Planning Needed for Future Reserve Enclaves. GAO-03-723. Washington, D.C.: June 27, 2003. Military Base Closures: Progress in Completing Actions from Prior Base Realignments and Closures. GAO-02-433. Washington, D.C.: April 5, 2002. Military Base Closures: DOD’s Updated Net Savings Estimate Remains Substantial. GAO-01-971. Washington, D.C.: July 31, 2001. Military Bases: Status of Prior Base Realignment and Closure Rounds. GAO/NSIAD-99-36. Washington, D.C.: December 11, 1998. Military Bases: Review of DOD’s 1998 Report on Base Realignment and Closure. GAO/NSIAD-99-17. Washington, D.C.: November 13, 1998. Military Bases: Lessons Learned from Prior Base Closure Rounds. GAO/NSIAD-97-151. Washington, D.C.: July 25, 1997. Military Bases: Closure and Realignments Savings Are Significant, but Not Easily Quantified. GAO/NSIAD-96-67. Washington, D.C.: April 8, 1996. Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment. GAO/NSIAD-95-133. Washington, D.C.: April 14, 1995. Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments. GAO/NSIAD-93-173. Washington, D.C.: April 15, 1993. Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments. GAO/NSIAD-91-224. Washington, D.C.: May 15, 1991. Military Bases: An Analysis of the Commission’s Realignment and Closure Recommendations. GAO/NSIAD-90-42. Washington, D.C.: November 29, 1989. 
In addition to the individuals named above, Nelsie Alcoser, Nancy Benco, Ray Bickert, Joel Christenson, Warren Lowman, Tom Mahalek, David Mayfield, Charles Perdue, James Reynolds, and Laura Talbott made key contributions to this report.
The Defense Base Closure and Realignment Act of 1990, as amended, required the Department of Defense (DOD) to address several base realignment and closure (BRAC) issues in 2004 for the 2005 BRAC round to proceed. The requirements included reporting on a 20-year force structure plan and an inventory of military installations, and separately adopting selection criteria for the upcoming round. The legislation also required DOD to certify whether an additional BRAC round was needed and, if so, that annual net savings would be realized not later than fiscal year 2011. If the certifications were provided, GAO was required to evaluate DOD's submissions and report to Congress. DOD reported on March 23, 2004, and provided the certifications. In this report GAO evaluates (1) DOD's responsiveness to legislative requirements; (2) the force structure plan, infrastructure inventory, and selection criteria; (3) other key issues included in DOD's report; and (4) DOD's certification regarding the need for an additional BRAC round. DOD's report to Congress generally addressed all legislative reporting requirements in section 2912 of the Defense Base Closure and Realignment Act of 1990, as amended, and separately complied with requirements under section 2913 in adopting selection criteria to guide BRAC decision making. The degree of coverage on some reporting requirements was limited to avoid prejudging the ongoing analytical process for the 2005 round. As directed, GAO analyzed DOD's worldwide installation inventory, force structure plan, and selection criteria. While all three are important in setting a framework for the BRAC process, the latter two figure prominently in guiding DOD's analyses for the 2005 round. The unclassified portion of the 20-year force structure plan, extending through 2009, provides a macro-level focus (e.g., the number of Army divisions) and reflects limited changes across the military services, even though the services have initiatives under way that could affect future force structure and infrastructure requirements. Today's security environment is evolving, as are force structure requirements, technology advancements, and defense transformation efforts. The department must consider these factors in its BRAC analyses, with appropriate allowances for future uncertainties. DOD's selection criteria closely parallel the criteria used in previous rounds while incorporating the provisions required by the legislation authorizing the 2005 round. The analytical sufficiency of the criteria will best be assessed through their application in the ongoing BRAC process. GAO addressed other BRAC-related issues, such as excess defense infrastructure capacity and BRAC savings, because of their importance to DOD's certification of need for the 2005 BRAC round. DOD's excess capacity analysis, completed for the 2004 report, has some limitations that could result in either overstating or understating excess capacity across various functional areas and that make it difficult to project a total amount of excess capacity across DOD. While the analysis gives some indication of excess capacity within the department, the issue warrants a more complete assessment in the BRAC process. That process will also consider joint base use, with the potential for better identifying excess capacity. DOD's historical financial data suggest that, assuming conditions similar to those in the 1993 and 1995 rounds, each of the military departments could achieve annual net savings by 2011, as stipulated by the mandate.
While the potential exists for substantial savings from the upcoming round, it is difficult to conclusively project the expected magnitude of the savings because there are too many unknowns at this time. Additionally, improvements are needed in DOD's accounting for savings after BRAC decisions are made. GAO found no basis to question DOD's certification of the need for an additional BRAC round. While clear limitations exist in DOD's assessment of excess capacity, it does point to some areas that warrant additional analysis, and the current BRAC process is an appropriate forum for doing so.
Employer-sponsored pension plans, in combination with Social Security and personal savings, provide millions of retirees and their families with retirement income. As we reported in our October 1996 report, most employers that sponsor pension plans provide benefits using a defined contribution (DC) plan. For a DC plan, the employer establishes an individual account for each eligible employee and generally promises to make a specified contribution to that account each year. Employee contributions are also often allowed or required and can be made on either a pretax or after-tax basis. Pretax contributions are not taxed in the year they are earned; rather, they are taxed when withdrawn from the employee’s account. After-tax contributions are taxable in the year that they are earned as part of the employee’s annual income. Employers can make “matching” contributions, which are made only if employees also contribute to their accounts, and/or “nonmatching” contributions, which are made regardless of whether or not employees contribute to their accounts. The employee’s retirement benefits depend on the total of employer and employee contributions to the account as well as the account’s investment gains and losses. In the early 1980s, Congress began to consider a new retirement system for federal civilian employees that would be more like private sector retirement systems and include a DC plan component. As a result, in 1986 the Federal Employees’ Retirement System Act was enacted, which closed the Civil Service Retirement System (CSRS) to new entrants and established FERS for employees generally hired after December 31, 1983. FERS is a three-tiered program that includes a basic annuity in addition to the defined contribution Thrift Savings Plan (TSP) and Social Security. Although FERS provides an annuity in addition to Social Security and DC pension benefits, many financial planners believe that under current market conditions income from participant and government contributions to TSP alone could generate 50 percent or more of the retirement income available to most FERS participants. Thus, it may be useful for policymakers to know how the features of TSP compare with the features of private sector DC plans. In comparing the features of private and public sector pension plans, it is important to consider key differences between private and public employers. Notably, private sector employers can deduct the cost of providing pension benefits from their taxable revenues. To qualify for these tax advantages, however, private employers must be in compliance with complex and frequently changing laws and regulations. Public employers need not comply with all of the rules that private employers face in designing and modifying their pension systems. Public sector pension benefits must be legislated, and changes to retirement programs for public employees involve political as well as business, financial, and human resource management issues. Notwithstanding the different environments in which private and public pensions evolve, it is also important to recognize that all employers that sponsor pension and other employee benefit programs do so for the same underlying business and financial reasons, which include the need to (1) attract and maintain an effective workforce in a competitive marketplace, (2) motivate employees to work towards meeting their employers’ goals, and (3) manage the transition of older employees from work to retirement.
To provide the requested information on patterns in plans’ features, we reviewed summary plan descriptions (SPDs)—documents that all private employers were to file with the Department of Labor (DOL) for each pension plan they sponsored—for a sample of private sector employers with 100 or more employees that sponsored only single-employer DC plans to supplement their employees’ Social Security retirement benefits. We stratified our sample by employer size using three groups—employers with 100 to 999 employees, 1,000 to 9,999 employees, and 10,000 or more employees. Because of the larger sampling error associated with the first two strata, we reported on two groups—all the employers in our sample and the subset of larger employers with 10,000 or more employees, the one stratum for which our sample included the entire population of such employers. Because DOL officials told us that the Department would not be able to provide SPDs for all the employers in our sample, we requested SPDs from DOL and directly from the employers. As a result, we were able to obtain and analyze 281 SPDs—67 percent of the 419 employers in our sample. In considering the representativeness of the sample of employers for which we had obtained SPDs, we found that the 281 employers were generally comparable to our universe of 3,297 employers in terms of employer size, industry type, and geographic region. We also analyzed pension plan information that was available from DOL’s research database, from which we had selected our sample of employers, to provide additional information on the plans’ features. To provide information on TSP, we reviewed summary documents published by the Federal Retirement Thrift Investment Board. As agreed with the Subcommittee, we limited the scope of our analyses involving SPDs to those employers with 100 or more employees. Also as agreed, our review included the largest primary plan offered by private employers that sponsored only DC plans in 1993, the most recent data available at the time of our review. We included only single-employer plans in our analyses because the research database did not identify all of the employers associated with each multiemployer plan. Unless specifically noted, the estimates presented in this report are generalizable to the population of employers with 100 or more employees that sponsored only single-employer DC plans in 1993, with a sampling error of no more than 10 percent at the 95-percent confidence level. Also, as agreed with the Subcommittee, we did not independently verify the accuracy of the information (1) described in the SPDs or (2) contained in the DOL research database. Moreover, we could not confirm that the SPDs provided by DOL represented the most up-to-date information for each DC plan in our sample. Lastly, due to differences such as those described in the background section of this report, pension plan experiences in the private sector may not be applicable to the federal government. To provide information on the use of multiple plans, we used the DOL research database to identify the number of private employers with two or more employees that sponsored more than one DC plan to provide benefits for the same groups of employees in 1993. We did not independently verify DOL’s criteria for identifying plans in its research database as primary versus supplementary.
To obtain insights on the factors employers may consider in deciding whether to sponsor multiple plans, we reviewed retirement-related literature and consulted with pension experts, whom we selected on the basis of prior work we had done on private sector pensions. Appendixes I and II provide more detailed information on our objectives, scope, and methodology and the results obtained, respectively. We requested comments on a draft of this report from the Secretary of Labor. These comments are discussed at the end of this letter. We did our review in Washington, D.C., from October 1996 to July 1997 in accordance with generally accepted government auditing standards. Employers that sponsor DC plans generally establish certain minimum age and/or service requirements that employees must meet before they are allowed to participate in these plans. These eligibility requirements allow employers to reduce the administrative costs associated with establishing individual accounts for employees. Flexibility to establish minimum participation requirements may be especially useful to employers facing high turnover rates. Such requirements allow employers to use their resources to benefit employees who are more likely to remain with the employer for the long term. Under the Employee Retirement Income Security Act of 1974 (ERISA), as amended, employers cannot require employees to be over age 21 or to have completed more than 1 year of service with the employer. However, an exception applies to plans where participants immediately own all employer contributions made to individual accounts—employers may require that employees complete 2 years of service to be eligible to participate in these plans. Of the 3,297 employers with 100 or more employees that sponsored only single-employer DC plans in 1993, 51 percent (1,673 employers) reported using some combination of age and length of service to determine when an employee was eligible to participate in the plan. The most common combination, which was used by 73 percent of these 1,673 employers, was that an employee must be age 21 with 1 year of service—the legal limit. For the 100 larger employers, 55 percent used only length of service to determine eligibility (with most requiring 1 year of service), while 28 percent used a combination of age and service (with 93 percent requiring an age of 21 and 1 year of service). Figure 1 shows the distribution of employers according to the type of eligibility requirement used. Although the summary plan descriptions did not provide information on the rate of participation among covered employees, we were able to determine the percentage of employees with active accounts in 1993 for 2,127 (or 65 percent) of the employers in our review using DOL’s research database. For these employers, 74 percent had a participation rate of 76 to 100 percent, 18 percent had 51 to 75 percent participation, 4 percent had 26 to 50 percent participation, and 4 percent had 25 percent participation or less. We could determine the participation rate for 72 of the larger employers. These employers showed a considerably lower rate of participation—15 percent had a participation rate of 76 to 100 percent, 24 percent had 51 to 75 percent participation, 15 percent had 26 to 50 percent participation, and 47 percent had 25 percent participation or less. A high or low rate of participation may be a reflection of whether or not employers contributed to their plans.
According to experts with whom we consulted, employers that reported 100 percent participation generally made contributions automatically to participant accounts. Newly hired federal employees covered by FERS are eligible to participate in TSP during the second open season after they are hired. Two open seasons are held each year (May 15 to July 31 and November 15 to January 31); thus, the minimum service required to participate ranges from 6 to 12 months. The government establishes and makes automatic contributions to accounts for all eligible employees covered by FERS. As of March 1997, 83 percent of eligible FERS employees made contributions to their accounts. DC plans are funded by means of employer and/or participant contributions. Employer contributions can be based on the amount a participant contributes (i.e., matching contributions), on some other criteria unrelated to participant contributions (i.e., nonmatching contributions), or on some combination of the two. For example, an employer could make matching contributions by providing $1 for each dollar a participant contributes up to a specified maximum. Alternatively, an employer could make nonmatching contributions determined on the basis of annual profitability or a specified percentage of participant compensation. In addition to employer matching and nonmatching contributions, participants may be allowed or required to contribute to their DC plan. Plans can provide for participant contributions to be made on a pretax or after-tax basis or on some combination of the two. For pretax contributions, employers generally reduce a participant’s salary by an agreed-upon amount and contribute these funds directly to the participant’s DC account, thereby allowing the participant to defer paying income taxes on this portion of their salary until the funds are withdrawn from the account, presumably at or during retirement. Regardless of the type(s) of contributions employers and/or participants make to the plan, each employer must ensure that total contributions to participant accounts do not exceed certain legal limits set by ERISA, as amended. Specifically, the annual dollar contribution limit for a DC participant account is the lesser of $30,000 or 25 percent of participant compensation. Moreover, participant pretax contributions are limited to $9,500 per year, and participant after-tax contributions may be limited to allow employers to satisfy certain nondiscrimination rules. Contributions that are calculated as a percentage of participant compensation may take into account no more than $160,000 of annual compensation. Each of the above limits is indexed to the consumer price index to adjust for changes in the cost of living over time. Of the 3,297 employers with 100 or more employees that sponsored only single-employer DC plans in 1993, 97 percent reported providing for matching and/or nonmatching contributions to the plan, and 1 percent funded the plan solely on the basis of participant contributions. For the 100 larger employers, 94 percent provided for employer contributions, and 6 percent provided only for participant contributions to the plan. Thus, most employers that sponsor only DC plans contribute towards their employees’ retirement benefits rather than require that their employees bear the entire cost. As shown in figure 2, 85 percent of the 3,297 employers provided for employer nonmatching, 43 percent for employer matching, 50 percent for participant pretax, and 16 percent for participant after-tax contributions.
For the 100 larger employers, 68 percent provided for employer nonmatching, 54 percent for employer matching, 72 percent for participant pretax, and 11 percent for participant after-tax contributions. The two most common arrangements for employer and participant contributions were to provide for (1) employer nonmatching contributions and no participant contributions (41 percent) or (2) employer matching and nonmatching contributions plus participant pretax contributions (25 percent). None of the other arrangements were used by more than 8 percent of the employers that sponsored only single-employer DC plans. The subset of larger employers was more likely to provide for participant pretax contributions to their plans than the overall group of employers in our review—72 percent versus 50 percent, respectively. The most common contribution arrangements were also somewhat different for these employers. Specifically, 26 percent of the larger employers provided for matching, nonmatching, and participant pretax contributions—the same arrangement provided under TSP for those federal employees covered by FERS; 23 percent provided for nonmatching contributions only; 18 percent provided for matching and participant pretax contributions; and 11 percent provided for nonmatching and participant pretax contributions. None of the other combinations of contributions were used by more than 6 percent of the larger employers. Table II.2 (see app. II) provides more details on the combinations of contributions that the employers specified in their SPDs. The employers that provided for matching contributions to their plans used a wide variety of matching arrangements. For example, employers applied different limits on what level of participant contributions would be eligible to receive a match, matched different amounts of each dollar contributed by participants, and sometimes provided different matching contributions for different levels or types of participant contributions. Of the 1,410 employers that provided for matching contributions, 815 employers (and 32 of the subset of larger employers) specified the level of participant contributions that they were willing to match and the amount of matching contributions that would be provided for each participant dollar contributed. Of these employers, 60 percent offered to match participant contributions of up to 5 percent of the participants’ compensation, and 40 percent offered to match participant contributions of up to 6 percent or more of compensation. Fifty-eight percent of the employers matched 50 cents or less for each eligible participant dollar contributed, and 42 percent matched more than 50 cents for each eligible participant dollar contributed. How an employer combined these two factors—the amount of participant contributions eligible to be matched and the level of matching contributions provided for each eligible participant dollar contributed—determined the maximum potential amount of an employer’s matching contributions, as the sketch below illustrates. Our analyses showed that employer matching practices were not related to participant contribution eligibility practices. That is, employers did not provide a high rate of matching contributions only when the percentage of eligible participant contributions was low, nor did they provide a low rate of matching contributions only when the percentage of eligible participant contributions was high.
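To show how these two factors combine, the sketch below computes the maximum potential employer match as a percent of compensation; the matching arrangements shown are hypothetical examples rather than plans from our sample.

    # Illustrative sketch; the matching arrangements shown are hypothetical.
    def max_match_percent_of_pay(match_rate, eligible_percent_of_pay):
        # Maximum potential employer match: the match rate per participant
        # dollar times the percent of compensation eligible for matching.
        return match_rate * eligible_percent_of_pay

    # 50 cents per dollar on contributions up to 6 percent of pay:
    print(max_match_percent_of_pay(0.50, 6.0))  # 3.0 percent of pay
    # $1 per dollar on contributions up to 5 percent of pay:
    print(max_match_percent_of_pay(1.00, 5.0))  # 5.0 percent of pay

    # A tiered arrangement can be restated as an average rate: for example,
    # $1 per dollar on the first 3 percent of pay plus 50 cents per dollar
    # on the next 2 percent yields a 4 percent maximum match, or an average
    # of 80 cents per eligible dollar for a participant contributing
    # 5 percent of pay.
    tiered_max = 1.00 * 3.0 + 0.50 * 2.0
    print(tiered_max)        # 4.0 percent of pay
    print(tiered_max / 5.0)  # 0.8, i.e., 80 cents per dollar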
Of the larger employers, 45 percent offered to match participant contributions that represented up to 5 percent of their compensation, and 55 percent offered to match participant contributions that represented 6 percent or more of compensation. Further, 62 percent of the larger employers matched $1 or more for each eligible participant dollar. In comparison, TSP provides for government matching contributions on participant contributions not exceeding 5 percent of compensation, and the government match equates to 80 cents for each eligible dollar of participant contribution, assuming participants contribute at least 5 percent of their salary. For the 2,786 employers that provided for nonmatching contributions to their plans, 45 percent specified that the dollar amount to be contributed to participant accounts would be determined on the basis of some percentage of annual profits, while another 45 percent specified that the dollar amount would be determined on the basis of some percentage of participant compensation. Seventy-nine percent of these employers did not specify in their SPDs the exact percentage that would be used to determine nonmatching contributions. Similarly, of the 68 larger employers that provided for nonmatching contributions to their plans, 45 percent determined nonmatching contributions on the basis of profits, and 36 percent, on the basis of participant compensation. Sixty-six percent of these larger employers did not specify the exact percentages used to determine nonmatching contributions. TSP provides for government nonmatching contributions equal to 1 percent of employee compensation for those employees covered by FERS. Of the 3,297 employers that sponsored only single-employer DC plans to provide pensions for their employees, 1,862 (or 56 percent) of these employers allowed participants to make contributions to their plans. Of these 1,862 employers, 71 percent allowed participants to contribute only on a pretax basis, 18 percent on either a pretax or an after-tax basis, and 11 percent only on an after-tax basis. Seventy-three of the 100 larger employers provided for participant contributions—85 percent of these 73 employers allowed pretax contributions, and 15 percent allowed both pretax and after-tax contributions. Of the 1,656 employers that provided for pretax contributions (and thus allowed participants to shelter a portion of their income from current taxation as well as accumulate savings for retirement), 51 percent allowed participants to contribute more than 10 percent of their annual compensation to the plan (not to exceed the Internal Revenue Service limit). Similarly, 60 percent of the 73 larger employers that provided for pretax contributions allowed participants to contribute more than 10 percent of their annual compensation to the plan. Federal employees covered by FERS are allowed to contribute up to 10 percent of their basic pay on a pretax basis to TSP, up to the current legal maximum of $9,500. Figure 3 shows the maximum participant pretax contributions allowed for plans sponsored by employers that only offer single-employer DC plans. Of the 3,297 employers included in our review, 883 (or 27 percent) of the employers included enough information in their SPDs to allow us to calculate the maximum potential cost, or liability, of making employer contributions—matching, nonmatching, or both—to their plans. Thus, our results regarding employer liability for contributions are not generalizable to all the employers included in our review.
Of the 883 employers, 59 percent had a liability of up to 5 percent of participant compensation, and 41 percent had a liability of 6 percent or more of participant compensation (with the greatest liability being 18 percent). An employer's actual liability for contributions may be less than the potential maximum in any given year for a variety of reasons, including when participants do not contribute enough to maximize employer matching contributions or when employers elect to contribute less than the maximum allowable nonmatching contributions in any given year. Moreover, an employer's effective cost of contributions cannot be determined without knowing what portion of those contributions may have been deducted from the employer's corporate taxes in any given year. We could determine the maximum employer liability for contributions for 38 of the 100 larger employers. Of these 38 employers, 76 percent had a liability of up to 5 percent of participant compensation, and 24 percent had a liability of 6 percent or more of participant compensation. In comparison, the government's maximum potential liability for contributions for employees covered by FERS is 5 percent of compensation—consisting of up to 4 percent in matching contributions plus 1 percent in nonmatching contributions. In 1996, government agencies contributed about $2 billion to FERS employees' TSP accounts. Participants of DC plans accrue the right to pension benefits, or become "vested," by meeting certain requirements established by employers. By law, participants are always fully vested in any pretax or after-tax contributions that they make to their accounts, whether these contributions are voluntary or mandatory. Thus, employer vesting requirements apply only to the participant's right to employer matching and/or nonmatching contributions. ERISA, as amended, requires that participants become fully vested in 100 percent of employer contributions to their accounts (1) within 5 years if the employer uses "cliff" vesting, where no rights to benefits are earned in prior years, or (2) within 7 years if the employer uses "graduated" vesting, where rights to benefits are earned gradually beginning no later than the third year of service; a sketch of these two schedules appears at the end of this discussion. Employers may also use more liberal vesting requirements if they choose. For example, immediate vesting occurs when an employer sets no vesting requirements. Employers can use vesting schedules to reduce the cost of providing pension benefits to employees who do not remain with an employer for at least 5 to 7 years. Of the 1,410 employers with 100 or more employees that sponsored only single-employer DC plans in 1993 and made matching contributions to their plans, 1,374 employers specified vesting requirements in their SPDs. Of these 1,374 employers, 56 percent reported using graduated vesting; 35 percent, immediate vesting; and 9 percent, cliff vesting. Larger employers were more likely to use immediate vesting—52 percent used immediate vesting, 26 percent used graduated vesting, and 22 percent used cliff vesting. Of the 2,786 employers that made nonmatching contributions to their plans, 2,755 employers specified vesting requirements in their SPDs. Of these 2,755 employers, 70 percent used graduated vesting, about 17 percent used cliff vesting, and 13 percent used immediate vesting. For the larger employers, about 51 percent used graduated vesting, 26 percent used cliff vesting, and 23 percent used immediate vesting.
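The Python sketch below encodes the two statutory outer-limit schedules described above; the 20-percent-per-year steps shown for graduated vesting are one common pattern consistent with the 3-to-7-year requirement, and the function name is an illustrative assumption.

```python
def vested_fraction(years_of_service, schedule="graduated"):
    # Fraction of employer contributions a participant owns under the
    # two ERISA outer-limit schedules described above: 5-year "cliff"
    # vesting, or "graduated" vesting that begins no later than year 3
    # and is complete by year 7 (shown here as 20 percent per year).
    if schedule == "cliff":
        return 1.0 if years_of_service >= 5 else 0.0
    return min(1.0, max(0.0, (years_of_service - 2) * 0.20))

for years in (2, 4, 6):
    print(years, vested_fraction(years, "cliff"), vested_fraction(years))
# 2 0.0 0.0 -- nothing vested yet under either schedule
# 4 0.0 0.4 -- graduated vesting has begun; cliff has not
# 6 1.0 0.8 -- cliff is complete; graduated completes at year 7
```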
Regardless of the type of vesting schedule used, employers generally used more liberal schedules for matching contributions compared with nonmatching contributions. Forty-four percent of the 1,410 employers that provided for matching contributions specified that participants became fully vested in these contributions within 4 years, while 22 percent of the 2,786 employers that provided for nonmatching contributions specified that participants became fully vested in these contributions within the same time period. Similarly, 57 percent of the subset of larger employers provided for employees to become fully vested in any matching contributions within 4 years, compared with 25 percent for nonmatching contributions within the same time period. Federal employees covered by FERS are immediately vested in any matching contributions, and most become fully vested in the automatic 1 percent nonmatching contributions after completing 3 years of service. Figure 4 shows the length of time to full vesting for employer matching and nonmatching contributions for plans sponsored by employers that only offer single-employer DC plans. Using DOL's research database, we were able to determine the proportion of participants who were fully vested in 1993 for 2,825 (or 86 percent) of the employers in our review. For these employers, more than half of the current employees were fully vested for 62 percent of the plans, while half or less of the employees were fully vested for 38 percent of the plans. For the 83 larger employers for which we were able to determine the percentage of fully vested participants, the proportions of plans with more than half and with half or less of participants fully vested were virtually the same as for all employers—63 percent and 37 percent, respectively. By law, employers that sponsor DC plans can either direct the investment of employer and participant contributions made to the plan or allow participants to direct the investment of their own accounts. Employers that provide for "participant-directed" accounts in their plans must meet certain DOL regulations to insulate themselves from liability for any losses that result from a participant's exercise of investment control. Specifically, employers that allow participants to direct their own accounts must offer a broad range of investment alternatives, consisting of at least three diversified investment alternatives, each having different risk and return characteristics. Moreover, participants must be allowed to change their investment decisions at least once in every 3-month period. Employers must also provide participants with descriptive information on each investment option, including risk and return characteristics, transaction fees and expenses, and copies of prospectuses. A considerable portion of the employers with 100 or more employees that sponsored only single-employer DC plans in 1993 did not specify in their SPDs whether participants could direct the investment of contributions made to their accounts, as shown in figure 5. As also shown, the larger employers were more likely to specify who could direct the investment of each type of contribution. For those employers that did report on who could direct the investment of account assets, participants were more frequently allowed to direct the investment of all but nonmatching contributions made to their accounts. The larger employers were more likely to allow participants to direct the investment of all types of contributions made to their accounts.
Of the employers that specified participants could direct the investment of their accounts and specified the number of investment options in their SPDs, the majority provided participants with at least four investment options from which to choose. About half of the employers did not list the specific investment choices available to participants in their SPDs; however, investments commonly listed included employer stock, stock mutual funds, bond mutual funds, balanced funds consisting of both stocks and bonds, guaranteed investment contracts providing a fixed interest rate through an insurance company, U.S. government securities, and money market investments consisting of short-term securities. Although we could not determine the proportion of participants who actually selected each available investment option, we were able to determine the proportion of total plan assets invested in (1) stocks and (2) bonds in 1993 for approximately three-fourths of the employers in our review using DOL’s research database. For these employers, 81 percent had 25 percent or less of their total plan assets invested in stocks (not including an employer’s own company stock) and 95 percent had 25 percent or less of their total plan assets invested in bonds. For the larger employers, 93 percent had 25 percent or less of their total plan assets invested in stocks, and 100 percent had 25 percent or less invested in bonds. We were unable to determine what proportion of these investments resulted from employer contributions, participant contributions, and market gains or losses over time. Regardless of how plan assets are invested, employers must receive, hold, and transmit plan assets up to the time the assets are withdrawn by plan participants. Employers generally do so using a trust fund, an insurance account, or a combination of the two. Using the DOL research database, we were able to determine how 2,831 (or 86 percent) of the employers in our review managed their participant accounts in 1993—70 percent of these employers used trust funds, 20 percent used a combination of trust funds and insurance accounts, and 10 percent used insurance accounts. The larger employers also used these same methods in approximately the same proportions. Federal employees who participate in TSP may direct the investment of their accounts using three investment funds—the “G fund” that is invested in short-term nonmarketable U.S. Treasury securities, the “C fund” that is invested in the stock of the same 500 companies selected by the Standard & Poor’s Corporation for its S&P 500 index, and the “F fund” that is invested in U.S. government, corporate, and mortgage-backed securities. In 2 to 3 years, participants will have two new funds—a small company stock fund and an international fund—which will bring the number of investment options up to five funds. Participant accounts are managed by the Federal Retirement Thrift Investment Board, an independent government agency tasked with managing TSP prudently and solely in the interest of participants and their beneficiaries. Employers that sponsor DC plans can permit participants to access some portion of their account balances while they are still actively employed using loan, voluntary withdrawal, and/or hardship withdrawal provisions. Employers can include loan provisions that permit participants to borrow a portion of their vested account balances and repay this amount in level payments over a specified number of years at a specified interest rate. 
ERISA, as amended, generally limits a participant's outstanding loan balance to the lesser of $50,000 or 50 percent of the participant's vested account balance. Employers can also allow participants to make voluntary withdrawals from their after-tax contributions and/or employer contributions (and the earnings on these contributions) without requiring that the funds be repaid; however, a 10 percent tax penalty applies to most of these distributions if they are made before age 59½. Some employers specify that participants will face additional penalties for making a voluntary withdrawal, such as losing the right to make contributions to the plan for 1 year. An employer can allow participants to make a hardship withdrawal from their pretax contributions (but not the earnings on those contributions) to meet immediate and heavy financial needs for which no other resources are available. Needs that meet the legal definition of a hardship include medical expenses, purchase of a principal residence, tuition for postsecondary education, and prevention of eviction from, or foreclosure on, a principal residence. Nearly two-thirds of the 3,297 employers reported providing plan participants access to a portion of their account balances prior to separation from employment. Figure 6 shows the percentage of employers with 100 or more employees that sponsored only single-employer DC plans in 1993 and provided for each type of participant access to their accounts. Of the 3,297 employers included in our review, 46 percent specified that participants could borrow from their accounts, 43 percent provided for hardship withdrawals, and 21 percent specified that participants could make voluntary withdrawals from their accounts. Of the larger employers, 57 percent provided for loans, 58 percent provided for hardship withdrawals, and 15 percent provided for voluntary withdrawals. For those 1,530 employers that included a loan feature in their DC plans, 11 percent reported allowing participants to borrow from their accounts for any reason, while the remaining employers allowed participants to (1) borrow from their accounts only for specified purposes or (2) submit an application to the employer for approval. ERISA, as amended, allows employers to set a minimum loan amount of up to $1,000; however, 47 percent of the employers either did not specify a minimum loan amount in their SPDs or specified a minimum amount that was less than $1,000. Moreover, only 27 percent of the employers specified that participants were limited to one outstanding loan at any given time. The vast majority of employers (98 percent) allowed participants to borrow up to the legal limit of $50,000 (or 50 percent of their vested account balance, if less). Although the 57 larger employers that provided for loans were more likely to allow participants to borrow from their accounts for any reason (35 percent versus 11 percent for all employers), the other components of their loan programs were generally comparable to the group of employers as a whole. Using DOL's research database, we were able to determine the proportion of total plan assets represented by outstanding participant loans for 787 (or 51 percent) of the employers that allowed participants to borrow from their accounts.
For 88 percent of these employers, participant loans represented 5 percent or less of the plan's total assets, which suggests that the majority of plan contributions were being held and invested for retirement rather than being tapped by participants for preretirement spending. An even greater percentage—94 percent—of the 32 larger employers for which we could determine this information had outstanding participant loans of 5 percent or less of total plan assets. For those 1,419 employers that allowed participants to make hardship withdrawals from their accounts, 86 percent specified neither limits on the number of hardship withdrawals that could be made in 1 year nor a required minimum amount that must be withdrawn. Similarly, 79 percent of the 58 larger employers that provided for hardship withdrawals did not specify such restrictions in their summary plan descriptions. However, 77 percent of the employers (and 79 percent of the larger employers) specified some form of penalty for participants who made a hardship withdrawal from their accounts—the most common penalty being the suspension of a participant's right to contribute to the plan for some period of time. Thus, although participants may not be able to control their need to make a hardship withdrawal from their accounts in all circumstances, employer penalties may discourage participants from tapping their accounts prior to retirement. For those 709 employers that allowed participants to make voluntary withdrawals from their accounts, 20 percent limited participants to one such withdrawal per year and 13 percent specified that participants must withdraw some minimum amount ranging from $100 to $500. Moreover, one-third of these employers required participants to meet certain age and service requirements before they could make a voluntary withdrawal from their accounts. For example, the most common requirement was that participants must be at least age 55 and have 10 years of service with the employer. Twenty-one percent of the employers penalized participants who made voluntary withdrawals from their accounts, for example, by suspending a participant's right to contribute to the plan for 12 months. For plans that provided for employer matching contributions, such a penalty also suspended a participant's ability to receive matching contributions over the same time period. The 15 larger employers that provided for voluntary withdrawals were more likely to set a minimum amount that participants must withdraw; however, other limits and restrictions were generally comparable to those provided for by all the employers included in our review. All of the above limits, restrictions, and penalties can reduce an employer's administrative burden for allowing participants to make voluntary withdrawals from their accounts as well as encourage participants to preserve their accounts for use in retirement. The federal TSP includes a loan program, which allows participants to borrow from their own contributions (and earnings on those contributions) for any reason. The federal program sets the minimum loan amount at $1,000, allows two loans outstanding at any one time, and imposes the same $50,000 maximum that applies to private sector plans. The federal program also provides for hardship withdrawals, and participants who are at least age 59½ may make a one-time voluntary withdrawal from their accounts while they are still in federal service.
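A minimal Python sketch of the general loan limit described above follows; the function name and example balances are illustrative assumptions, and refinements in the statute beyond the lesser-of rule are not modeled.

```python
def max_loan(vested_balance, statutory_cap=50_000):
    # General limit described above: the lesser of $50,000 or
    # 50 percent of the participant's vested account balance.
    return min(statutory_cap, 0.5 * vested_balance)

print(max_loan(40_000))   # 20000.0 -- half the vested balance binds
print(max_loan(150_000))  # 50000   -- the $50,000 cap binds

# As noted above, TSP additionally sets a $1,000 minimum loan amount
# and allows at most two loans outstanding at any one time.
```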
Employers that sponsor DC plans can allow participants to receive their pension benefits from their individual accounts in a variety of ways at retirement—generally as a lump-sum distribution or an annuity. For a lump-sum distribution, employers disburse a participant's entire account within 1 taxable year. For an annuity, the participant receives regular payments for the participant's remaining lifetime. Employers that offer annuities must also offer a joint and survivor annuity that provides a surviving spouse with at least one-half the amount of the participant's benefits. Employers generally pay for the additional survivor benefits by reducing the participant's monthly benefit. In addition to a lump sum or annuity, some employers allow participants to withdraw their accounts using installment payments that deplete the account over a period of time that can be specified by the employer or the participant. Employers are currently required to begin disbursements from pension accounts no later than April 1 of the year following the year participants turn age 70½. Of the 3,297 employers with 100 or more employees that sponsored only single-employer DC plans in 1993, 92 percent reported providing for lump-sum distributions, 67 percent for installment payments, and 47 percent for annuities when participants separated from the employer at retirement. Of the 100 larger employers, 97 percent provided for lump-sum distributions, 52 percent for installment payments, and 32 percent for annuities when participants retired. Although the majority of employers provided more than one withdrawal option in their plans, less than one-third of the plans specified that participants could opt to withdraw their accounts using a combination of withdrawal options—for example, by taking a portion of their account as a lump-sum withdrawal and purchasing an annuity with the remainder of their account balance. Limiting participants to one withdrawal option may allow employers to control their administrative costs. Figure 7 shows the combinations of withdrawal options provided for by employers that sponsor only single-employer DC plans. For those participants who separate from an employer for reasons other than retirement, employers generally provided for the same withdrawal options as those available at retirement. However, the majority of the employers allowed participants to defer making a withdrawal from their accounts until a later date, thus providing them an option to avoid the 10 percent tax penalty assessed on pension assets withdrawn prior to age 59½. The ability to maintain vested account balances with a prior employer or to "roll over" funds to either a special individual retirement account or a new employer's retirement plan reflects the portability of DC plans. ERISA, as amended, allows employers to unilaterally cash out a participant account if the balance is $5,000 or less—72 percent of the employers (and 80 percent of the larger employers) specified that participants with small account balances would be required to receive a lump-sum distribution of their accounts. Under TSP, participants can choose to withdraw their accounts as a lump-sum distribution, an annuity, or regular monthly installment payments. For married participants who withdraw their accounts as a lump sum or monthly installment payments, their spouses must first waive their rights to a 50-percent joint life annuity.
Participants with vested account balances of $3,500 or less are to be automatically cashed out unless participants select another withdrawal option or elect to leave the funds in the plan. Participants may defer receiving any immediate withdrawals from their accounts until April 1 of the year following the year they turn 70½—the legal limit for pension deferral. According to DOL's research database, in 1993, 12 percent of the approximately 490,000 employers that sponsored only single-employer DC plans covering 2 or more participants sponsored more than one DC plan for the same group of employees. An employer may sponsor multiple plans to provide primary benefits to different groups of employees, primary and supplementary benefits to the same group of employees, or a combination of both. It is important to note that employers that sponsor supplementary pension plans do not necessarily offer more comprehensive or "generous" retirement benefits than employers that offer only a primary plan. Administrative costs are generally insignificant compared with the cost of employer contributions, and pension programs with either one or multiple plans can be designed to result in the same total cost to an employer. The proportion of employers that sponsored multiple plans covering the same groups of employees was fairly consistent across different employer size categories—12 percent of employers with fewer than 100 employees sponsored supplementary plans, as compared with 9 percent of employers with 100 to 9,999 employees and 14 percent of employers with 10,000 or more employees. One expert with whom we consulted suggested that a greater proportion of larger employers may sponsor supplementary plans, because these employers are more likely to have initially sponsored plans that were later supplemented with a 401(k) plan to compete with other employers. This same expert also noted that smaller employers were better able to cope with managing multiple plans during the period before computer technology was readily available. The proportion of employers that sponsored multiple plans covering the same groups of employees was also fairly evenly distributed across different industry categories, although employers in the services industry were about 30 percent more likely to sponsor supplementary plans, on average. Employers in the mining, communications, and utilities industries as well as tax-exempt employers were the least likely to sponsor supplementary plans. Appendix III provides more detailed information on the number of employers that sponsored primary and supplementary plans in 1993, stratified by employer size and industry group. Pension experts with whom we consulted and pension-related literature suggested various factors that may explain why some employers might choose to offer more than one pension plan to their employees. According to these sources, employers that sponsor pension plans are primarily concerned with controlling benefit costs, maximizing the federal tax incentives for providing pensions, and meeting the legal requirements of ERISA, as amended. Employers must also design their compensation and benefit packages to support their overall business and financial goals.
For example, employers may use multiple pension plans to (1) recruit and retain certain groups of employees while also satisfying longer-tenured employees, (2) enhance productivity and employee morale, (3) reduce pension liabilities by shifting a portion of pension contributions to employees, and/or (4) link compensation to performance for higher paid employees, as described in more detail below. These sources also said that computer technology has made it possible for more employers to manage multiple pension plans than would have been practical using only paper records. As a result, employers may choose to sponsor multiple plans that provide different combinations of pay and benefits to different groups of employees. According to the experts, employers must meet industry benefit standards to remain competitive in attracting new employees and encouraging those employees to stay with the company. By providing more than one pension plan, employers can encourage career employment while meeting the needs of younger, more mobile workers with desired skills. For example, younger and more mobile workers may demand a 401(k) plan in their benefits packages to allow them to build up retirement benefits that are fully portable should they change jobs. On the other hand, a basic annuity (or "defined benefit") plan may provide better benefits for employees who remain with an employer for the long term. By offering a pension program consisting of both these types of plans, employers can satisfy the needs of both groups of employees. Employers may sponsor employee stock ownership plans (ESOP) and profit sharing plans for a variety of reasons, only one of which is to provide employees with primary or supplementary retirement benefits. These plans can enhance employee productivity by increasing employee identification with the company and providing a more direct incentive for improved job performance. These plans also give employers the option of whether to contribute to the plan in any given year, depending on company profitability. For an ESOP, other benefits can include creating a more liquid market for closely held stock, raising new capital, obtaining financing at below-market interest rates, and sheltering profits from corporate income taxes. New retirement programs that include one or more DC plans may reflect a cultural shift away from benefits paid solely by the employer towards a partnership relationship between the employer and employees. By offering a combination of pension plans, employers allow employees to influence their own level of benefits according to their participation and investment choices. Employers that offer a combination of a basic annuity and a DC plan can guarantee a certain minimum level of retirement benefits, while employees can choose the extent to which they participate in supplementary plans to increase their potential benefits upon retirement. Employers can increase the benefits available to their senior executives and other highly paid employees by sponsoring supplementary plans that are not covered by ERISA and benefit only selected groups of employees. Although these "nonqualified" plans are not accorded preferential tax treatment, and therefore do not provide employers with tax benefits, employers can use them to motivate executives by linking the amount of benefits to some measurable level of performance, such as total sales.
Nonqualified plans also allow employers to provide higher-paid employees with the same retirement income replacement rate as lower-paid employees, while still complying with ERISA's nondiscrimination regulations and annual limits on employee contributions to the employer's other qualified pension plans. We requested comments on a draft of this report from the Secretary of Labor. In a letter dated October 28, 1997, the Assistant Secretary of Labor for Pension and Welfare Benefits provided Labor's comments. (See app. IV.) DOL provided no substantive comments; however, it made one technical comment regarding the fact that nonqualified plans are not accorded preferential tax treatment. We clarified the report to reflect this comment. As agreed with the Subcommittee, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. We will then send copies of this report to the Ranking Minority Member of the Subcommittee, the Chairman and Ranking Minority Member of the Senate Governmental Affairs Committee, the Secretary of Labor, and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix V. If you have any questions, please call me at (202) 512-8676. The Chairman, Subcommittee on Civil Service, House Committee on Government Reform and Oversight, asked us to provide information on the use of defined contribution (DC) plans in the private sector. He said that such information would assist congressional decisionmakers as they consider whether to design a retirement system for new federal hires. Among the information requested was an analysis of the features of private sector DC plans. This review was undertaken in response to that part of the request. The objective of our review was to determine, for employers that sponsored only DC plans, the eligibility requirements for employee participation, arrangements for employer and participant contributions, eligibility requirements for employee rights to accrued benefits, employee investment options, loan and other provisions for participant access to plan assets while still employed, and options for withdrawal of benefits upon separation or retirement. To address an additional interest of the requester, we also determined the number of employers that sponsored more than one DC plan to provide retirement benefits to the same groups of employees, and their potential reasons for doing so. To accomplish the first objective, we reviewed Summary Plan Descriptions (SPD)—documents describing the terms and conditions of pension plans that all private employers were to file with the Department of Labor (DOL) for each pension plan they sponsored—for a random stratified sample of private sector employers that sponsored only DC plans to supplement their employees' Social Security. We also included additional plan information that was available from the DOL research database, which we used to draw our sample of employers. During our early design work, DOL officials told us that the Department did not monitor or enforce employer compliance with the SPD filing requirement because of limited resources and other competing priorities. For this reason, we identified a preliminary sample of employers and mailed written requests for the employers' SPDs.
On the basis of this initial request, we determined that it would be difficult to obtain SPDs from employers with fewer than 100 employees, because these employers had a very high nonresponse rate to our request. Moreover, DOL officials told us that smaller employers were also much less likely to file an SPD with DOL, compared with larger employers. Therefore, as agreed with the Subcommittee, we limited the review of SPDs to employers with 100 or more employees. It is important to note that the data shown in this report may reflect only part of each sampled employer's retirement benefits program, because they do not include information on (1) additional DC plans that some employers offer or (2) Social Security benefits that most workers will qualify for upon retirement. To select our nationwide sample from employers with 100 or more employees that sponsored only single-employer DC plans, we used the 1993 research database of computerized Internal Revenue Service (IRS) Form 5500 reports maintained by DOL's Pension and Welfare Benefits Administration—the most recent data available when we designed our review. Under the Employee Retirement Income Security Act of 1974, private employers must annually file a separate Form 5500 report with the IRS for each of their pension plans. Each report is to contain financial, participant, and actuarial data. We did not independently verify the accuracy of the DOL research database. However, IRS edits the reports by checking addition and consistency of financial and other record items and corresponds with filers to obtain corrected data before providing the computerized data to DOL. DOL further edits the Form 5500 data to identify problems, such as truncated or incorrect entries, before constructing its research database, which consists of (1) all plans with 100 or more participants for which a Form 5500 was filed and (2) a 10-percent sample that is weighted to represent the universe of all plans with fewer than 100 participants. According to the DOL research database, approximately 490,000 employers with 2 or more employees sponsored only single-employer DC plans in 1993. We excluded employers that (1) had fewer than 100 employees, (2) sponsored only multiemployer DC plans, because of incomplete data, or (3) sponsored only DC plans that were either terminated or consolidated with another plan during 1993, because we wanted to review plans that had the greatest probability of still being in existence at the time we issued a report. Finally, we excluded those employers that offered only DC plans for which we could not determine the number of employees from the 5500 reports. We selected a sample of 419 employers from the remaining universe of 3,297 employers. Because we randomly selected the sample of private employers that sponsor only DC plans, the results are subject to some uncertainty or sampling error. The sampling error consists of two parts: confidence levels and ranges. The confidence level indicates the degree of confidence that can be placed in the estimates derived from the sample. The range is the upper and lower limits between which the actual universe estimates may be found. Our sample was designed so that the sampling error would not be greater than 10 percent at the 95-percent confidence level; however, where we further subdivided the sample along particular groups (e.g., employers that provided for matching contributions), the resulting number of employers was too small to meet this criterion.
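For illustration, the Python sketch below approximates the sampling error (the half-width of a 95-percent confidence interval) for an estimated proportion under simple random sampling with a finite-population correction; the stratified design used in this review would require stratum weights, so the function and figures here are simplifying assumptions.

```python
import math

def sampling_error(p_hat, n, N, z=1.96):
    # Approximate 95-percent sampling error for an estimated
    # proportion p_hat from a simple random sample of n drawn from
    # a universe of N, with a finite-population correction.
    fpc = (N - n) / (N - 1)
    return z * math.sqrt(fpc * p_hat * (1 - p_hat) / n)

# Roughly this review's situation: 281 respondents from a universe
# of 3,297 employers, estimating a proportion near 50 percent.
print(round(sampling_error(0.5, 281, 3_297), 3))  # about 0.056
```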
In the letter portion of this report, we indicate when the sampling errors are greater than 10 percent; these sampling errors are also at the 95-percent confidence level. In appendix II, which provides the detailed results of our analyses, we do not provide individual sample errors, because the number of such individual estimates would be prohibitive. We stratified our sample according to employer size using three categories—100 to 999; 1,000 to 9,999; and 10,000 or more employees. We included in our sample all the employers that had 10,000 or more employees, because these employers may provide a more relevant comparison with the federal government. Table I.1 shows the distribution of the employers from which we selected our sample and the 419 employers selected according to employer size. For the 419 employers in our sample, we (1) identified the unique plan number for each employer's primary DC plan using the DOL research database and (2) sent each employer a letter requesting that an SPD for the identified plan be provided to us. For those employers that sponsored more than one such plan (each covering different groups of employees), we requested information on the employer's largest primary plan. For 50 of the employers, our letters were returned as undeliverable. From the remaining 369 employers, we received 138 SPDs. Forty-four employers provided us with a different plan than the one we requested; however, the plans provided appeared to be primary DC plans and were included in our review. To obtain the 281 SPDs that we did not receive directly from employers, we requested that DOL provide us copies from its files. DOL provided 143 of the requested SPDs and told us that 138 of the SPDs were not available, because the employers never filed a copy as required by law. Overall, we were able to obtain SPDs for 281 of the 419 employers in our sample, for a response rate of 67 percent. We were unable to determine if the SPDs provided by employers or DOL reflected the most current information available. Table I.2 shows the disposition of each of the 419 employers in our sample by employer size. To examine the extent to which our results were generalizable, we compared the 281 employers for which we obtained an SPD to the universe of employers with 100 or more employees that offered only single-employer DC plans on the basis of employer size, industry type, and geographic region. The results of these analyses showed that the sample respondents were generally comparable to employers in our universe for these characteristics. To identify and summarize the general characteristics of the 281 DC plans for which we obtained an SPD, we developed a detailed data collection instrument to allow information from each SPD to be recorded in a consistent and standardized way. Each SPD was reviewed twice—the second review was completed by a more experienced analyst to provide 100 percent verification of the information collected. We did not independently verify the accuracy of the information described in the SPDs. We entered the information from the data collection instruments into a database to determine the frequency of the data elements and reviewed our results for patterns and relationships. To identify information on the federal Thrift Savings Plan (TSP), we reviewed various publications that we obtained from the Federal Retirement Thrift Investment Board.
We included this information in our report to provide a general basis for comparison between the government's existing DC plan and those plans sponsored in the private sector. To supplement the data available from the SPDs, we obtained additional plan information from DOL's research database—the same database from which we selected our sample. We used this information to analyze the rate at which eligible employees chose to participate in the plans, the employers' methods of managing accounts, the proportion of participants that were fully vested, and the percentage of plan assets invested in employer stock, stocks, bonds, and outstanding participant loans. We were able to supplement only those SPDs that described plans included in the 1993 research database. To address part of the second objective, which was to identify the number of employers that sponsored more than one DC plan covering the same groups of employees, we used the 1993 DOL research database and included employers with two or more employees. To determine why employers might decide to sponsor multiple plans, the remainder of the second objective, we (1) reviewed retirement-related literature that we identified using an on-line business periodical system and (2) consulted with experts in the field of pensions. The experts with whom we consulted were Mr. Ray Schmitt, Specialist, and Ms. Carolyn Merck, Specialist, the Congressional Research Service; Mr. Dallas Salisbury, President, the Employee Benefit Research Institute; Mr. Richard R. Joss, Resource Actuary, Watson Wyatt Worldwide; and Ms. Martha Priddy Patterson, Director of Employee Benefits, Policy and Analysis, KPMG Peat Marwick. We selected these individuals because we had identified them as experts during prior work we had done on private sector pension issues. To identify whether plans in its research database provided primary versus supplementary benefits, DOL used a set of assumptions, which it validated for a small sample of plans. Employers that sponsored only one plan, by definition, sponsored a primary plan. For employers that sponsored more than one plan, DOL's assumptions identified (1) multiple DC plans of the same type as primary plans covering different groups of employees and (2) multiple DC plans of different types as primary and supplementary plans covering the same group of employees, with the largest one being the primary plan. We did not independently verify the accuracy of DOL's criteria for identifying the primary versus supplementary status of plans. According to DOL officials, their validation analyses indicated that their assumptions were not accurate in all cases; however, they appeared to be valid for the large majority of situations where employers sponsor more than one pension plan. We obtained and analyzed the sample of SPDs and completed the review of multiple plans between October 1996 and July 1997 in accordance with generally accepted government auditing standards.
Table II.1: Number and Percent of Employers That Sponsor Only Single-Employer Primary Defined Contribution Pension Plans by Participation Requirement and Employer Size (1993)
Note: Because 47 employers with 100 or more employees were silent regarding a length-of-service requirement but explicitly specified that employees must meet a particular age requirement, we included these employers under the "none" rather than the "not specified" category.
Table II.3: Number and Percent of Employers That Sponsor Only Single-Employer Primary Defined Contribution Pension Plans by Basis for Making Nonmatching Contributions and Employer Size (1993)
Bases for making nonmatching contributions include some percent of profits allocated by participant compensation and some percent of profits allocated by participant contributions.
[Table: Maximum employer nonmatching contributions expressed (as a percent of participant salary)]
[Table: Maximum participant contribution eligible for employer matching (percent of salary)]
Note 1: Of the 3,297 employers with 100 or more employees in our study, 1,410 employers provided matching contributions to the primary plan. We could only determine the maximum participant contributions eligible for employer matching contributions for 816 of these 1,410 employers, because the summary plan descriptions for the remaining 594 employers did not contain this information.
Note 2: Due to rounding, numbers and percentages do not always add to the total.
[Table: Maximum level of employer matching of eligible participant contributions]
Note 1: Of the 3,297 employers with 100 or more employees in our study, 1,410 employers provided matching contributions to the primary plan. We could only determine the maximum level of employer matching of eligible participant contributions for 815 of these 1,410 employers, because the summary plan descriptions for the remaining 595 employers did not contain this information.
Note 2: Due to rounding, numbers and percentages do not always add to the total.
[Table: Maximum potential employer cost of contributions (percent of salary), including the category "employer does not contribute to plan"]
Note 1: Of the 3,297 employers with 100 or more employees in our study, 3,205 employers specified that they made matching and/or nonmatching contributions to the primary plan. Of these 3,205 employers, we could only determine the maximum potential employer cost of contributions for 883 employers, because the summary plan descriptions for the remaining 2,322 employers did not contain this information.
Note 2: Due to rounding, numbers and percentages do not always add to the total.
[Table: Maximum rate of participant contribution allowed (percent of salary)]
Table II.9: Number and Percent of Employers That Sponsor Only Single-Employer Primary Defined Contribution Pension Plans by Type of Vesting for Matching and Nonmatching Contributions and Employer Size (1993)
Note 1: For defined contribution plans, the term "vesting" refers to a participant's ownership rights to employer contributions made to his or her individual plan account (and the earnings that accrue from those contributions) even if the participant separates from the employer. Participants are always fully vested in any pretax or after-tax contributions that they make to the plan.
Note 2: Due to rounding, numbers and percentages do not always add to the total.
[The same two notes apply to the companion vesting table.]
Table II.12: Number and Percent of Employers That Sponsor Only Single-Employer Primary Defined Contribution Pension Plans by Type of Investment Options for Participant and Employer Contributions and Employer Size (1993)
Table II.13: Number and Percent of Employers That Sponsor Only Single-Employer Primary Defined Contribution Pension Plans by Type of Participant Access to Account Assets and Employer Size (1993)
Note: Hardship withdrawal provisions allow participants to withdraw a portion of their own contributions to the plan if they suffer a financial hardship, as defined by the IRS. Voluntary withdrawal provisions allow participants to withdraw a portion of their after-tax contributions and/or employer contributions for any reason and without having to repay their accounts.
Note 1: This table includes those employers that sponsored only single-employer DC pension plans covering two or more participants.
Note 1 (companion table): This table includes those employers that only sponsored single-employer DC pension plans.
Note 2 (both tables): The database that we analyzed categorizes pension plans as primary or supplementary using a set of criteria established by DOL. According to these criteria, employers that sponsored only one plan, by definition, sponsored a primary plan. For employers that sponsored more than one plan, DOL's assumptions identified (1) multiple DC plans of the same type as primary plans covering different groups of employees and (2) multiple DC plans of different types as primary and supplementary plans covering the same group of employees, with the largest one being the primary plan.
Major Contributors to This Report
Margaret T. Wrightson, Assistant Director, Federal Management and Workforce Issues
James A. Bell, Assistant Director
Jennifer S. Cruise, Evaluator-in-Charge
Gregory H. Wilmoth, Senior Social Science Analyst
George H. Quinn, Jr., Computer Specialist
Ernestine B. Burt, Issue Area Assistant
Carol B. Quick, Intern
Pursuant to a congressional request, GAO reviewed the general features of defined contribution (DC) plans in the private sector, focusing on: (1) eligibility requirements for employee participation; (2) arrangements for employer and participant contributions; (3) eligibility requirements for employee rights to accrued benefits; (4) employee investment options; (5) loan and other provisions for participant access to plan assets while still employed; (6) options for withdrawal of benefits upon separation or retirement; (7) the same six features for the federal Thrift Savings Plan; and (8) a summary of the explanations provided in retirement literature and by pension experts on why employers might decide to sponsor more than one pension plan for the same groups of employees. GAO noted that: (1) the designs of DC plans for the 3,297 employers with 100 or more employees that sponsored only single-employer plans in 1993 varied greatly with respect to eligibility requirements, contribution arrangements, accrual of benefits, investment options, loan provisions, and withdrawal options so that no single plan design could be identified as representing a typical DC plan; (2) the employers reported that they generally established eligibility requirements that their employees must satisfy to participate in their plans; (3) ninety-seven percent of the 3,297 employers provided for employer contributions to the plan rather than requiring participants to fully fund their own pensions; (4) employers generally did not include enough information in their summary plan descriptions to allow GAO to determine the maximum potential cost, or liability, of making employer contributions, expressed as a percentage of compensation; (5) although by law participants have always owned their own contributions (and earnings on those contributions) to DC plans, employers have often established minimum service requirements that participants were required to meet before they could own, or become vested in, employer contributions to the plan; (6) the employers used vesting requirements that generally required fewer years of service for employees to own matching contributions, as compared with nonmatching contributions; (7) a significant portion of the employers did not specify in their summary plan descriptions whether participants could direct how the contributions made to their accounts were invested, although the subset of larger employers was more likely to so specify; (8) nearly two-thirds of the employers reported providing plan participants access to a portion of their account balances prior to separation from employment; (9) nearly all the employers allowed participants to take their account balances as a lump-sum distribution when they retired, while two-thirds allowed participants to withdraw their accounts in even installment payments over a specified period of time, and nearly half provided for an annuity that would produce a regular monthly payment for the rest of the participant's life; and (10) according to pension experts and pension-related literature, private employers design their pension programs principally to control costs, maximize federal tax incentives, and comply with the Employee Retirement Income Security Act of 1974, as amended, while at the same time structuring their compensation and benefits to support their overall business and financial goals.
Reservists are members of the seven reserve components, which provide trained and qualified persons available for active duty in the armed forces in time of war or national emergency. The Selected Reserve is the largest category of reservists and is designated as essential to wartime missions. The Selected Reserve is also the only category of reservists that is eligible for TRS. As of December 31, 2010, the Selected Reserve included 858,997 members dispersed among the seven reserve components with about two-thirds belonging to the Army Reserve and the Army National Guard. See figure 1 for the number and percentage of Selected Reserve members within each reserve component. Additionally, about two-thirds of the Selected Reserve members are 35 years old or younger (64 percent) and about half are single (52 percent). (See fig. 2.) The NDAA for Fiscal Year 2005 authorized the TRS program and made TRICARE coverage available to certain members of the Selected Reserve. The program was subsequently expanded and restructured by the NDAAs for Fiscal Years 2006 and 2007—although additional program changes were made in subsequent years. In fiscal year 2005, to qualify for TRS, members of the Selected Reserve had to enter into an agreement with their respective reserve components to continue to serve in the Selected Reserve in exchange for TRS coverage, and they were given 1 year of TRS eligibility for every 90 days served in support of a contingency operation. The NDAA for Fiscal Year 2006, which became effective on October 1, 2006, expanded the program, and almost all members of the Selected Reserve and their dependents—regardless of their prior active duty service—had the option of purchasing TRICARE coverage through a monthly premium. The portion of the premium paid by the members of the Selected Reserve and their dependents for TRS coverage varied based on certain qualifying conditions that had to be met, such as whether the member of the Selected Reserve also had access to an employer-sponsored health plan. The NDAA for Fiscal Year 2006 established two levels—which DOD called tiers—of qualification for TRS, in addition to the tier established by the NDAA for Fiscal Year 2005, with enrollees paying different portions of the premium based on the tier for which they qualified. The NDAA for Fiscal Year 2007 significantly restructured the TRS program by eliminating the three-tiered premium structure and establishing open enrollment for members of the Selected Reserve provided that they are not eligible for or currently enrolled in the FEHB Program. The act removed the requirement that members of the Selected Reserve sign service agreements to qualify for TRS. Instead, the act established that members of the Selected Reserve qualify for TRS for the duration of their service in the Selected Reserve. DOD implemented these changes on October 1, 2007. Generally, TRICARE provides its benefits through several options for its non-Medicare-eligible beneficiary population. These options vary according to TRICARE beneficiary enrollment requirements, the choices TRICARE beneficiaries have in selecting civilian and military treatment facility providers, and the amount TRICARE beneficiaries must contribute toward the cost of their care. Table 1 provides information about these options. Selected Reserve members have a cycle of coverage during which they are eligible for different TRICARE options based on their duty status—preactivation, active duty, deactivation, and inactive.
During preactivation, when members of the Selected Reserve are notified that they will serve on active duty in support of a contingency operation in the near future, they and their families are eligible to enroll in TRICARE Prime, and therefore they do not need to purchase TRS coverage. This is commonly referred to as "early eligibility" and continues uninterrupted once members of the Selected Reserve begin active duty. While on active duty, members are required to enroll in TRICARE Prime. Similarly, during deactivation, for 180 days after returning from active duty in support of a contingency operation, members of the Selected Reserve are eligible for the Transitional Assistance Management Program, a program that helps members transition back to civilian life and under which members and their dependents can use the TRICARE Standard or Extra options. When members of the Selected Reserve return to inactive status, they can choose to purchase TRS coverage if eligible. As a result of the TRICARE coverage cycle and program eligibility requirements, TMA officials estimate that at any given time, fewer than half of the members of the Selected Reserve are qualified to purchase TRS. Currently, to qualify for TRS, a member of the Selected Reserve must not be eligible for the FEHB Program and must not (1) have been notified that he or she will serve on active duty in support of a contingency operation, (2) be serving on active duty, or (3) have recently, that is, within 180 days, returned from active duty in support of a contingency operation. Of the more than 390,000 eligible members, about 67,000 were enrolled in TRS as of December 31, 2010. (See fig. 3.) A number of different DOD entities have various responsibilities related to TRS. Within the Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Assistant Secretary of Defense for Reserve Affairs works with the seven reserve components to determine whether members of the Selected Reserve are eligible for TRS and to ensure that members have information about TRICARE, including TRS. Within TMA, the Warrior Support Branch is responsible for managing the TRS option, which includes developing policy and regulations. This office also works with TMA's Communication and Customer Service Division to develop educational materials for this program. The Assistant Secretary of Defense for Health Affairs oversees TMA and reports to the Under Secretary of Defense for Personnel and Readiness. TMA works with contractors to manage civilian health care and other services in each TRICARE region (North, South, and West). The contractors are required to establish and maintain sufficient networks of civilian providers within certain designated areas, called Prime Service Areas, to ensure access to civilian providers for all TRICARE beneficiaries, regardless of enrollment status or Medicare eligibility. They are also responsible for helping TRICARE beneficiaries locate providers and for informing and educating TRICARE beneficiaries and providers on all aspects of the TRICARE program, including TRS. TMA's TRICARE Regional Offices, located in each of the three TRICARE regions, are responsible for managing health care delivery for all TRICARE options in their respective geographic areas and overseeing the contractors, including monitoring network quality and adequacy, monitoring customer satisfaction outcomes, and coordinating appointment and referral management policies. DOD does not have reasonable assurance that members of the Selected Reserve are informed about TRS for several reasons.
First, the reserve components do not have a centralized point of contact to ensure that members are educated about the program. Second, the contractors are challenged in their ability to educate the reserve component units in their respective regions because they do not have comprehensive information about the units in their areas of responsibility. And, finally, DOD cannot say with certainty whether Selected Reserve members are knowledgeable about TRS because the results of two surveys that gauged members’ awareness of the program may not be representative of the Selected Reserve population because of low response rates. A 2007 policy from the Under Secretary of Defense for Personnel and Readiness designated the reserve components as having responsibility for providing information about TRS to members of the Selected Reserve at least once a year. When the policy was first issued, officials from the Office of Reserve Affairs—who have oversight responsibility for the reserve components—told us that they met with officials from each of the reserve components to discuss how the components would fulfill this responsibility. However, according to officials from the Office of Reserve Affairs, they have not met with the reserve components since 2008 to discuss how the components are fulfilling their TRS education responsibilities under the policy. These officials explained that they have experienced difficulties identifying a representative from each of the reserve components to attend meetings about TRS education. When we contacted officials from all seven reserve components to discuss TRS education, we had similar experiences. Three of the components had difficulties providing a point of contact. In fact, two of the components took several months to identify an official whom we could speak with about TRS education, and the other one had difficulties identifying someone who could answer our follow-up questions when our original point of contact was no longer available. Furthermore, officials from three of the seven components told us that they were not aware of this policy. Regardless of their knowledge of the 2007 policy, officials from all of the reserve components told us that education responsibilities are delegated to their unit commanders. These responsibilities include informing members about their health options, which would include TRS. All of the components provide various means of support to their unit commanders to help fulfill this responsibility. For example, three of the components provide information about TRICARE directly to their unit commanders or the commanders’ designees through briefings. The four other components provide information to their unit commanders through other means, such as policy documents, Web sites, and newsletters. Additionally, while most of the components had someone designated to answer TRICARE benefit questions, only one of the reserve components had an official at the headquarters level designated as a central point of contact for TRICARE education, including TRS. This official told us that he was unaware of the specific 2007 TRS education policy; however, he said his responsibilities for TRS education include developing annual communication plans, providing briefings to unit commanders, and publishing articles in the Air Force magazine about TRS. 
Designating a point of contact is important because a key factor in meeting standards for internal control in federal agencies is defining and assigning key areas of authority and responsibility—such as a point of contact for a specific policy. Without a point of contact to ensure that this policy is implemented, the reserve components run the risk that some of their Selected Reserve members may not be receiving information about the TRS program—especially since some of the reserve component officials we met with were unaware of the policy. The TRICARE contractors are required to provide an annual briefing about TRS to each reserve component unit in their regions, including both Reserve and National Guard units. All three contractors told us that they maintain education representatives who are responsible for educating members of the Selected Reserve on TRS. These representatives conduct unit outreach and provide information to members of the Selected Reserve at any time during predeployment and demobilization, at family events, and during drill weekends. The contractors use briefing materials maintained by TMA and posted on the TRICARE Web site. In addition to conducting briefings, the three contractors have increased their outreach efforts in various ways, including creating an online tutorial that explains TRS, mailing TRS information to Selected Reserve members, and working closely with Family Program coordinators to provide TRS information to family members. However, the contractors are challenged in their ability to meet their requirement for briefing all units annually. First, they typically provide briefings to units upon request because this approach is practical given units' schedules and availability. For example, officials from one contractor told us that even though they know when geographically dispersed units will be gathering in one location, these units have busy schedules and may not have time for the contractor to provide a briefing. Each contractor records the briefings that are requested, when and by whom each briefing request was fulfilled, and any questions or concerns that resulted from the briefings. However, some unit commanders do not request briefings from the contractors. For example, officials with one reserve component told us that they do not rely on the contractor to brief units because they were unaware that the contractors provided this service. In addition, these officials, as well as officials from another reserve component, told us that they did not know if their unit commanders were aware that they could request briefings from the contractors. All of the contractors told us that they conduct outreach to some of the units that have not requested a briefing, both by calling units to offer a briefing and by providing materials. They added that more outreach is conducted to National Guard units because they are able to obtain information about these units from state officials. The TRICARE Regional Offices also told us that they conduct outreach to units to let them know that the contractors are available to provide briefings about TRS. However, outreach from the contractors and the TRICARE Regional Offices does not guarantee that a unit will request a briefing.
Furthermore, while contractors are aware of some units in their regions, they do not have access to comprehensive lists of all reserve component units in their regions because the Web site links containing unit information that TMA originally provided to the contractors have become inactive. As a result, the contractors are not able to verify whether all units in their regions have received briefings. Officials from the Office of Reserve Affairs told us that reserve components report unit information to the Defense Manpower Data Center (DMDC), which maintains personnel information about all members of the military. However, these officials raised concerns about the accuracy of this information because it could be about 3 to 6 months old and may not be comprehensive. Officials at the Office of Reserve Affairs told us that the reserve components would likely have more up-to-date information about their units as they are responsible for reporting this information to DMDC. However, officials from TMA, the TRICARE Regional Offices, and contractors also told us that a comprehensive list of units would be difficult to maintain because the unit structure changes frequently. Despite the challenges contractors face, officials with TMA’s Warrior Support Branch told us that they are satisfied with the contractors’ efforts to provide TRS briefings to the reserve component units in their regions. However, because officials do not know which units have been briefed on the program, there is a risk that some reserve component members are not receiving sufficient information on TRS and may not be taking advantage of coverage available to them. DOD has conducted two surveys that gauge whether members of the Selected Reserve are aware of TRS, among other issues. In 2008, TMA conducted the Focused Survey of TRICARE Reserve Select and Selected Reserve Military Health System Access and Satisfaction to better understand reserve component members’ motivation for enrolling in TRS and to compare TRS enrollees’ satisfaction with and access to health care services with that of other beneficiary groups. In reporting the results of this survey to Congress in February 2009, TMA stated that lack of awareness was an important factor in why eligible members of the Selected Reserve did not enroll in TRS. TMA also reported that less than half of the eligible Selected Reserve members who were not enrolled in TRS were aware of the program. However, the survey’s response rate was almost 18 percent, and such a low response rate decreases the likelihood that the survey results were representative of the views and characteristics of the Selected Reserve population. According to the Office of Management and Budget’s standards for statistical surveys, a nonresponse analysis is recommended for surveys with response rates lower than 80 percent to determine whether the responses are representative of the surveyed population. Accordingly, TMA conducted a nonresponse analysis to determine whether the survey responses it received were representative of the surveyed population, and the analysis identified substantial differences between the original respondents and the follow-up respondents. As a result of the differences found in the nonresponse analysis, TMA adjusted the statistical weighting techniques for nonresponse bias and applied the weights to the data before drawing conclusions and reporting the results. DMDC conducts a quarterly survey, called the Status of Forces Survey, which is directed to all members of the military services. 
DMDC conducts several versions of this survey, including a version for members of the reserve components. This survey focuses on different issues at different points in time. For example, every other year the survey includes questions on health benefits, including questions on whether members of the reserve components are aware of TRICARE, including TRS. In July 2010, we issued a report raising concerns about the reliability of DOD’s Status of Forces Surveys because they generally have a 25 to 42 percent response rate, and DMDC has not been conducting nonresponse analyses to determine whether the surveys’ results are representative of the target population. We recommended that DMDC develop and implement guidance both for conducting a nonresponse analysis and using the results of this analysis to inform DMDC’s statistical weighting techniques, as part of the collection and analysis of the Status of Forces Survey results. DOD concurred with this recommendation, but as of January 2011, had not implemented it. DOD monitors access to civilian providers under TRS in conjunction with monitoring efforts related to the TRICARE Standard and Extra options. In addition, during the course of our review, TMA initiated additional efforts that specifically examine access to civilian providers for TRS beneficiaries and the Selected Reserve population, including mapping the locations of Selected Reserve members in relation to areas with TRICARE provider networks. Because TRS is the same benefit as the TRICARE Standard and Extra options, DOD monitors TRS beneficiaries’ access to civilian providers as a part of monitoring access to civilian providers for beneficiaries who use TRICARE Standard and Extra. As we have recently reported, in the absence of access-to-care standards for these options, TMA has mainly used feedback mechanisms to gauge access to civilian providers for these beneficiaries. For example, in response to a mandate included in the NDAA for Fiscal Year 2008, DOD has completed 2 years of a multiyear survey of beneficiaries who use the TRICARE Standard, TRICARE Extra, and TRS options and 2 years of its second multiyear survey of civilian providers. Congress required that these surveys obtain information on access to care and that DOD give a high priority to locations having high concentrations of Selected Reserve members. In March 2010, we reported that TMA generally addressed the methodological requirements outlined in the mandate during the implementation of the first year of the multiyear surveys. While TMA did not give a high priority to locations with high concentrations of Selected Reserve members, TMA’s methodological approach over the 4-year survey period will cover the entire United States, including areas with high concentrations of Selected Reserve members. In February 2010, TMA directed the TRICARE Regional Offices to monitor access to civilian providers for TRICARE Standard, TRICARE Extra, and TRS beneficiaries through the development of a model that can be used to identify geographic areas where beneficiaries may experience access problems. As of May 2010, each of the TRICARE Regional Offices had implemented an initial model appropriate to its region. These models include, for example, data on area populations, provider types, and potential provider shortages for the general population. Officials at each regional office said that their models are useful but noted that they are evolving and will be updated. 
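A note on the weighting adjustments discussed above: when survey response rates are low, analysts typically compare respondents with nonrespondents and, where the two groups differ, reweight the responses before reporting results. The sketch below shows one common form of this, a weighting-class adjustment; the strata, weights, and records are hypothetical and do not represent TMA's or DMDC's actual procedures.

from collections import defaultdict

def adjust_for_nonresponse(sample):
    """Inflate respondents' base weights so that each weighting class
    (stratum) also represents its nonrespondents. Assumes every stratum
    has at least one respondent; real surveys collapse empty strata."""
    sampled = defaultdict(float)    # total base weight sampled, by stratum
    responded = defaultdict(float)  # total base weight responding, by stratum
    for unit in sample:
        sampled[unit["stratum"]] += unit["base_weight"]
        if unit["responded"]:
            responded[unit["stratum"]] += unit["base_weight"]
    return [
        {**unit, "final_weight": unit["base_weight"]
                 * sampled[unit["stratum"]] / responded[unit["stratum"]]}
        for unit in sample if unit["responded"]
    ]

# Hypothetical sample: in the first stratum only half the weight responds,
# so its respondent is adjusted upward by a factor of 2.
sample = [
    {"stratum": "age<=35", "base_weight": 100.0, "responded": True},
    {"stratum": "age<=35", "base_weight": 100.0, "responded": False},
    {"stratum": "age>35", "base_weight": 80.0, "responded": True},
]
for unit in adjust_for_nonresponse(sample):
    print(unit["stratum"], unit["final_weight"])  # age<=35 200.0; age>35 80.0

Such an adjustment reduces nonresponse bias only to the extent that respondents and nonrespondents within a class are alike, which is why a nonresponse analysis of how the two groups differ comes first.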
To determine whether jointly monitoring access to civilian providers for TRS beneficiaries along with TRICARE Standard and Extra beneficiaries was reasonable, we asked TMA to perform an analysis of claims (for fiscal years 2008, 2009, and 2010) to identify differences in age demographics and health care utilization between these beneficiary groups. This analysis found that although the age demographics for these populations were different—more than half of the TRS beneficiaries were age 29 and under, while more than half of the TRICARE Standard and Extra beneficiaries were over 45—the two groups otherwise showed similarities in their health care utilization. Both beneficiary groups had similar diagnoses, used the same types of specialty providers, and used similar proportions of mental health care, primary care, and specialty care. (See fig. 4.) Specifically, seven of the top 10 diagnoses for both TRS and TRICARE Standard and Extra beneficiaries were the same, and three of these diagnoses—allergic rhinitis, joint disorder, and back disorder—made up more than 20 percent of claims for both beneficiary groups. The five provider specialties that filed the most claims for both beneficiary groups were also the same—family practice, physical therapy, allergy, internal medicine, and pediatrics—and the majority of claims filed for both groups were filed by family practice providers. Finally, both beneficiary groups had the same percentage of claims filed for mental health care and similar percentages for primary care and other specialty care. (See app. II for additional details on the results of this claims analysis.) Based on this analysis, jointly monitoring access for TRS beneficiaries and TRICARE Standard and Extra beneficiaries appears to be a reasonable approach. DOD has taken steps to evaluate access to civilian providers for the Selected Reserve population and TRS beneficiaries separately from other TRICARE beneficiaries. Specifically, during the course of our review, TMA initiated two efforts. First, during the fall of 2010, TMA officials analyzed the locations of Selected Reserve members and their families, including TRS beneficiaries, to determine what percentage of them live within TRICARE's Prime Service Areas (areas in which the managed care contractors are required to establish and maintain sufficient networks of civilian providers). According to these data, as of August 31, 2010, over 80 percent of Selected Reserve members and their families lived in Prime Service Areas: 100 percent in the South region, which consists entirely of Prime Service Areas, and over 70 percent in the North and West regions. Second, TMA officials told us that they are repeating the Focused Survey of TRICARE Reserve Select and Selected Reserve Military Health System Access and Satisfaction, which was first conducted in 2008. Using results from its first survey, TMA reported to Congress in February 2009 that members of the Selected Reserve who were enrolled in TRS were satisfied with access to care and the quality of care under their plan. However, as we have noted, the response rate for this survey was almost 18 percent, although TMA took steps to adjust the data prior to reporting the results. Officials told us that the follow-up survey will focus on whether access to care for TRS beneficiaries has changed. Officials sent the survey instrument to participants in January 2011 and told us that they anticipate results will be available during the summer of 2011.
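To make the claims comparison above concrete, the sketch below tabulates each provider specialty's share of claims for two beneficiary groups, the kind of percentage comparison reported in figure 4 and appendix II. The records and field names are invented for illustration; this is not TMA's actual claims data or analysis code.

from collections import Counter

def specialty_shares(claims):
    """Return each provider specialty's share of claims, in percent."""
    counts = Counter(claim["specialty"] for claim in claims)
    total = sum(counts.values())
    return {spec: round(100.0 * n / total, 1) for spec, n in counts.items()}

# Hypothetical claims records -- not TMA data.
trs_claims = [
    {"specialty": "family practice"}, {"specialty": "family practice"},
    {"specialty": "physical therapy"}, {"specialty": "allergy"},
]
standard_extra_claims = [
    {"specialty": "family practice"}, {"specialty": "family practice"},
    {"specialty": "internal medicine"}, {"specialty": "physical therapy"},
]

# Broadly similar shares across the two groups would support monitoring them jointly.
print("TRS:", specialty_shares(trs_claims))
print("Standard/Extra:", specialty_shares(standard_extra_claims))

Comparable tabulations by diagnosis or by category of care (primary, mental health, specialty) follow the same pattern, with the grouping key swapped.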
TRS is an important option for members of the Selected Reserve. However, educating this population about TRS has been challenging, and despite efforts by the reserve components and the contractors, some members of the Selected Reserve are likely still unaware of this option. Most of the reserve components lack centralized accountability for TRS education, making it unclear if all members are getting information about the program—a concern that is further exacerbated by the lack of awareness about the TRS education policy among officials from some of the reserve components. Additionally, the contractors’ limitations in briefing all of the units in their regions about TRS make each component’s need for a central point of contact more evident. Without centralized accountability, the reserve components do not have assurance that all members of the Selected Reserve who may need TRS have the information they need to take advantage of the health care options available to them. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Reserve Affairs to develop a policy that requires each reserve component to designate a centralized point of contact for TRS education, who will be accountable for ensuring that the reserve components are providing information about TRS to their Selected Reserve members annually. In establishing responsibilities for the centralized points of contact, DOD should explicitly task them with coordinating with their respective TRICARE Regional Offices to ensure that contractors are provided information on the number and location of reserve component units in their regions. In commenting on a draft of this report, DOD partially concurred with our recommendation. (DOD’s comments are reprinted in app. III.) Specifically, DOD agreed that the Assistant Secretary of Defense for Reserve Affairs should develop a policy that requires each of the seven reserve components to designate a central point of contact for TRS education that will be accountable for providing information about TRS to their Selected Reserve members annually. However, DOD countered that each designee should coordinate the provision of reserve unit information through the TRICARE Regional Offices rather than communicating directly with the TRICARE contractors, noting that the TRICARE Regional Offices have oversight responsibility for the contractors in their respective regions. We understand the department’s concern about coordinating contractor communications through the TRICARE Regional Offices, and we have modified our recommendation accordingly. DOD also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Defense and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. We asked the TRICARE Management Activity (TMA) to conduct an analysis of claims filed for TRICARE Reserve Select (TRS) beneficiaries and TRICARE Standard and Extra beneficiaries. 
We requested claims data for the most recent three complete fiscal years—2008, 2009, and 2010—based on the fact that the program last experienced changes with eligibility and premiums in fiscal year 2007. For the purpose of this analysis, claims consist of all services provided by a professional in an office or other setting outside of an institution. Records of services rendered at a hospital or other institution were excluded from this analysis. In addition, records for medical supplies and from chiropractors and pharmacies were also excluded. We asked TMA to conduct the following comparative analyses for TRS beneficiaries and TRICARE Standard and Extra beneficiaries:
1. Demographics, including age, for each year and averaged over 3 years
2. Percentage of claims filed for primary care, mental health, and other specialists each year for 3 years
3. The top 10 procedures in ranking order made each year and the average over 3 years
4. The top 10 primary diagnoses in ranking order made each year and the average over 3 years
5. The top five provider specialties in ranking order visited each year and the average over 3 years
6. Percentage of claims filed for the top five provider specialties and the average over 3 years
To ensure that TMA's data were sufficiently reliable, we conducted data reliability assessments of the data sets used, examining their quality and methodological soundness. Our review consisted of (1) examining documents that described the respective data, (2) interviewing TMA officials about the data collection and analysis processes, and (3) interviewing TMA officials about internal controls in place to ensure that data are complete and accurate. We found that all of the data sets used in this report were sufficiently reliable for our purposes. However, we did not independently verify TMA's calculations. Tables 2 through 5 contain information on claims filed for TRICARE Reserve Select and TRICARE Standard and Extra beneficiaries. In addition to the contact named above, Bonnie Anderson, Assistant Director; Danielle Bernstein; Susannah Bloch; Ashley Dean; Lisa Motley; Jessica Smith; and Suzanne Worth made key contributions to this report.
TRICARE Reserve Select (TRS) provides certain members of the Selected Reserve--reservists considered essential to wartime missions--with the ability to purchase health care coverage under the Department of Defense's (DOD) TRICARE program after their active duty coverage expires. TRS is similar to TRICARE Standard, a fee-for-service option, and TRICARE Extra, a preferred provider option. The National Defense Authorization Act for Fiscal Year 2008 directed GAO to review TRS education and access to care for TRS beneficiaries. This report examines (1) how DOD ensures that members of the Selected Reserve are informed about TRS and (2) how DOD monitors and evaluates access to civilian providers for TRS beneficiaries. GAO reviewed and analyzed documents and evaluated an analysis of claims conducted by DOD. GAO also interviewed officials with the TRICARE Management Activity (TMA), the DOD entity responsible for managing TRICARE; the regional TRICARE contractors; the Office of Reserve Affairs; and the seven reserve components. DOD does not have reasonable assurance that Selected Reserve members are informed about TRS. A 2007 policy designated the reserve components as having responsibility for providing information about TRS to Selected Reserve members on an annual basis; however, officials from three of the seven components told GAO that they were unaware of this policy. Additionally, only one of the reserve components had a designated official at the headquarters level acting as a central point of contact for TRICARE education, including TRS. Without centralized responsibility for TRS education, the reserve components cannot ensure that all eligible Selected Reserve members are receiving information about the TRS program. Compounding this, the managed care support contractors that manage civilian health care are limited in their ability to educate all reserve component units in their regions as required by their contracts because they do not have access to comprehensive information about these units, and some units choose not to use the contractors to help educate their members about TRS. Nonetheless, DOD officials stated that they were satisfied with the contractors' efforts to educate units upon request and to conduct outreach. Lastly, it is difficult to determine whether Selected Reserve members are knowledgeable about TRS because the results of two DOD surveys that gauged members' awareness of the program may not be representative because of low response rates. Because TRS is the same benefit as the TRICARE Standard and Extra options, DOD monitors access to civilian providers for TRS beneficiaries in conjunction with TRICARE Standard and Extra beneficiaries. DOD has mainly used feedback mechanisms, such as surveys, to gauge access to civilian providers for these beneficiaries in the absence of access standards for these options. GAO found that jointly monitoring access for these two beneficiary groups is reasonable because a claims analysis showed that TRS beneficiaries and TRICARE Standard and Extra beneficiaries had similar health care utilization. Also, during the course of GAO's review, TMA initiated other efforts that specifically evaluated access to civilian providers for the Selected Reserve population and TRS beneficiaries, including mapping the locations of Selected Reserve members in relation to areas with TRICARE provider networks.
GAO recommends that the Secretary of Defense direct the Assistant Secretary of Defense for Reserve Affairs to develop a policy requiring each reserve component to designate a centralized point of contact for TRS education. DOD partially concurred with this recommendation, citing a concern about regional coordination. GAO modified the recommendation.
Because no drug is absolutely safe, FDA approves a drug for marketing when the agency judges that its known benefits outweigh its known risks. After a drug is on the market, FDA continues to assess its risks and benefits. FDA reviews reports of adverse drug reactions (adverse events) related to the drug and information from clinical studies about the drug that are conducted by the drug’s sponsor. FDA also reviews adverse events from studies that follow the use of drugs in ongoing medical care (observational studies) that are carried out by the drug’s sponsor, FDA, or other researchers. If FDA has information that a drug on the market may pose a significant health risk to consumers, it weighs the effect of the adverse events against the benefit of the drug to determine what actions, if any, are warranted. The decision-making process for postmarket drug safety is complex, involving input from a variety of FDA staff and organizational units and information sources, but the central focus of the process is the iterative interaction between OND and ODS. OND is a much larger office than ODS. In fiscal year 2005, OND had 715 staff and expenditures of $110.6 million. More than half of OND’s expenditures in fiscal year 2005, or $57.2 million, came from user fees paid by drug sponsors under the Prescription Drug User Fee Amendments of 2002. ODS had 106 staff in fiscal year 2005 and expenditures of $26.9 million, with $7.6 million from prescription drug user fees. After a drug is on the market, OND staff receive information about safety issues in several ways. First, OND staff receive notification of adverse event reports for drugs to which they are assigned and they review the periodic adverse event reports that are submitted by drug sponsors. Second, OND staff review safety information that is submitted to FDA when a sponsor seeks approval for a new use or formulation of a drug, and monitor completion of postmarket studies. When consulting with OND on a safety issue, ODS staff search for all relevant case reports of adverse events and assess them to determine whether or not the drug caused the adverse event and whether there are any common trends or risk factors. ODS staff might also use information from observational studies and drug use analyses to analyze the safety issue. When completed, ODS staff summarize their analysis in a written consult. According to FDA officials, OND staff within the review divisions usually decide what regulatory action should occur, if any, by considering the results of the safety analysis in the context of other factors such as the availability of other similar drugs and the severity of the condition the drug is designed to treat. Then, if necessary, OND staff make a decision about what action should be taken. Several CDER staff, including staff from OND and ODS, told us that most of the time there is agreement within FDA about what safety actions should be taken. At other times, however, OND and ODS staff disagree about whether the postmarket data are adequate to establish the existence of a safety problem or support a recommended regulatory action. In those cases, OND staff sometimes request additional analyses by ODS and sometimes there is involvement from other FDA organizations. In some cases, OND seeks the advice of FDA’s scientific advisory committees, which are composed of experts and consumer representatives from outside FDA. 
In 2002, FDA established the Drug Safety and Risk Management Advisory Committee, 1 of the 16 human-drug-related scientific advisory committees, to specifically advise FDA on drug safety and risk management issues. The recommendations of the advisory committees do not bind the agency to any decision. FDA has the authority to withdraw the approval of a drug on the market for safety-related and other reasons, although it rarely does so. In almost all cases of drug withdrawals for safety reasons, the drug’s sponsor has voluntarily removed the drug from the market. For example, in 2001 Baycol’s sponsor voluntarily withdrew the drug from the market after meeting with FDA to discuss reports of adverse events, including some reports of fatalities. FDA does not have explicit authority to require that drug sponsors take other safety actions; however, when FDA identifies a potential problem, sponsors generally negotiate with FDA to develop a mutually agreeable remedy to avoid other regulatory action. Negotiations may result in revised drug labeling or restricted distribution. FDA has limited authority to require that sponsors conduct postmarket safety studies. In our March 2006 report, we found that FDA’s postmarket drug safety decision-making process was limited by a lack of clarity, insufficient oversight by management, and data constraints. We observed that there was a lack of established criteria for determining what safety actions to take and when, and aspects of ODS’s role in the process were unclear. A lack of communication between ODS and OND’s review divisions and limited oversight of postmarket drug safety issues by ODS management hindered the decision-making process. FDA’s decisions regarding postmarket drug safety have also been made more difficult by the constraints it faces in obtaining data. While acknowledging the complexity of the postmarket drug safety decision-making process, we found through our interviews with OND and ODS staff and in our case studies that the process lacked clarity about how drug safety decisions were made and about the role of ODS. If FDA had established criteria for determining what safety actions to take and when, then some of the disagreements we observed in our case studies might have been resolved more quickly. In the absence of established criteria, several FDA officials told us that decisions about safety actions were often based on the case-by-case judgments of the individuals reviewing the data. Our observations were consistent with two previous internal FDA reports on the agency’s internal deliberations regarding Propulsid and the diabetes drug Rezulin. In those reviews FDA indicated that an absence of established criteria for determining what safety actions to take, and when to take them, posed a challenge for making postmarket drug safety decisions. We also found that ODS’s role in scientific advisory committee meetings was unclear. According to the OND Director, OND is responsible for setting the agenda for the advisory committee meetings, with the exception of the Drug Safety and Risk Management Advisory Committee. This includes who is to present and what issues will be discussed by the advisory committees. For the advisory committees (other than the Drug Safety and Risk Management Advisory Committee) it was unclear when ODS staff would participate. A lack of communication between ODS and OND’s review divisions and limited oversight of postmarket drug safety issues by ODS management also hindered the decision-making process. 
ODS and OND staff often described their relationship with each other as generally collaborative, with effective communication, but both ODS and OND staff told us that there had been communication problems on some occasions, and that this had been an ongoing concern. For example, according to some ODS staff, OND did not always adequately communicate the key question or point of interest to ODS when it requested a consult, and as ODS worked on the consult there was sometimes little interaction between the two offices. After a consult was completed and sent to OND, ODS staff reported that OND sometimes did not respond in a timely manner or at all. Several ODS staff characterized this as consults falling into a “black hole” or “abyss.” OND’s Director told us that OND staff probably do not “close the loop” in responding to ODS’s consults, which includes explaining why certain ODS recommendations were not followed. In some cases CDER managers and OND staff criticized the methods used in ODS consults and told us that the consults were too lengthy and academic. ODS management had not effectively overseen postmarket drug safety issues, and as a result, it was unclear how FDA could know that important safety concerns had been addressed and resolved in a timely manner. A former ODS Director told us that the small size of ODS’s management team presented a challenge for effective oversight of postmarket drug safety issues. Another problem was the lack of systematic information on drug safety issues. According to the ODS Director, ODS maintained a database of consults that provided some information about the consults that ODS staff conducted, but it did not include information about whether ODS staff made recommendations for safety actions and how the safety issues were handled and resolved, such as whether recommended safety actions were implemented by OND. Data constraints—such as weaknesses in data sources and FDA’s limited ability to require certain studies and obtain additional data—have contributed to FDA’s difficulty in making postmarket drug safety decisions. OND and ODS have used three different sources of data to make postmarket drug safety decisions, including adverse event reports, clinical trial studies, and observational studies. While data from each source have weaknesses that have contributed to the difficulty in making postmarket drug safety decisions, evidence from more than one source can help inform the postmarket decision-making process. The availability of these data sources has been constrained, however, because of FDA’s limited authority to require drug sponsors to conduct postmarket studies and its resources. While decisions about postmarket drug safety have often been based on adverse event reports, FDA cannot establish the true frequency of adverse events in the population with data from adverse event reports. The inability to calculate the true frequency makes it hard to establish the magnitude of a safety problem, and comparisons of risks across similar drugs are difficult. In addition, it is difficult to attribute adverse events to particular drugs when there is a relatively high incidence rate in the population for the medical condition. It is also difficult to attribute adverse events to the use of particular drugs because data from adverse event reports may have been confounded by other factors, such as other drug exposures. FDA can also use available data from clinical trials and observational studies to support postmarket drug safety decisions. 
Although each source presents weaknesses that constrain the usefulness of the data provided, having data from more than one source can help improve FDA's decision-making ability. Clinical trials, in particular randomized clinical trials, are considered the "gold standard" for assessing evidence about efficacy and safety because they are considered the strongest method by which one can determine whether new drugs work. However, clinical trials also have weaknesses. Clinical trials typically have too few enrolled patients to detect serious adverse events associated with a drug that occur relatively infrequently in the population being studied. They are usually carried out on homogeneous populations of patients that often do not reflect the types of patients who will actually take the drugs. For example, they do not often include those who have other medical problems or take other medications. In addition, clinical trials are often too short in duration to identify adverse events that may occur only after long use of the drug. This is particularly important for drugs used to treat chronic conditions where patients are taking the medications for the long term. Observational studies, which use data obtained from population-based sources, can provide FDA with information about the population effect and risk associated with the use of a particular drug. We have found that FDA's access to postmarket clinical trial and observational data is limited by its authority and available resources. FDA does not have broad authority to require that a drug sponsor conduct an observational study or clinical trial for the purpose of investigating a specific postmarket safety concern. One senior FDA official and several outside drug safety experts told us that FDA needs greater authority to require such studies. Long-term clinical trials may be needed to answer safety questions about risks associated with the long-term use of drugs. For example, during a February 2005 scientific advisory committee meeting, some FDA staff and committee members indicated that there was a need for better information on the long-term use of anti-inflammatory drugs and discussed how a long-term trial might be designed to study the cardiovascular risks associated with the use of these drugs. Lacking specific authority to require drug sponsors to conduct postmarket studies, FDA has often relied on drug sponsors voluntarily agreeing to conduct these studies. But the postmarket studies that drug sponsors have agreed to conduct have not consistently been completed. One study estimated that the completion rate of postmarket studies, including those that sponsors had voluntarily agreed to conduct, rose from 17 percent in the mid-1980s to 24 percent between 1991 and 2003. FDA has little leverage to ensure that these studies are carried out. In terms of resource limitations, several FDA staff (including CDER managers) and outside drug safety experts told us that in the past ODS has not had enough resources for cooperative agreements to support its postmarket drug surveillance program. Under the cooperative agreement program, FDA collaborated with outside researchers in order to access a wide range of population-based data and conduct research on drug safety. Annual funding for this program was less than $1 million from fiscal year 2002 through fiscal year 2005. In 2006, FDA awarded four contracts for a total cost of $1.6 million per year to replace the cooperative agreements.
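As context for the adverse event report analyses described earlier: pharmacovigilance programs commonly screen spontaneous report databases with disproportionality statistics such as the proportional reporting ratio (PRR). The report does not specify which screening statistics ODS used, so the sketch below is illustrative only, with invented counts.

def prr(a, b, c, d):
    """Proportional reporting ratio for a drug-event pair.
    a: reports of the event for the drug of interest
    b: reports of other events for that drug
    c: reports of the event for all other drugs
    d: reports of other events for all other drugs"""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: the event appears in 5% of the drug's reports but in
# only 1% of other drugs' reports, so PRR is about 5 -- a disproportion that
# would typically prompt closer review. A PRR near 1 suggests no signal.
print(prr(a=50, b=950, c=100, d=9900))  # approximately 5.0

A PRR compares reporting proportions, not true incidence: as noted above, the number of patients actually exposed to each drug is unknown, which is why adverse event reports alone cannot establish how often an event occurs.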
Prior to the completion of our March 2006 report, FDA began several initiatives to improve its postmarket drug safety decision-making process. Most prominently, FDA commissioned the IOM to convene a committee of experts to assess the current system for evaluating postmarket drug safety, including FDA’s oversight of postmarket safety and its processes. IOM issued its report in September 2006. FDA also had underway several organizational changes that we discussed in our 2006 report. For example, FDA established the Drug Safety Oversight Board to help provide oversight and advice to the CDER Director on the management of important safety issues. The board is involved with ensuring that broader safety issues, such as ongoing delays in changing a label, are effectively resolved. FDA also drafted a policy that was designed to ensure that all major postmarket safety recommendations would be discussed by involved OND and ODS managers, beginning at the division level, and documented. FDA implemented a pilot program for dispute resolution that is designed for individual CDER staff to have their views heard when they disagree with a decision that could have a significant negative effect on public health. Because the CDER Director is involved in determining whether the process will be initiated, appoints a panel chair to review the case, and makes the final decision on how the dispute should be resolved, we found that the pilot program does not offer CDER staff an independent forum for resolving disputes. FDA also began to explore ways to access additional data sources that it can obtain under its current authority, such as data on Medicare beneficiaries’ experience with prescription drugs covered under the prescription drug benefit. Since our report, FDA has made efforts to improve its postmarket safety decision-making and oversight process. In its written response to the IOM recommendations, FDA agreed with the goal of many of the recommendations made by GAO and IOM. In that response, FDA stated that it would take steps to improve the “culture of safety” in CDER, reduce tension between preapproval and postapproval staff, clarify the roles and responsibilities of pre- and postmarket staff, and improve methods for resolving scientific disagreements. FDA has also begun several initiatives since our March 2006 report that we believe could address three of our four recommendations. Because none of these initiatives were fully implemented as of May 2007, it was too early to evaluate their effectiveness. To make the postmarket safety decision-making process clearer and more effective, we recommended that FDA revise and implement its draft policy on major postmarket drug safety decisions. CDER has made revisions to the draft policy, but has not yet finalized and implemented it. CDER’s Associate Director for Safety Policy and Communication told us that the draft policy provides guidance for making major postmarket safety decisions, including identifying the decision-making officials for safety actions and ensuring that the views of involved FDA staff are documented. According to the Associate Director, the revised draft does not now discuss decisions for more limited safety actions, such as adding a boxed warning to a drug’s label. As a result, fewer postmarket safety recommendations would be required to be discussed by involved OND and ODS managers than envisioned in the draft policy we reviewed for our 2006 report. 
Separately, FDA has instituted some procedures that are consistent with the goals of the draft policy. For example, ODS staff now participate in regular, bimonthly safety meetings with each of the review divisions in OND. To help resolve disagreements over safety decisions, we recommended that FDA improve CDER's dispute resolution process by revising the pilot program to increase its independence. FDA had not revised its pilot dispute resolution program as of May 2007, and FDA officials told us that the existing program had not been used by any CDER staff member. To make the postmarket safety decision-making process clearer, we recommended that FDA clarify ODS's role in FDA's scientific advisory committee meetings involving postmarket drug safety issues. According to an FDA official, the agency intends to draft, but had not yet drafted, a policy that will describe what safety information should be presented and how such information should be presented at scientific advisory committee meetings. The policy is also expected to clarify ODS's role in planning for, and participating in, meetings of FDA's scientific advisory committees. To help ensure that safety concerns are addressed and resolved in a timely manner, we recommended that FDA establish a mechanism for systematically tracking ODS's recommendations and subsequent safety actions. As of May 2007, FDA was in the process of implementing the Document Archiving, Reporting and Regulatory Tracking System (DARRTS) to track such information on postmarket drug safety issues. Among many other uses, DARRTS will track ODS's safety recommendations and the responses to them. We also suggested in our report that Congress consider expanding FDA's authority to require drug sponsors to conduct postmarket studies in order to ensure that the agency has the necessary information, such as clinical trial and observational data, to make postmarket decisions. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact Marcia Crosse at (202) 512-7119 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Martin T. Gahart, Assistant Director; Pamela Dooley; and Cathleen Hamann made key contributions to this statement.
In 2004, several high-profile drug safety cases raised concerns about the Food and Drug Administration's (FDA) ability to manage postmarket drug safety issues. In some cases there were disagreements within FDA about how to address these issues. GAO was asked to testify on FDA's oversight of drug safety. This testimony is based on Drug Safety: Improvement Needed in FDA's Postmarket Decision-making and Oversight Process, GAO-06-402 (Mar. 31, 2006). The report focused on the complex interaction between two offices within FDA that are involved in postmarket drug safety activities: the Office of New Drugs (OND) and the Office of Drug Safety (ODS). OND's primary responsibility is to review new drug applications, but it is also involved in monitoring the safety of marketed drugs. ODS is focused primarily on postmarket drug safety issues. ODS is now called the Office of Surveillance and Epidemiology. For its report, GAO reviewed FDA policies, interviewed FDA staff, and conducted case studies of four drugs with safety issues: Arava, Baycol, Bextra, and Propulsid. To gather information on FDA's initiatives since March 2006 to improve its decision-making process for this testimony, GAO interviewed FDA officials in February and March 2007, and received updated information from FDA in May 2007. In its March 2006 report, GAO found that FDA lacked clear and effective processes for making decisions about, and providing management oversight of, postmarket drug safety issues. There was a lack of clarity about how decisions were made and about organizational roles, insufficient oversight by management, and data constraints. GAO observed that there was a lack of criteria for determining what safety actions to take and when to take them. Insufficient communication between ODS and OND hindered the decision-making process. ODS management did not systematically track information about ongoing postmarket safety issues, including the recommendations that ODS staff made for safety actions. GAO also found that FDA faced data constraints that contributed to the difficulty in making postmarket safety decisions. GAO found that FDA's access to data was constrained by both its limited authority to require drug sponsors to conduct postmarket studies and its limited resources for acquiring data from other external sources. During the course of GAO's work for its March 2006 report, FDA began a variety of initiatives to improve its postmarket drug safety decision-making process, including the establishment of the Drug Safety Oversight Board. FDA also commissioned the Institute of Medicine to examine the drug safety system, including FDA's oversight of postmarket drug safety. GAO recommended in its March 2006 report that FDA take four steps to improve its decision-making process for postmarket safety. GAO recommended that FDA revise and implement its draft policy on the decision-making process for major postmarket safety actions, improve its process to resolve disagreements over safety decisions, clarify ODS's role in scientific advisory committees, and systematically track postmarket drug safety issues. FDA has initiatives underway and under consideration that, if implemented, could address three of GAO's four recommendations. In the 2006 report GAO also suggested that Congress consider expanding FDA's authority to require drug sponsors to conduct postmarket studies, as needed, to collect additional data on drug safety concerns.
Individuals needing long-term care have varying degrees of difficulty in performing some activities of daily living without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may also have trouble with instrumental activities of daily living, which include such tasks as preparing food, housekeeping, and handling finances. They may have a mental impairment, such as Alzheimer’s disease, that necessitates supervision to avoid harming themselves or others or need assistance with tasks such as taking medications. Although a physical or mental disability may occur at any age, the older an individual becomes, the more likely it is that a disabling condition will develop or worsen. Assistance for such needs takes many forms and takes place in varied settings, including care in nursing homes or alternative community-based residential settings such as assisted living facilities. For individuals remaining in their homes, in-home care services or unpaid care from family members or other informal caregivers is most common. Approximately 64 percent of all elderly individuals with a disability relied exclusively on unpaid care from family or other informal caregivers; even among almost totally dependent elderly—those with difficulty performing five activities of daily living—about 41 percent relied entirely on unpaid care. Medicaid, the joint federal-state health-financing program for low-income individuals, continues to be the largest funding source for long-term care. In 2000, Medicaid paid 46 percent (about $63 billion) of the $137 billion spent on long-term care from all public and private sources. States share responsibility with the federal government for Medicaid, paying on average approximately 43 percent of total Medicaid costs. Within broad federal guidelines, states have considerable flexibility in determining who is eligible and what services to cover in their Medicaid program. Among long-term care services, states are required to cover nursing facilities and home health services for Medicaid beneficiaries. States also may choose to cover additional long-term care services that are not mandatory under federal standards, such as personal care services, private-duty nursing care, and rehabilitative services. For services that a state chooses to cover under its state Medicaid plan as approved by the Centers for Medicare & Medicaid Services (CMS), enrollment for those eligible cannot be limited but benefits may be. For example, states can limit the personal care service benefit through medical necessity requirements and utilization controls. States may also cover Medicaid home and community-based services (HCBS) through waivers of certain statutory requirements under section 1915(c) of the Social Security Act, thereby receiving greater flexibility in the provision of long-term care services. These waivers permit states to adopt a variety of strategies to control the cost and use of services. For example, states may obtain CMS approval to waive certain provisions of the Medicaid statute, such as the requirement that states make all services available to all eligible individuals statewide. With a waiver, states can target services to individuals on the basis of certain criteria such as disease, age, or geographic location. Further, states may limit the number of persons served to a specified target, requiring additional persons meeting eligibility and need criteria to be put on a waiting list. 
Limits may also be placed on the costs of services that will be covered by Medicaid. To obtain CMS approval for an HCBS waiver, states must demonstrate that the cost of the services to be provided under the waiver (plus other state Medicaid services) is no more than the cost of institutional care (plus any other Medicaid services provided to institutionalized individuals). These waivers permit states to cover a wide variety of nonmedical and social services and supports that allow people to remain at home or in the community, including personal care, personal emergency response systems, homemakers’ assistance, chore assistance, adult day care, and other services. Medicare—the federal health financing program covering nearly 40 million Americans who are aged 65 or older, are disabled, or have end-stage renal disease—primarily covers acute care, but it also pays for limited post-acute stays in skilled nursing facilities and home health care. Medicare spending accounted for 14 percent (about $19 billion) of total long-term care expenditures in 2000. A new home health prospective payment system implemented in October 2000 allowed a higher number of home health visits per user than the previous interim payment system while also providing incentives to reward efficiency and control the use of services. The number of home health visits declined from about 29 visits per episode immediately prior to the prospective payment system's implementation to 22 visits per episode during the first half of 2001. Most of the decline was in home health aide visits. The four states we reviewed allocated different proportions of Medicaid long-term care expenditures for the elderly to federally required long-term care services, such as nursing facilities and home health, and to state optional home and community-based care, such as in-home personal support, adult day care, and care in alternate residential care settings. As the following examples illustrate, the states also differed in how they designed their home and community-based services, influencing the extent to which these services were available to elderly individuals with disabilities. New York spent $2,463 per person aged 65 or older in 1999 on Medicaid long-term care services for the elderly—much higher than the national average of $996. While nursing home care represented 68 percent of New York’s expenditures, New York also spent more than the national average on state optional long-term care services, such as personal support services. Because most home and community-based services in New York were covered as part of the state Medicaid plan, these services were largely available to all eligible Medicaid beneficiaries needing them, without caps on the numbers of individuals served. Louisiana spent $1,012 per person aged 65 or older, slightly higher than the national average of $996. Nursing home care accounted for 93 percent of Louisiana’s expenditures, higher than the national average of 81 percent. Most home and community-based services available in Louisiana for the elderly and disabled were offered under HCBS waivers, and the state capped the dollar amount available per day for services and limited the number of recipients. For example, Louisiana’s waiver that covered in-home personal care and other services had a $35 per day limit at the time of our work and served approximately 1,500 people in July 2002, with a waiting list of 5,000 people. Kansas spent $935 per person aged 65 or older, slightly less than the national average. 
Most home and community-based services, including in-home care, adult day care, and respite services, were offered under HCBS waivers. As of June 2002, 6,300 Kansans were receiving these HCBS waiver services. However, the HCBS waiver services were not available to new recipients because Kansas initiated a waiting list for these services in April 2002, and 290 people were on the waiting list as of June 2002. Oregon spent $604 on Medicaid long-term care services per elderly individual and, in contrast to the other states, spent a lower proportion on nursing facilities and a larger proportion on other long-term care services such as care in alternative residential settings. Oregon had HCBS waivers that covered in-home care, environmental modifications to homes, adult day care, and respite care. Oregon’s waiver services did not have a waiting list and were available to elderly and disabled clients based on functional need, serving about 12,000 elderly and disabled individuals as of June 2002. Appendix I summarizes the home and community-based services available in the four states through their state Medicaid plans or HCBS waivers and whether the states had a waiting list for HCBS waiver services. Most often, the 16 Medicaid case managers we contacted in Kansas, Louisiana, New York, and Oregon offered care plans for our hypothetical individuals aimed at allowing them to remain in their homes. The number of hours of in-home care that the case managers offered and the types of residential care settings recommended depended in part on the availability of services and the amount of informal family care available. In a few situations, especially when the individual did not live with a family member who could provide additional support, case managers were concerned that the client would not be safe at home and recommended a nursing home or other residential care setting. The first hypothetical person we presented to case managers was an 86-year-old woman with debilitating arthritis, whom we called “Abby,” who is chair bound and whose husband recently died. In most care plans, the case managers offered Abby in-home care. However, the number of offered hours depended on the availability of unpaid informal care from her family and varied among case managers. In the first scenario, Abby lives with her daughter, who provides most of Abby’s care but is overwhelmed by also caring for her own infant grandchild. Case managers offered from 4.5 to 40 hours per week of in-home assistance with activities that Abby could not do on her own because of her debilitating arthritis, such as bathing, dressing, eating, using the toilet, and transferring from her wheelchair. One case manager recommended adult foster care for Abby under this scenario. In the second scenario, Abby lives with her 82-year-old sister, who provides most of Abby’s care but has limited strength, making her unable to provide all of it. Case managers offered Abby in-home care ranging from 6 to 37 hours per week. One case manager also offered Abby 56 hours per week of adult day care. In the third scenario, Abby lives alone, and her working daughter visits her once each morning to provide care for about 1 hour. The majority of case managers (12 of 16) offered from 12 to 49 hours per week of in-home care to Abby. The other four case managers recommended that she relocate to a nursing home or other residential care setting. 
The second hypothetical person was “Brian,” a 70-year-old man cognitively impaired by moderate Alzheimer’s disease who had just been released from a skilled nursing facility after recovering from a broken hip. The case managers usually offered in-home care so that Brian could remain at home if he lived with his wife, who could provide supervisory care. If he lived alone, most recommended that he move to another residential setting that would provide him with the needed supervision. In the first scenario, Brian lives with his wife, who provides most of his care and is in fair health. All 16 case managers offered in-home care, ranging from 11 to 35 hours per week. Two case managers also offered adult day care in addition to or instead of in-home care. In the second scenario, Brian lives with his wife, who provides some of his care and is in poor health. All but one of the case managers offered in-home care, ranging from 6 to 35 hours per week. One case manager recommended that Brian move to a residential care facility. In the third scenario, Brian lives alone because his wife has recently died. Concerned that Brian would not be safe living at home alone, or unable to offer a sufficient number of hours of in-home supervision, 13 of the case managers recommended that Brian move to a nursing home or alternate residential care setting. Two of the three case managers who had Brian remain at home offered around-the-clock in-home care—168 hours per week. Table 1 summarizes the care plans developed for Abby and Brian by the 16 case managers we contacted. In some situations, two case managers in the same locality offered notably different care plans. For example, across the eight localities where we interviewed case managers, when Abby lived alone, four case managers offered in-home care while their local counterpart recommended a nursing home or alternative residential setting. Similar splits between in-home and residential care recommendations also occurred three times when Brian lived alone and once each when Abby lived with her daughter and when Brian lived with his wife who was in poor health. Also, in a few cases, both case managers in the same locality offered in-home care but significantly different numbers of hours. For example, one case manager offered 42 hours per week of in-home care for Abby when she lived alone, while another case manager in the same locality offered 15 hours per week for this scenario. The home and community-based care that case managers offered to our hypothetical individuals sometimes differed due to state policies or practices that shaped the availability of Medicaid-covered services. These included waiting lists for HCBS waiver services in Kansas and Louisiana, Louisiana’s daily dollar cap on in-home care, and Kansas’s state review policies for higher-cost care plans. Also, case managers in Oregon recommended alternative residential care settings other than nursing homes, and case managers in Louisiana and New York typically considered Medicare home health care when determining the number of hours of Medicaid in-home care to offer. Neither of our hypothetical individuals would be able to immediately receive HCBS waiver services in Kansas or Louisiana due to the waiting lists. As a result, they would often have fewer services offered to them—only those available through other state or federal programs, such as those under the Older Americans Act—until Medicaid HCBS waiver services became available. 
Alternatively, they could enter a nursing home. The average length of time individuals wait for Medicaid waiver services was not known in either state. However, one case manager in Louisiana estimated that elderly persons for whom he had developed care plans had spent about a year on the waiting list before receiving services. In Kansas, as of July 2002, no one had yet come off the waiting list instituted in April 2002. When case managers developed care plans based on HCBS waiver services for our hypothetical individuals, the number of hours of in-home care offered could be as much as 168 hours per week in New York and Oregon but was at most 24.5 hours per week in Kansas and 37 hours per week in Louisiana. Case managers in Louisiana also tended to offer nearly the same amount of in-home help even as the hypothetical scenarios changed. This may have been because they were trying to offer as many hours as they could under the cost limit, even in the scenario with the most family support available. (See table 2.) Two states’ caps or other practices may have limited the amount of Medicaid-covered in-home care that their case managers offered. For example, case managers in Louisiana tended to offer as many hours of care as they could under the state’s $35 per day cost limit. Therefore, as the amount of informal care changed in the different scenarios, the hours of in-home help offered in Louisiana did not change as much as they did in the other states (a simple illustration of this arithmetic appears below). In Kansas, case managers often offered fewer hours of in-home care than were offered in other states, which may have been influenced in part by Kansas’s supervisory review, whereby more costly care plans were more extensively reviewed than lower-cost care plans. A Kansas case manager also told us that offering fewer hours of care may reflect the case managers’ sensitivity to the state’s waiting list for HCBS services and an effort to serve more clients by keeping the cost per person low. In contrast, case managers in New York and Oregon did not have similar cost restrictions in offering in-home hours, with one case manager in each state offering as much as 24-hour-a-day care. When recommending that our hypothetical individuals could be better cared for in a residential care setting, case managers offered alternatives to nursing homes to varying degrees across the states. Case managers in Louisiana recommended nursing home care in three of the four care plans in which care in another residence was recommended for Abby or Brian. In contrast, case managers in Oregon never recommended nursing home care for our hypothetical individuals. Instead, case managers in Oregon exclusively recommended either adult foster care or an assisted living facility in the five care plans recommending care in another residence. It was also noteworthy that two case managers in Oregon recommended that either Abby or Brian obtain care in other residential care settings in a scenario when he or she lived with a family member, expressing concern that continuing to provide care to Abby or Brian would be detrimental to the family. Case managers in Kansas, Louisiana, and New York only recommended out-of-home placement for Abby or Brian in scenarios when they lived alone. 
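To make the effect of a daily dollar cap concrete, the short calculation below shows how Louisiana's $35-per-day limit translates into a weekly ceiling on in-home hours. The hourly rates used are purely illustrative assumptions, not payment rates from any of the states we reviewed.

# Illustrative only: how a daily dollar cap bounds weekly in-home care hours.
# The hourly rates below are hypothetical assumptions, not actual state rates.
daily_cap = 35.00                # Louisiana's $35-per-day limit at the time of our work
weekly_budget = daily_cap * 7    # $245 per week

for hourly_rate in (5.00, 7.00, 10.00):
    max_hours = weekly_budget / hourly_rate
    print(f"At ${hourly_rate:.2f}/hour, the cap allows about {max_hours:.1f} hours/week")

# At an assumed $7.00/hour, for example, the cap allows 35 hours per week--roughly
# the 37-hour maximum that Louisiana case managers offered in our scenarios.

Under such a cap, changes in the amount of informal family support change the hours offered far less than they would in a state without a cap, which is consistent with the pattern we observed in Louisiana.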
State differences also were evident in how case managers used adult day care to supplement in-home or other care. For example, across all care plans the case managers developed for Abby and Brian (24 care plans in each state), adult day care was offered four times in New York and Oregon and three times in Kansas. However, none of the care plans developed by case managers in Louisiana included adult day care because it was covered under a separate HCBS waiver, and individuals could not receive services through two different waivers. Case managers in New York and Louisiana also often considered the effect that the availability of Medicare home health services could have on Medicaid-covered in-home care. For example, one New York case manager noted that she would maximize the use of Medicare home health before using Medicaid home health or other services. Several of the case managers in New York included the amount of Medicare home health care available in their care plans, and these services offset some of the Medicaid services that would otherwise be offered. In Louisiana, where case managers faced a dollar cap on the amount of Medicaid in-home care hours they could provide, two case managers told us that they would include the additional care available under Medicare’s home health benefit in their care plans, thereby increasing the total hours of care that Abby or Brian would have by 2 hours per week. While six Kansas and Oregon case managers also mentioned that they would refer Abby or Brian to a physician or visiting nurse to be assessed for potential Medicare home health coverage, they did not specifically include the availability of Medicare home health in the number of hours of care provided by their care plans. States have found that offering home and community-based services through their Medicaid programs can help low-income elderly individuals with disabilities remain in their homes or communities when they otherwise would be likely to go to a nursing home. States differed, however, in how they designed their Medicaid programs to offer home and community-based long-term care options for elderly individuals and in the level of resources they devoted to these services. As a result, as demonstrated by the care plans case managers developed for our hypothetical elderly individuals in four states, the same individual with certain identified disabilities and needs would often receive different types and intensities of home and community-based care across states and even within the same community. These differences often stemmed from case managers’ attempts to leverage both publicly financed long-term care services and the informal care and support provided to individuals by their own family members. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For future contacts regarding this testimony, please call Kathryn G. Allen at (202) 512-7118 or John E. Dicken at (202) 512-7043. Other individuals who made key contributions include JoAnne R. Bailey, Romy Gelb, and Miryam Frieder. Kansas, Louisiana, New York, and Oregon each offered home and community-based services through their state Medicaid plans or HCBS waivers. Kansas and Louisiana had waiting lists that generally made these services unavailable to new clients. Table 3 summarizes the home and community-based services available in the four states we reviewed and whether the states had a waiting list for HCBS waiver services.
As the baby boomers age, spending on long-term care for the elderly could quadruple by 2050. The growing demand for long-term care will put pressure on federal and state budgets because long-term care relies heavily on public financing, particularly Medicaid. Nursing home care traditionally has accounted for most Medicaid long-term care expenditures, but the high costs of such care and the preference of many individuals to stay in their own homes have led states to expand their Medicaid programs to provide coverage for home- and community-based long-term care. GAO found that a Medicaid-eligible elderly individual with the same disabling conditions, care needs, and availability of informal family support could find significant differences in the type and intensity of home and community-based services that would be offered for his or her care. These differences were due in part to the very nature of long-term care needs--which can involve physical or cognitive disabling conditions--and the lack of a consensus as to what services are needed to compensate for these disabilities and what balance should exist between publicly available and family-provided services. The differences in care plans were also due to decisions that states have made in designing their Medicaid long-term care programs and the resources devoted to them. The case managers GAO contacted generally offered care plans that relied on in-home services rather than other residential care settings. However, the in-home services offered varied considerably.
SB/SE was formed to address various issues affecting small business and self-employed taxpayers, such as filing tax returns and paying taxes. SB/SE’s strategic goals include increasing compliance and reducing burden among SB/SE taxpayers. As part of SB/SE, TEC is to use various strategies, including providing education, outreach, assistance, and other services, to support SB/SE taxpayers in understanding and complying with tax laws. IRS created TEC in response to concerns that IRS should better balance such services with its enforcement efforts. In serving taxpayers, TEC is to partner with government agencies, small business groups, tax practitioner groups, and other stakeholders that could advance its education and outreach efforts. To meet an overall goal of increasing voluntary compliance, TEC’s four program goals or priorities are to combat abusive tax schemes, reduce taxpayer burden, promote electronic filing, and negotiate agreements with SB/SE taxpayers on specific ways to voluntarily comply with tax laws. Recent events underscore the importance of human capital management and strategic workforce planning. For example, we designated strategic human capital management as a governmentwide, high-risk area in January 2001, and it was also placed at the top of the President’s Management Agenda in August 2001. In addition, OMB and OPM have made efforts to improve human capital management and strategic workforce planning. The goal of strategic workforce planning is to ensure that the right people with the right skills are in the right place at the right time. Agency approaches to workforce planning can vary with their particular needs and missions. Nevertheless, looking across existing successful public and private organizations, certain critical elements recur as part of a workforce plan and workforce planning process. Although fluid, this process starts with setting a strategic direction that includes program goals and strategies to achieve those goals and flows through the critical elements to evaluating the workforce plan. Figure 1 uses a simple model to show these critical elements and their relationships to the agency’s overall strategic direction and goals. Before developing a workforce plan, an agency first needs to set a strategic direction and program goals. Setting a strategic direction and program goals is part of the general performance management principles that Congress expects agencies to follow under GPRA. A workforce plan should be developed and implemented to help fulfill the strategic direction and program goals. The critical elements of what this plan should include and how it should be developed follow. Involvement of management and employees: Involving various staff (from top to bottom) cuts across the other critical elements. Involving staff in all phases of workforce planning can help improve the quality of the plan because staff are directly involved in daily operations. Further, vetting proposed workforce strategies with management and those most affected by those decisions can build support for the plan and facilitate obtaining the resources needed to implement the plan and meet program goals. Establishing a communication strategy that involves various staff can create shared expectations and a clear reporting process about the workforce plan. Workforce gap analysis: Analyzing whether gaps exist between the current and future workforce needed to meet program goals is critical to ensure proper staffing. 
The workforce plan should assess these gaps, to the extent practical, in a fact-based manner. The absence of fact-based analyses can undermine an agency’s efforts to identify and respond to current and emerging challenges. Thus, the characteristics of the future workforce should be based on the specific skills and numbers of staff that will be needed to handle the expected workload. The analysis of the current workforce should identify how many staff members have those skills and how many are likely to remain with the agency over time, given expected losses due to retirement and other attrition. The workforce gap analyses can help justify budget and staffing requests by connecting the program goals and strategies with the budget and staff resources needed to accomplish them; a simple illustrative calculation of such a gap appears below. Workforce strategies to fill the gaps: Developing strategies to address any identified workforce gaps creates the road map to move from the current to the future workforce needed to achieve the program goals. Strategies can involve how the workforce is acquired, developed and trained, deployed, compensated, motivated, and retained. Agencies need to know their flexibilities and authorities when developing the strategies and to communicate the strategies to all affected parties. Evaluation of and revisions to strategies: Evaluating the results of the workforce strategies and making any needed revisions helps to ensure that the strategies work as intended. A key step is developing performance measures as indicators of success in attaining human capital goals and program goals, both short- and long-term. Periodic measurement and evaluation provide data for identifying shortfalls and opportunities to revise workforce plans as necessary. For example, an evaluation may indicate whether the workforce plan adequately considered barriers to achieving the goals, such as insufficient resources to hire and train the full complement of staff identified as necessary by the workforce gap analysis. Across the critical elements of a workforce plan, data collection and analysis provide fundamental building blocks. Having reliable data is particularly important to doing the workforce gap analysis. Early development of the data provides a baseline by which agencies can identify current workforce problems. Regular updating of the data enables agencies to plan for improvements, manage changes in the programs and workforce, and track the effects of changes on achieving program goals. IRS issued an Internal Revenue Manual (IRM) section for internal review and comment in March 2003 and expects to finalize it in June 2003. The section outlines a strategic workforce planning system and model and discusses the roles and responsibilities of IRS and its divisions in this system. For example, IRS is to be responsible for developing the strategic workforce plan across IRS and for analyzing current and future workforce needs. The divisions are to be responsible for providing requested data to IRS’s workforce planning office and for translating the IRS-wide plan into their operations. Thus, a strategic workforce plan for a unit within a division could be developed by IRS, the division, or the unit. If developed by the division or unit, the workforce plan is to be consistent with IRS-wide strategic and workforce plans. Our objective was to determine whether TEC has a workforce plan that conforms to the critical elements for what should be in a plan and how it should be developed and implemented. 
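The gap analysis described above reduces, at its simplest, to projecting the current workforce forward under expected attrition and comparing the result with the workforce needed to meet program goals. The minimal sketch below illustrates that calculation; the attrition rate, time horizon, and future staffing need are hypothetical assumptions, not TEC data (only the 1,209-position figure comes from TEC's initial staffing plan).

# Minimal sketch of a workforce gap calculation; all inputs other than the
# 1,209 initial TEC positions are hypothetical assumptions for illustration.
def workforce_gap(current_staff, annual_attrition_rate, years, future_need):
    """Project staff remaining after attrition; a positive result is a shortfall."""
    projected = current_staff * (1 - annual_attrition_rate) ** years
    return future_need - projected

# Hypothetical example: 6 percent annual attrition over 3 years against an
# assumed future need of 1,300 staff with the right skills.
gap = workforce_gap(current_staff=1209, annual_attrition_rate=0.06,
                    years=3, future_need=1300)
print(f"Projected shortfall: about {gap:.0f} positions")  # about 296 positions

A real analysis would, as the text notes, break the workforce down by skill and location rather than treating staff as interchangeable, but the same compare-projection-to-need logic applies within each category.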
To meet this objective, we reviewed human capital literature--including OPM’s Human Capital Assessment and Accountability Framework--as well as workforce planning models at OPM, OMB, and IRS, among others; reviewed TIGTA and GAO reports on human capital and workforce planning; reviewed IRS and SB/SE documents on their strategic program plans, the plan that guided TEC’s creation and initial staffing, and the annual TEC staffing plan, as well as IRS’s draft IRM section on strategic planning and workforce analyses (section 6.251) as of March 2003; and interviewed SB/SE and TEC officials on their goals, strategies, and staffing plans, as well as IRS and SB/SE Workforce Council officials to determine their purposes, activities, time lines, and challenges. We conducted our work at IRS and SB/SE headquarters from February 2003 through April 2003 in accordance with generally accepted government auditing standards. We did not attempt to analyze the adequacy of any analyses done to develop a workforce plan for TEC or the program goals and strategies. The Commissioner of IRS provided comments on a draft of this report, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. Since its inception in October 2000, TEC has operated with short-term staffing plans that do not meet the critical elements of what a strategic workforce plan should include and how it should be developed. IRS and SB/SE are taking steps to develop a strategic workforce plan that will include TEC. However, questions remain about how the critical elements will be developed and implemented for TEC. TEC does not have a strategic workforce plan that includes the critical elements, such as analyses of workforce gaps and strategies. Without such a workforce plan, TEC has less assurance that it has the necessary workforce to meet its current program goals and to manage changes in its programs and goals. IRS and SB/SE officials said that TEC does not have a strategic workforce plan because of the effort involved in creating the division and its units, such as TEC, to meet SB/SE taxpayer needs. These officials said this effort has been a significant undertaking, which delayed workforce planning. SB/SE officials also said that they needed to have some experience with TEC as a new unit, and some data on its new workforce, before developing a strategic workforce plan for TEC. Since its inception, TEC has operated under two types of staffing plans that did not use the critical elements of a workforce plan. One plan was developed prior to TEC’s creation in October 2000 to guide the hiring and allocation of 1,209 full-time positions for TEC. The other plan annually allocates the number of TEC staff to its various locations, functions (e.g., partnership outreach or marketing service), and four priorities (e.g., combating abusive tax schemes and promoting electronic filing). Although both plans reflect analyses of the number of TEC staff by location, these plans did not address what a TEC workforce plan should include under the critical elements. For example, the plans did not identify any gaps in the workforce needed, any strategies to fill the gaps, or any measures for evaluation purposes. Recognizing the need for workforce planning, both IRS and SB/SE are developing strategic workforce plans and a planning process for TEC and other IRS entities that broadly reflect the critical elements. 
However, questions remain because of the lack of details on how any workforce plan for TEC will address the critical elements. IRS and SB/SE each convened workforce planning councils, consisting of executives and human capital managers, to oversee the development of a strategic workforce plan that would include TEC. IRS started its council in fall 2001 at the direction of the IRS commissioner. SB/SE started its council in February 2003 to create a more detailed workforce plan for TEC and its other units than would be provided in the IRS-wide plan. Our review of IRS and SB/SE documents showed that they both intend to use the critical elements of strategic workforce planning. These documents include models and discussion that reference the critical elements. For example, these models refer to elements such as analyzing the gap in the workforce and developing strategies to reduce the workforce gap. Although IRS and SB/SE are taking steps to develop a strategic workforce plan for TEC, these steps have not yet produced enough details to specify how the critical elements will be developed and implemented for TEC. IRS and SB/SE officials said that they recognize the need to further define how the strategic workforce plan will be developed and implemented over time. For example, the degree to which top management and employees will be involved in developing and implementing the workforce plan for TEC is not yet clear. The draft IRM section refers to their involvement but does not provide details on the extent and nature of that involvement. As for identifying any workforce gaps at TEC, it is not clear what analyses will be done. As of April 2003, neither IRS nor SB/SE had analyzed the type of TEC workforce needed in the future to meet program goals or the skills of the current TEC workforce. Both types of analyses are needed to determine the gap between the current TEC workforce and the workforce needed in the future. Nor is it clear how and when these analyses will be done. SB/SE officials said that, given resource limitations, they have not done the necessary workforce analyses for TEC or developed an implementation schedule for when the analyses would be done. The analyses that IRS and SB/SE had done as of April 2003 dealt with other workforce issues. While useful, the analyses do not address the TEC workforce gap in terms of the skills needed now or in the future to meet program goals, particularly newer ones such as promoting electronic filing or negotiating voluntary compliance agreements. For example, IRS has analyzed 12 mission-critical positions in terms of potential losses (e.g., retirement) from the current number of positions. These analyses have not focused on TEC because the analyses, as well as the eventual IRS-wide workforce plan, are intended to be done at a high level with minimal references to TEC. SB/SE asked officials in TEC and its other units in February 2003 to use a checklist to self-assess their current workforce and planning capabilities against OPM criteria. SB/SE has not indicated how it will verify and use the subjective check marks made by the officials to determine workforce gaps in TEC, particularly in skills needed. No analyses have been provided to justify plans for fiscal year 2004 to hire 250 additional staff in TEC to combat abusive tax schemes and not to hire any additional staff to address the three other TEC goals. IRS and SB/SE workforce officials had told us that the 250 staff estimate came from the budget and finance staff in SB/SE. 
In a subsequent meeting during May 2003, TEC and SB/SE officials said that IRS had decided against any staff expansion in TEC due to other budget considerations. Finishing the analyses of TEC workforce gaps is important for the rest of the workforce plan. The other two critical elements, involving strategies and evaluation, cannot be finished until IRS and SB/SE know the specific needs of the current and future TEC workforces. As IRS and SB/SE officials develop and implement a workforce plan for TEC, major challenges are likely to arise. For example, these officials cited the challenge of balancing daily operational demands with the capacity to forecast workforce needs in terms of staff numbers, skills, and locations. Another challenge is gathering reliable data on the attrition, retirement, and skills of the current workforce to do the analyses that are critical to workforce planning. IRS and SB/SE officials also pointed to budget fluctuations that could limit their strategies to close gaps in the workforce needed by TEC over time. For example, the budget may be insufficient to replace losses of TEC workforce skills due to retirement. Finally, they said that if the workforce plan could adversely affect current TEC employees, dealing with employee unions to address the concerns could be a challenge. We have reported on these and other challenges that any agency faces in doing successful workforce planning. As discussed in our previous reports, and echoed by OPM and OMB guidance, a strategic workforce plan enables an agency to identify gaps between its current and future needs, select strategies to fill the gaps, and evaluate the success of the plan to make revisions that may be needed to better meet program goals. Such a workforce plan does not yet exist for TEC. Without such a plan, TEC is less likely to have the right number of staff with the right skills in the right places at the right time to address its priorities. Further, it is difficult to justify budget and staffing requests if the workforce needs are not known. IRS and SB/SE have started taking steps to develop a strategic workforce plan for TEC based on the critical elements reflected in OPM and OMB guidelines and in our guidelines for what a plan is to include and how it is to be developed and implemented. However, IRS and SB/SE have not yet provided many details on how the plan for TEC will incorporate the elements. Without these details, we cannot be certain that the critical elements will be used and will contribute to the program goals. Given the uncertainty about how the workforce plan for TEC will be developed and implemented, we recommend that the Commissioner of Internal Revenue ensure that the workforce plan for TEC be developed in conformance with the critical elements for what a plan should include and how a plan should be developed and implemented. We requested comments on a draft of this report from IRS. The Commissioner of Internal Revenue provided written comments in a letter dated May 28, 2003. (See appendix I.) These comments neither explicitly agreed nor disagreed with our recommendation to ensure that a workforce plan for TEC is developed in conformance with the critical elements of what a plan should include and how it should be developed and implemented. The Commissioner did say that IRS strongly endorses the development of a strategic workforce plan and that IRS has made progress on this effort, listing eight steps that have been taken. 
The Commissioner also said that the steps were a set of integrated strategies that reflect IRS’s commitment to improve its workforce planning efforts and that they addressed the issues raised in our report. To the extent that IRS told us how these steps contributed to a workforce plan for TEC, our report discusses them in describing IRS’s efforts to create such a plan using the critical elements. Although we believe that these steps are useful, we made our recommendation because we did not see enough details to be assured that a workforce plan for TEC would be sufficiently developed and implemented in accordance with the critical elements. We are encouraged that IRS strongly endorses development of a strategic workforce plan. We look forward to seeing a workforce plan for TEC. As we agreed with your staff, unless you publicly release the contents of this report earlier, we will not distribute it until 30 days after its issue date. At that time, we will send copies of this report to the Ranking Minority Member of the Senate Committee on Small Business and Entrepreneurship. We will also send copies to the Commissioner of Internal Revenue and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Thomas Short, Assistant Director. Other major contributors include Catherine Myrick and Grace Coleman. If you have any questions or would like additional information, please contact me at (202) 512-9110 or brostekm@gao.gov, or Thomas Short at (202) 512-9110 or shortt@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Strategic workforce planning helps ensure that agencies have the right people with the right skills in the right positions to carry out the agency mission both now and in the future. The Internal Revenue Service's (IRS) Taxpayer Education and Communication (TEC) unit within its Small Business and Self-Employed Division assists some 45 million small business and self-employed taxpayers. Given the number of taxpayers it is to assist and changes in its priorities and strategies, GAO was asked to determine whether TEC has a workforce plan that conforms to critical elements for what should be in a plan and how it should be developed and implemented. Although it has existed for more than 2-and-a-half years, TEC does not have a strategic workforce plan that includes certain critical elements. For example, it has not identified gaps between the number, skills, and locations of its current workforce and the workforce it will need in the future, nor the strategies to fill any gaps. Such a workforce plan for TEC could be developed by IRS, the Small Business and Self-Employed Division, and/or TEC. Small Business and Self-Employed Division officials said that TEC does not have a strategic workforce plan because they focused on creating the division and units such as TEC to begin addressing taxpayer needs, and because they first wanted to gain some experience with TEC as a new unit. IRS and the Small Business and Self-Employed Division are creating a process for developing a workforce plan for TEC that in broad terms would incorporate the critical elements common to workforce planning. However, it is not yet clear whether the workforce plan for TEC will be developed and implemented consistent with these critical elements. For example, IRS and the Small Business and Self-Employed Division have not analyzed the skills that the TEC workforce will need to meet its program goals or outlined the process and data to be used to do these analyses.
CPSA created CPSC to regulate consumer products and address those that pose an unreasonable risk of injury; assist consumers in evaluating the comparative safety of consumer products; and promote research and investigation into the causes and prevention of product-related deaths, injuries, and illnesses. CPSC’s jurisdiction is broad, covering thousands of types of consumer products used in and around the home and in sports, recreation, and schools. Some consumer products are regulated—that is, subject to mandatory standards governing performance or labeling requirements established by CPSC through regulations. In contrast, many consumer products under CPSC’s jurisdiction are subject to voluntary standards, which are generally determined by standard-setting organizations with input from government representatives and industry groups. Unregulated products are those not subject to mandatory standards and may include those covered by voluntary standards. The 1981 amendments to CPSA required CPSC to defer to a voluntary standard rather than promulgate a mandatory standard through rulemaking if CPSC determines that (1) the voluntary standard adequately addresses the hazard and (2) there is likely to be substantial compliance with the voluntary standard. To address product hazards, CPSC may attend the meetings of standard-setting organizations and contribute relevant hazard data to assist in the development of voluntary standards, but staff are not permitted to vote on the standards or hold leadership positions. CPSC has broad authority to identify, assess, and address hazards associated with consumer products under the following laws: the Consumer Product Safety Act (CPSA), which consolidated federal safety regulatory activity relating to consumer products within CPSC; the Consumer Product Safety Improvement Act (CPSIA) of 2008, which amended CPSA to, among other things, expand CPSC’s authorities to address consumer product safety risks and direct the agency to develop a risk assessment methodology to identify hazardous imports; the Flammable Fabrics Act, which, among other things, authorizes CPSC to prescribe flammability standards for clothing, upholstery, and other fabrics; and the Federal Hazardous Substances Act, which establishes the framework for the regulation of substances that are toxic, corrosive, combustible, or otherwise hazardous. Other laws provide CPSC with authorities to prescribe performance standards for specific consumer products. In addition, CPSIA required CPSC to promulgate mandatory standards for durable infant and toddler products—such as cribs and strollers—through rulemaking in accordance with section 553 of the Administrative Procedure Act (APA), rather than the rulemaking procedures required by CPSA. Section 553 of the APA governs “informal” or “notice and comment” rulemaking procedures for federal agencies and, according to CPSC officials, does not impose the cost-benefit requirements specified in the rulemaking procedures in CPSA. When addressing a consumer product hazard, CPSC generally assesses whether it is known, new, or emerging. New or emerging hazards may be associated with either a new or an existing product. For example, a new hazard could present itself in the form of new materials used to manufacture an existing product. 
CPSC’s Emerging Hazards Team—composed of statisticians—is responsible for reviewing incident reports to identify new and emerging product-associated hazards, performing product safety assessments, and directing new reports to appropriate Integrated Product Teams. The Emerging Hazards Team’s review is one of CPSC’s first steps in identifying the nature of a hazard. According to CPSC staff, the Emerging Hazards Team reviews all reports of incidents stemming from consumer products on a daily basis, including those stored in CPSC’s data management system, to identify trends and patterns. Integrated Product Teams are composed of subject-matter experts from a number of offices within CPSC and are organized by type of hazard. The teams are responsible for a variety of risk-related activities, including reviewing incident reports, requesting investigations, recommending new activities to management as needed, and monitoring follow-up status on corrective actions and the status of projects for standard development. To monitor compliance with standards, CPSC compliance staff conduct searches of the Internet and monitor online retailers. CPSC also monitors risks through agreements with other federal and state agencies to conduct research. For example, CPSC has a joint agreement with the Environmental Protection Agency (EPA) to research the health effects of nanotechnology in consumer products. In addition, CPSC staff attend trade shows to identify possible products of interest and exchange information about consumer products with a number of other federal agencies, including the National Institutes of Health and the Centers for Disease Control and Prevention. CPSIA mandated that CPSC, in tandem with Customs and Border Protection (CBP), develop a risk assessment methodology to identify products intended for import into the United States that are likely to violate consumer product safety laws enforced by CPSC. In response, the agencies developed an import surveillance data system, known as the Risk Assessment Methodology (RAM), and began to pilot it in 2011. The purpose of RAM is to evaluate products entering the United States based on criteria designed to identify imports with the highest risk to consumers. The criteria are determined through CPSC’s analysis of its historical data on consumer product risks and CBP’s advance shipment data; a simplified sketch of this kind of risk scoring appears at the end of this section. Currently, CPSC staff have access to CBP data systems and request data extracts, as necessary. CPSC generally evaluates consumer products to determine whether they present risks to consumers and how those risks should be addressed, such as through a voluntary standard, a consumer product safety standard, or a ban by regulation to prevent or reduce an unreasonable risk. According to CPSC, it uses a multifaceted approach to reduce the risk of injury to consumers that is intended to address both immediate and future problems stemming from the risk. CPSC’s actions to address and reduce the risk of injury to consumers include the following: Compliance—CPSC conducts voluntary and mandatory recalls, enforces existing regulations by seeking civil and criminal penalties, and seeks injunctive relief against prohibited acts. Standards and Rulemaking—As previously discussed, CPSC participates in the voluntary standards process and develops mandatory safety standards and product bans through rulemaking. Public education—CPSC notifies the public of safety hazards and educates them about safe practices. 
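To illustrate how historical hazard data and advance shipment data might be combined into a single risk score, the minimal sketch below assigns weights to a few simple indicators. The fields, weights, and threshold are hypothetical assumptions invented for illustration; they are not CPSC's or CBP's actual RAM criteria, which this report does not detail.

# Hypothetical sketch of RAM-style import risk scoring; the indicators and
# weights below are assumptions for illustration, not actual CPSC/CBP criteria.
def risk_score(shipment, recalled_product_types):
    """Sum the weights of the risk indicators that a shipment triggers (0 to 1)."""
    indicators = {
        "product_type_previously_recalled":
            (shipment["product_type"] in recalled_product_types, 0.5),
        "importer_has_little_history":
            (shipment["prior_shipments_by_importer"] < 5, 0.2),
        "subject_to_mandatory_standard":
            (shipment["subject_to_mandatory_standard"], 0.3),
    }
    return sum(weight for triggered, weight in indicators.values() if triggered)

shipment = {"product_type": "high-powered magnet set",
            "prior_shipments_by_importer": 2,
            "subject_to_mandatory_standard": True}
score = risk_score(shipment, recalled_product_types={"high-powered magnet set"})
if score >= 0.7:  # hypothetical inspection threshold
    print(f"score {score:.1f}: refer shipment for port inspection")

In practice, as the report notes, such criteria would be derived from CPSC's historical incident and recall data and CBP's advance shipment data rather than fixed by hand.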
Certain aspects of CPSC’s authorities, as well as other factors, affect how quickly CPSC responds to new and emerging hazards, including (1) compliance actions involving litigation, (2) reliance on voluntary standards, (3) rulemaking procedures, and (4) information-sharing restrictions. In addition, CPSC commissioners (former and current), consumer groups, and industry representatives have stated that limited resources can prolong the time it takes CPSC to respond to new and emerging hazards. CPSC can take a number of compliance actions to address unsafe products, including conducting voluntary recalls, mandatory recalls, and mandatory bans. Generally, CPSC negotiates the terms of voluntary recalls with manufacturers of products that have been identified as hazardous or in violation of voluntary or mandatory standards. The company submits its corrective action plan to CPSC indicating how it plans to repair, refund, or replace the product. CPSC then reviews and, if necessary, negotiates the terms of the manufacturer’s proposed corrective action plan before approving it. According to CPSC staff and one commissioner we interviewed, because both CPSC and the manufacturer are seeking acceptable terms, the negotiations involved in the voluntary recall process could add time to CPSC’s response to a new or emerging product hazard. If CPSC and a manufacturer are unable to reach agreement on how to address a consumer product safety hazard through the voluntary recall process, the agency may pursue compliance actions through administrative hearings or in district court that could result in a number of remedies, including mandatory recalls and bans, and further increase the agency’s response time. While mandatory recalls, once imposed, may be used to remove hazardous products from the marketplace, according to CPSC officials, litigating such an action may lead to lengthy delays. For this reason, CPSC officials said that the agency typically pursues compliance actions that involve litigation as a last resort because such actions generally require additional time and resources. For example, in 2009, CPSC staff began learning about incidents of toddlers and young children ingesting small, loose, high-powered, rare earth magnets that were marketed to consumers aged 13 and older. In 2010, CPSC worked to obtain agreements with a number of retailers to voluntarily stop selling the product. After the agency continued to receive reports about ingestions and injuries, it issued a public safety alert in 2011. However, in July 2012, CPSC announced that attempts to negotiate a voluntary recall with one of the manufacturers of the high-powered magnets had failed. CPSC concluded that product warning labels and public education efforts were ineffective and could not prevent further injuries and incidents. As a result, CPSC staff filed an administrative complaint—the second in 11 years for any product, according to CPSC officials—which the commissioners approved, seeking a determination that the product constituted a substantial product hazard and that the firm stop selling the product and offer consumers a full refund. The manufacturer refused to submit to the conditions, and litigation continued for almost 2 years before CPSC reached a settlement in May 2014, ordering the former owner of the company to fund a trust to refund consumers. Officials with whom we spoke stated that CPSC may also address new and emerging risks through its imminent hazard authority. 
Specifically, if a consumer product is “imminently hazardous”—defined as a consumer product that presents an imminent and unreasonable risk of death, serious illness, or severe personal injury—CPSC may file an action in U.S. district court. If the court declares the product to be imminently hazardous, it may grant temporary or permanent relief—such as seizure of the product, recall, or public notice of the risk—to protect the public from the hazard. According to CPSC officials, the time needed to file the required legal actions and work through the courts using the imminent hazard authority prolongs the time CPSC takes to respond to new and emerging risks. Further, they noted that the legal standard for proving that a product is an imminent hazard requires extensive data analysis and is difficult to meet in court. CPSC officials said that the agency attempted to use its imminent hazard authority one time, in 1986, to address hazards related to lawn darts, but was unsuccessful. In 1970, prior to the creation of CPSC, FDA, under its authority to administer the FHSA, issued a regulation banning lawn darts, with an exemption for darts that were not intended for use as toys and were marketed solely for adults. CPSC later assumed responsibility for administering the FHSA and continued to enforce FDA’s regulations regarding the ban on lawn darts as well as the exemption from that ban. After receiving several reports that lawn darts were being sold in toy stores, CPSC inspected nearly 200 retailers throughout the United States between 1984 and 1987 and found numerous violations of both the labeling and marketing requirements for lawn darts. In 1987, CPSC staff met with manufacturers of lawn darts and discussed several voluntary actions that could be taken to ensure firms’ compliance with the exemption from the ban, including making the warning label more conspicuous. Despite these efforts, continued reports of fatalities due to injuries involving lawn darts, according to CPSC, led Congress to question the adequacy of the ban. In an effort to prevent further injuries and fatalities, in 1988 CPSC issued a ban, through its rulemaking authority, on the sale of all lawn darts. Subsequently, CPSC received new reports of injury, and in March 2012 the agency issued a safety alert to reiterate the ban. As previously discussed, consumer product safety laws require CPSC to rely on voluntary standards if it determines that (1) compliance with a voluntary standard would eliminate or adequately reduce the risk of injury identified and (2) there is likely to be substantial compliance with the voluntary standard. In addition, the agency may address the risk presented by unregulated products—that is, products not subject to mandatory standards—by recommending revisions to voluntary standards. However, if a voluntary standard does not address the particular defect or hazard being examined, the process of taking a corrective action to address the hazard is prolonged. In some instances, CPSC may find that a product meets a voluntary standard but still has a defect that creates a serious risk of injury or death, while the manufacturer may disagree. Because standards are voluntary, CPSC cannot legally compel a manufacturer to comply with a voluntary standard or take action against it for noncompliance. According to CPSC officials and staff, the nature of voluntary standards may extend the amount of time the agency takes to properly address new and emerging risks in consumer products. 
CPSC does not control the voluntary standards development process, and the laws do not establish a time frame within which standards-development organizations must finalize a voluntary standard. As a result, the voluntary standards development process can, in some instances, last for prolonged periods of time. For example, CPSC has worked with the window-covering industry since 1994 to develop voluntary standards to address strangulation hazards stemming from window blind cords, but conflicting consumer and industry goals have prolonged the process. The first voluntary standard to address this hazard was developed in 1996 and has been revised at least six times. However, some consumer groups argue that none of the revisions include designs aimed at eliminating the strangulation risk. Between 2007 and 2011, CPSC negotiated with 38 individual companies to voluntarily recall hazardous window blinds and issued multiple consumer safety alerts about hazards related to window blind cords. Consumer groups have asked standard-setting organizations to consider technologies, such as cordless window coverings, that would eliminate window cord-related hazards. Some manufacturers have said that while cordless window blinds would eliminate the hazard, a voluntary standard asking manufacturers to produce such window coverings would be too costly for some firms and could create a product that would be unaffordable for some consumers. In 2011, a coalition of consumer groups announced that it had withdrawn from the voluntary standard development process because the process lacked transparency and because the resulting revisions to the standard still did not consider existing technologies that could eliminate strangulation hazards from accessible cords. In May 2013, the coalition petitioned CPSC to promulgate a mandatory standard because, according to the petition, the voluntary standards process had failed to develop a standard that eliminated or significantly reduced the strangulation risk. As of September 2014, CPSC continued its efforts to work with standard-setting organizations to develop a new voluntary standard. CPSC indicated in its proposed budget request for fiscal year 2015 that staff planned to include a response to the consumer groups’ petition for a mandatory standard for window coverings in a briefing package to be considered by the commission. CPSC’s rulemaking procedures, as outlined in CPSA, often lengthen the time the agency takes to respond to new and emerging risks. According to CPSC officials, the time required for mandatory standard rulemaking varies depending on multiple factors, including the complexity of the problem to be addressed; the volume of public comments responding to a proposed rule; time constraints imposed by other federal statutes, executive orders, or other administrative obligations; agency resources; and competing agency priorities. Under CPSA, CPSC shall not promulgate a rule, including a mandatory consumer product safety standard, unless it finds that all of the following conditions exist: The rule is in the public interest. The rule is reasonably necessary to eliminate or reduce an unreasonable risk of injury associated with the product. If a voluntary standard exists, compliance with that voluntary standard is not likely to eliminate or adequately reduce the risk of injury, or it is unlikely that there will be substantial compliance with the voluntary standard. The rule’s expected benefits bear a reasonable relationship to its costs. 
(5) the rule imposes the least burdensome requirement that prevents or adequately reduces the risk of injury at issue. Additionally, any final rule must include a regulatory analysis describing (1) the potential benefits and costs of the rule; (2) any alternatives to the rule that CPSC considered, as well as the costs and benefits of those alternatives and why they were not chosen; and (3) significant issues raised by public comments submitted in response to the preliminary regulatory analysis and CPSC’s assessment of the issues. Some CPSC officials said that the required cost-benefit analysis is lengthy and resource intensive. These officials stated that exploring possible alternatives to a new consumer product standard and completing the corresponding cost-benefit analysis proved to be time-consuming. For example, CPSC has been considering a mandatory rule to address the risk of fire associated with ignitions of upholstered furniture since 1994. However, action has yet to be taken because, according to one commissioner, demonstrating the efficacy of the risk-reduction alternatives is difficult. The commissioner cited the upholstered furniture effort in particular because options to address the hazard include manufacturers’ use of flame retardant chemicals, which some scientific studies have indicated could cause cancer in humans. Specific sections of CPSA restrict CPSC’s ability to disclose certain information about potential product hazards, which in turn may impact CPSC’s ability to notify the public about new and emerging risks and prolong the time it takes for CPSC to respond to them. Section 6(b) of CPSA generally prohibits CPSC from publicly disclosing information that would readily identify a product manufacturer unless CPSC first takes reasonable steps to ensure that the information is accurate and that the disclosure is fair in the circumstances and reasonably related to carrying out CPSC’s purposes under its jurisdiction. Before publicly disclosing information, CPSC is required to provide the manufacturer advance notice and an opportunity to comment on the accuracy of the information. If CPSC decides to disclose information that the manufacturer claims to be inaccurate, it generally must provide 5 days’ advance notice of the disclosure, and the manufacturer may bring suit to prevent the disclosure. Some consumer representatives and CPSC officials we interviewed said that these confidentiality requirements in CPSA may prolong the time it takes to get hazardous products out of consumers’ homes because CPSC is prohibited from releasing the name of the product or manufacturer until it has followed the 6(b) procedures or until the manufacturer has waived any objections to the information’s release. In a prior report, we concluded that CPSC has been unable to complete certain information-sharing agreements with foreign counterparts because it cannot offer them reciprocal terms on disclosure of nonpublic information (15 U.S.C. § 2078(e)). We also reported that CPSC’s inability to establish information-sharing agreements with its foreign counterparts may hinder the agency’s ability to respond to a potential hazard in a timely manner because of the delay that might occur between when a foreign counterpart decides to take action in response to a product hazard and when that action becomes public.
In that report, we also concluded that, to better enable CPSC to target unsafe consumer products, Congress may wish to amend section 29(f) of CPSA to allow CPSC greater ability to enter into information-sharing agreements with its foreign counterparts that permit reciprocal terms on disclosure of nonpublic information. We concluded that this restriction on sharing information may hinder CPSC’s ability to identify risks from new products in a timely manner, possibly leading to injury and death if unsafe products enter the U.S. market. As of September 2014, there have been no changes to section 29(f) of CPSA. In addition to the factors previously discussed, CPSC’s ability to respond to new and emerging risks in a timely manner depends on the resources required to understand the nature of and address the specific product hazard. For example, the simplest risk assessments, such as lead testing, may require few resources to complete. However, assessing complex products, such as those involving phthalates, may require additional time, staff, and laboratory resources because the agency may need to develop new standards or consult outside scientific expertise in areas such as toxicology and epidemiology. CPSC data indicate that the dollar value of U.S. imports under the agency’s jurisdiction increased by about 10 percent in two years—from about $637 billion in calendar year 2010 to $706.6 billion in calendar year 2012. In addition, CPSC’s full-time-equivalent staff generally decreased between fiscal years 2000 and 2008—from 492 to 396—until fiscal year 2009, when CPSC saw an increase in staff, to 435. As of September 2014, CPSC has 528 full-time-equivalent staff, which is 41 percent less than the 890 full-time-equivalent staff the agency had in 1975 (a brief arithmetic check of these percentages appears at the end of this discussion). Further, many of the CPSC commissioners, consumer groups, and industry representatives we spoke with stated that CPSC currently lacks the staff, laboratory resources, and related funding needed to conduct risk assessments more efficiently than it currently does. According to these sources, CPSC’s lack of sufficient staff with scientific expertise could also prolong the time the agency takes to assess product hazards and ultimately address new and emerging risks. We reported in 2012 that CPSC has taken steps to improve its responsiveness through better technologies for identifying risks, more targeted surveillance of imported products, and a program for manufacturers to streamline the process for conducting recalls. According to CPSC, in fiscal year 2013, the pilot RAM helped port investigators to identify and prevent more than 12.5 million units of violative imports from entering the U.S. stream of commerce. In addition, since our December 2012 report on CPSC’s risk assessment activities, CPSC has stationed an additional full-time investigator at another port, for a total of 21 investigators at 16 of the 327 ports of entry. However, CPSC has reported that increased resources would help to expand these efforts. For example, according to CPSC, the pilot RAM program is currently focused on import surveillance and compliance, but the fully developed program would emphasize prevention and programs that provide incentives for importers to implement preventive actions to improve product safety and better ensure legal and regulatory compliance.
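As a quick check of the percentage figures cited above, the short Python sketch below recomputes them from the dollar and staffing numbers quoted in this report. It is purely illustrative arithmetic; the variable names and the rounding commentary are ours, not CPSC's or GAO's.

    # Illustrative recomputation of the percentage changes quoted above.
    # Import values are in billions of dollars; staffing is in full-time equivalents.
    def percent_change(old, new):
        """Return the percentage change from old to new."""
        return (new - old) / old * 100

    imports_2010, imports_2012 = 637.0, 706.6  # U.S. imports under CPSC jurisdiction
    fte_1975, fte_2014 = 890, 528              # CPSC full-time-equivalent staff

    print(f"Import growth, 2010 to 2012: {percent_change(imports_2010, imports_2012):.1f}%")
    # Prints about 10.9 percent, which the report rounds to about 10 percent.
    print(f"Staff decline since 1975: {-percent_change(fte_1975, fte_2014):.1f}%")
    # Prints about 40.7 percent, consistent with the reported 41 percent reduction.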
CPSC requested additional funding in its fiscal year 2015 congressional budget request to expand the RAM and reports that additional funding would increase its capacity for laboratory sample testing and software acquisition for the RAM. Over the years, stakeholders and observers have proposed various options to shorten the time CPSC takes to respond to new and emerging hazards. Some options, such as expanding CPSC’s use of regulatory approaches to prevent product safety hazards, may pose more challenges to implementation than others, such as enhancing CPSC’s resources to address product hazards, which could be achieved within the existing regulatory framework. We asked a range of consumer and industry representatives about the viability of these regulatory options as well as others that could be used to improve CPSC’s timeliness. Some options have the potential to prevent hazardous products from entering the market, and some could reduce the number of deaths and injuries from these products. But each also involves trade-offs that should be considered. These options and their trade-offs are summarized in figure 1. According to CPSC, preventing hazards from entering the marketplace is one of the most effective ways the agency can protect consumers. CPSC reports that many consumer product hazards and safety defects arise in the very early stages of the supply chain, including product design and the selection and use of raw materials. As discussed earlier, CPSC addresses product hazards after the product has entered the market and after a specific product hazard has been identified. According to CPSC, given the large volume and diversity of products under CPSC’s jurisdiction, recalls and bans alone may not prevent product hazards from occurring. CPSIA mandated a preventive approach to the development and marketing of certain juvenile products by requiring that manufacturers of such products, prior to importing or distributing them for sale, submit samples of their product for third-party testing and certify that, based on such testing, their product complies with applicable safety standards. The precautionary principle approach is a preventive framework for guiding decision making that is used in some policy areas in the United States, such as the regulation of environmental policy and drugs, and more broadly in other countries, such as the European Union. The precautionary principle approach to regulation generally specifies that when a product, technology, or activity presents the possibility of severe or irreversible risk to human health or the environment, precautionary measures should be taken to reduce or eliminate the risk, even if the cause and effect are not fully understood. For example, Sweden has taken an approach to reducing the occurrence of pharmaceuticals in drinking water that is based on the precautionary principle. Sweden has taken steps, such as encouraging that prescriptions be written in smaller amounts to limit the quantity of unused pharmaceuticals patients dispose of, even though there is no scientific evidence that the occurrence of pharmaceuticals in the environment is affecting human health. According to the precautionary principle, action should be taken preventively because definitive knowledge about causation might take decades of further research. In the United States, a consumer product is generally allowed on the market unless sufficient evidence can be presented to demonstrate that the product is unsafe.
Under a precautionary principle approach, a product would generally not be allowed on the market unless sufficient evidence could be presented to demonstrate that it is safe. Further, the precautionary principle approach generally places the burden of proving the safety of a new product, technology, or activity on its proponents—such as manufacturers and distributors—rather than on the regulator. Premarket approval is an application of the precautionary principle in which certain products must be tested and approved before they can be marketed to consumers. In the United States, regulatory agencies such as FDA and EPA require that some products undergo an approval process. As part of the premarket approval process, regulators establish specific safety standards that products must meet before being approved for marketing to consumers. As in the precautionary principle approach, manufacturers generally bear the burden of demonstrating—with reasonable certainty and through sufficient scientific data and other requirements—that the product will not harm consumers. Regulators, then, must evaluate the data as part of the product approval process. A few consumer representatives we interviewed said that a broad application of the precautionary principle approach to regulating consumer goods could decrease the number of products that come to market with unknown safety hazards and that, as a result, injuries and deaths from hazardous products could decrease. However, the majority of the industry and consumer representatives we spoke with did not believe it was wise for CPSC to fully implement a precautionary principle approach. Representatives discussed various challenges the approach could present. For example, one representative said the approach would require manufacturers to incorporate into a product’s design a means of addressing all of the ways that consumers could potentially be harmed by using the product; in practice, the representative noted, manufacturers improve upon a product’s safety by learning how consumers actually use it. In addition, an industry representative noted that manufacturers could face significant additional costs from requirements to design and conduct an unknown number of new safety tests applicable to each new product. Several representatives said that, given the vast number of products under CPSC’s jurisdiction, the agency would need a significant increase in staff and budget to evaluate risk assessment data for all new consumer products. A commissioner and an industry representative also expressed concern about underlying assumptions that the approach would result in improvements over the current regulatory scheme. For example, both noted that because CPSC’s actions are already driven by data, a precautionary principle approach may not necessarily result in better outcomes. In addition, CPSC officials said that it would be unrealistic for the agency to implement a premarket approval process for all consumer goods given the vast number of products under CPSC’s jurisdiction. However, officials noted that a focused application of premarket approval on a specific product line, such as cribs, could be an acceptable approach.
An industry representative, a consumer representative, and a commissioner commented that certain children’s products regulated by CPSC are already subject to a process similar to premarket approval because of CPSIA’s requirement that such products pass third-party testing for compliance with applicable safety standards before they can be marketed to consumers. Two consumer representatives and a consumer safety expert we interviewed said that implementing a premarket approval process for all consumer products could, in theory, prevent hazardous products from entering the market and potentially reduce related injuries and deaths. However, most representatives we interviewed agreed that implementing premarket approval for all consumer products would not enable CPSC to respond to new and emerging hazards faster than it currently does. Some representatives said that such an approach could, among other things, increase (1) the time CPSC takes to respond to product risks, (2) the agency’s costs, and (3) the time for new products to come to market. In addition, one commissioner said that CPSC lacks the laboratory testing capacity to effectively implement a full premarket approval process. An industry representative also noted that a full premarket approval process would require CPSC to develop standards for testing all products, which would not be consistent with the current regulatory framework requiring CPSC to rely upon voluntary standards. CPSC officials said that the nature of CPSC’s work already incorporates some preventive measures because the agency relies upon an array of data and scientific analyses to determine the best way to reduce or prevent consumer safety risks. One industry representative noted that CPSC used a preventive approach to address risks stemming from the use of phthalates, as required by statute, much as the precautionary principle prescribes. Specifically, in 1998, CPSC asked manufacturers to remove specific types of phthalates from certain juvenile products, such as soft rattles and teething rings, after its study identified areas of scientific uncertainty about negative effects of the chemical on human health. In 2008, Congress passed a ban on certain phthalates in children’s toys until CPSC could convene an independent advisory panel to study the chemicals’ effects. The study, published in July 2014, concluded that additional phthalates should also be permanently banned and that others should be subject to further risk assessment. As previously discussed, CPSIA mandated that CPSC promulgate mandatory standards for durable infant and toddler products using rulemaking procedures in the Administrative Procedure Act (APA), which, according to CPSC staff, lack the cost-benefit analysis requirements specified in the rulemaking procedures in CPSA. CPSC staff said that APA rulemaking procedures enabled them to promulgate the mandated standards more easily and quickly than if they had been required to use CPSA’s procedures. Officials expressed interest in expanding CPSC’s authority to promulgate rules using APA procedures for other types of products with existing voluntary standards. Several consumer representatives and a commissioner supported this idea and said that if CPSC had such authority, its response to new and emerging hazards could be timelier. However, CPSC officials noted that expanding its use of this authority could inhibit industry’s access to CPSC and the due process provided by section 9 of CPSA.
Opinions vary among current and former commissioners about the extent to which expedited rulemaking authority should be used. At least two commissioners have said that the APA procedures provided CPSC with more flexibility to quickly issue the mandatory standards required by CPSIA, and one said that the procedures could expedite its consideration of others, such as a standard to address the flammability of upholstered fabric. Conversely, one commissioner has said that CPSA’s cost-benefit analysis requirements make agencies, like CPSC, take the costs and benefits of their regulations more seriously before finalizing them, and another commissioner commented that CPSA’s cost-benefit analysis is an important part of rulemaking because it requires agencies to identify a rational justification for proposing a rule. In September 2011, CPSC staff submitted a report to Congress that recommended several statutory changes designed to improve CPSC’s ability to identify hazardous products at the ports of entry and prevent them from entering the marketplace. For example, CPSC staff proposed that CPSC be granted authority to detain products at the ports like other federal agencies, such as Customs and Border Protection. Given the agency’s relatively small staff size, another proposal suggested giving CPSC authority to commission employees of other federal agencies to assist in the agency’s investigations and inspections to allow for greater enforcement efficiency. In August 2014, CPSC officials confirmed that they continue to support all of the statutory changes recommended in the 2011 report. Almost half of the 23 consumer and industry representatives and commissioners we interviewed expressed an opinion about whether the proposed statutory changes would enable CPSC to respond more quickly to new and emerging risks. Of these, about half generally supported efforts to improve CPSC’s import surveillance authorities. For example, a consumer safety expert said that improvements to CPSC’s import surveillance authorities could improve opportunities to stop hazardous products from entering the market. However, a commissioner and a consumer safety expert we interviewed did not support the proposed statutory changes and said that they would not help CPSC address new and emerging risks. The commissioner said that new and emerging product risks are rarely identified at the ports. The consumer safety expert expressed concern that some of the statutory changes would add an undue cost burden, particularly to the juvenile products industry. According to this representative, one of the statutory changes could result in manufacturers of certain children’s products incurring costs to have containers marked “refused for entry” in addition to the costs of complying with CPSIA’s requirement that juvenile products be tested by third-party facilities to ensure compliance with applicable safety standards. Specifically, CPSC’s September 2011 report included a proposal to add a new statutory provision to both CPSA and the Federal Hazardous Substances Act designed to prevent the re-entry of all products that have been refused entry into U.S. ports by authorizing CPSC to require visible markings on all containers transporting refused consumer products. Refused products are currently prohibited from being sold or re-exported to other ports unless revised to address the safety violations. In some cases, manufacturers must destroy products that are refused for entry.
Some importers attempt to circumvent the requirements by presenting the same violative product at a different port. According to CPSC, the “refused for entry” marking would enhance CPSC’s ability to identify such products. In 2012, we reported that CPSC faced challenges in collecting and analyzing large quantities of data to identify potential product risks. Some sources it uses to identify injuries or deaths are dated—for instance, death certificates can be 2 or more years old—or contain limited information about the product involved in the incident. According to one CPSC official, additional resources could enable CPSC to purchase death certificates from a more direct source than it currently does, shortening the time it takes to analyze incidents and identify trends. According to CPSC, the agency has upgraded its data management system to enhance CPSC’s efficiency and effectiveness, enable a more rapid dissemination of information, and allow public access to its searchable database on consumer product safety information. As previously discussed, CPSC also recently requested additional funding in its fiscal year 2015 congressional budget request to expand the RAM. The majority of consumer and industry representatives and commissioners we interviewed agreed that additional funding and staff would better enable CPSC to identify and address consumer safety hazards, and more than half had suggestions to improve CPSC’s efforts. Specifically, because the agency relies heavily upon analyses of scientific and technical data to assess potential hazards from a growing number of consumer products, these representatives said that additional resources to hire staff with expertise in technical areas, such as toxicology, public health, epidemiology, and engineering, could improve the timeliness of CPSC’s response to new or emerging product risks. Several consumer groups and a commissioner we interviewed discussed two other options to improve CPSC’s ability to respond more quickly to new and emerging risks than it currently does, both of which would involve changes to CPSA. Amend Section 6(b) of CPSA. As previously discussed, the confidentiality requirements in section 6(b) of CPSA may prolong the time it takes to notify the public about potentially hazardous products, which could increase the time that hazardous products remain in consumers’ homes. Specifically, two consumer groups, a consumer safety expert, and two commissioners we interviewed commented that changes to section 6(b) of CPSA could improve CPSC’s ability to notify consumers about new and emerging safety hazards. A consumer safety expert we interviewed said that both consumers and manufacturers would benefit from knowing about potential product safety hazards as soon as they are identified. According to a consumer representative, consumers would be able to make informed purchases, and manufacturers would learn about potential design defects that may be applicable to their own products. CPSC officials, however, said that releasing information to consumers before manufacturers have had an opportunity to review the details for accuracy could unfairly harm manufacturers and inhibit their right to due process. Establish Time Frames for CPSC Rulemaking Activities. As noted earlier, existing laws do not provide a time frame for standard-setting organizations to complete the development of voluntary standards, which often prolongs the time CPSC takes to promulgate a rule to address a product safety hazard.
One commissioner we spoke with suggested giving CPSC authority to set a time limit after which it would promulgate a final rule if industry has not developed a voluntary standard. When asked about the viability of this option, CPSC officials said that it takes time to sort through the complex issues associated with some safety hazards and that it would be difficult to establish a meaningful time frame for the process. Expansion of international trade, increasingly global supply chains, and technological advances have expanded the spectrum of consumer products available to U.S. consumers. These changes have increased the challenges of overseeing and regulating thousands of product types and the potential for new and emerging hazards in the marketplace. Certain aspects of CPSC’s authorities, and other factors such as litigation and limited resources, prolong the time it takes CPSC to respond, potentially increasing the risk that consumers will be harmed by hazardous products. A number of options have been proposed that, individually or in combination, could improve CPSC’s ability to respond to new and emerging risks in a more timely manner. However, these options require making trade-offs, such as balancing sometimes competing consumer and industry interests. For example, changes to expand CPSC’s use of preventive approaches to consumer product safety could give the agency greater ability to respond to risks by preventing hazardous products from entering the market, but they could also inhibit market innovation and impose costs on manufacturers. Statutory changes, such as enhanced authority to address unsafe imports, could allow CPSC to address existing hazards in a more timely manner and prevent hazardous products from entering the market, but they could also create disadvantages for manufacturers by imposing costs and prolonging the time for some products to come to market. In addition, improving CPSC’s ability to analyze scientific and other data could also help the agency respond to risks more quickly but may require enhanced resources in a constrained fiscal environment. We provided a draft of this report to CPSC for review and comment. In written comments, CPSC expressed appreciation for the report but took no position since we made no new recommendations. These comments are reprinted in appendix II. In addition, CPSC provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and the Chairman and Commissioners of CPSC. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In accordance with section 4 of the Consolidated Appropriations Act of 2014, GAO conducted a study of the ability of the Consumer Product Safety Commission (CPSC) to respond quickly to emerging consumer product safety hazards using authorities under the Consumer Product Safety Act (15 U.S.C. §§ 2056-2058), the Federal Hazardous Substances Act (15 U.S.C. § 1262), and the Flammable Fabrics Act (15 U.S.C. § 1193); and report to congressional appropriations committees on an assessment of CPSC’s ability to respond quickly to new and emerging risks.
This report discusses (1) how CPSC’s authorities and other factors may affect the time it takes CPSC to respond to new and emerging risks and (2) proposed options that may be available to improve CPSC’s ability to respond to new and emerging risks in a timely manner and trade-offs associated with those options. To address both objectives, we reviewed our prior work on CPSC’s authorities, CPSC standard operating procedures, performance and accountability reports, and agency budget documentation in order to obtain information on the resources currently available to CPSC and how those resources may impact the agency’s ability to respond to new and emerging consumer product safety hazards. In addition to our document review, we interviewed cognizant CPSC officials, knowledgeable staff, and three current and three former CPSC commissioners, including CPSC’s acting Chairman, regarding CPSC’s ability and authority to identify, assess, and address new and emerging risks in a timely manner. To gather perspectives on the sufficiency of CPSC’s current statutory authority and specific factors affecting its ability to respond to emerging risks, and to seek opinions on potential options that may be available to CPSC to address these risks in a more timely manner, we interviewed representatives from four consumer advocate groups and representatives from seven industry organizations that represented manufacturers of various consumer products, including juvenile products, clothing and home goods, chemical production, and general consumer goods. We also interviewed six consumer safety experts, three of whom were legal experts in the consumer product safety field, regarding CPSC’s existing statutory and regulatory authorities for addressing new and emerging risks and other potential options available to CPSC. A new Chairman and commissioner were appointed after we conducted our interviews. To address objective one, we reviewed and analyzed relevant federal laws that authorize CPSC to both promulgate and enforce consumer product safety standards, as well as those that authorize the agency to take corrective action necessary to remove a potentially hazardous product from the consumer market. We then examined CPSC rulemaking procedures as stipulated in relevant sections of the Consumer Product Safety Act, the Federal Hazardous Substances Act, and the Flammable Fabrics Act. We identified additional administrative and statutory requirements that may impede CPSC’s implementation of corrective action, and we reviewed CPSC’s ability to issue mandatory standards and enforce voluntary standards designed to address new and emerging consumer product safety hazards. To address objective two, we conducted a literature review of scholarly articles using ProQuest, Nexis.com, and law review databases. Some of the search terms we used to identify articles on options available to respond to new and emerging risks were “consumer safety,” “new and emerging risks,” “precautionary principle,” “premarket model,” and “Consumer Product Safety Commission,” either alone or in combination with geographic delimiters such as “European Union” or “United States,” and a date boundary of “after 2007.”
After removing duplicate articles, we selected 96 scholarly articles and legal reviews from the thousands identified, based on the extent to which they discussed (1) advantages and disadvantages of the precautionary principle approach or premarket approval or (2) the regulation of relevant policy areas such as consumer product safety, public health, or the environment. Two team members independently reviewed these articles and found that 18 were relevant to our study. We reviewed those articles more closely for background information on CPSC’s authorities and the factors that affect the timeliness of responding to new and emerging risks, and also to identify trade-offs for any options the articles discussed. Similarly, we searched for additional material on the Internet using search terms such as “United States,” “precautionary principle,” and “premarket approval” and identified four additional articles that we used for contextual purposes. We conducted this performance audit from March 2014 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Debra Johnson (Assistant Director), Tiffani Humble (Analyst in Charge), Thomas Beall, Tarik Carter, Marc Molino, Patricia Moye, Rhonda Rose, Jennifer Schwartz, and Carrie Watkins made key contributions to this report. In addition, JoAnna Paige Berry, Timothy Bober, Christine Broderick, Philip Curtin, Kimberly Gianopoulos, Richard Hung, DuEwa Kamara, Steve Morris, and Michelle Sager also made contributions to the report.
CPSC is responsible for ensuring the safety of thousands of consumer products, including imports, after they enter the U.S. market. Its jurisdiction covers a range of products, from children's toys to off-road recreational vehicles. Identifying and assessing new and emerging consumer product risks can present challenges. Questions have been raised in recent congressional hearings about the length of time CPSC takes to address a safety hazard, during which injuries and fatalities can continue to occur. Section 4 of the Consolidated Appropriations Act of 2014 mandated that GAO review CPSC's ability to respond quickly to new and emerging risks. This report discusses (1) how CPSC's authorities and other factors may affect its response time to new and emerging hazards and (2) options, and their trade-offs, that may be available to address CPSC's ability to respond to these hazards. GAO reviewed CPSC's laws and regulations, prior GAO reports, and other published studies. Additionally, GAO interviewed CPSC commissioners and staff, consumer safety experts, legal experts, and representatives from consumer and industry organizations. According to Consumer Product Safety Commission (CPSC) officials, industry representatives, consumer groups, and subject-matter experts GAO interviewed, the timeliness of CPSC's responses may be affected by several factors, including (1) compliance actions that can involve litigation, (2) reliance on voluntary standards, (3) rulemaking procedures, (4) restrictions on sharing information with the public and international agencies, and (5) limited agency resources. For example, CPSC must defer to a voluntary standard if it determines that compliance with the voluntary standard would eliminate or adequately reduce the risk of injury and there is likely to be substantial compliance with the voluntary standard. However, because the laws do not establish a time frame for finalizing a voluntary standard, conflicting industry and consumer interests can delay its development, sometimes for years. CPSC has worked with the window covering industry since 1994 to develop a voluntary standard to address strangulation hazards stemming from window blind cords, but as of September 2014, no voluntary standard that addresses the ongoing safety concerns had been finalized. Further, new and emerging product safety risks present challenges because, statutorily, CPSC was established to respond to risks after products have been introduced into the market. Various options have been suggested for improving CPSC's ability to respond to new and emerging product safety risks, including the following examples: Preventive regulatory approaches. Many representatives said that regulatory approaches designed to prevent hazardous products from entering the market—such as premarket approval—could reduce consumer injuries but could also inhibit market innovation and impose burdensome costs on manufacturers and CPSC. Expedited rulemaking authority. Some stakeholders proposed expanding CPSC's authority to use expedited rulemaking procedures similar to those authorized in 2008 in the Consumer Product Safety Improvement Act, which streamlined the rulemaking process for durable infant products. Most believed streamlined procedures would enable CPSC to promulgate rules in a more timely manner to address risks, but opinions differed on the extent to which the authority should be expanded. Enhancing CPSC's authorities to address unsafe imports.
CPSC has proposed several statutory changes to improve its ability to identify hazardous products at the ports of entry and prevent them from entering the marketplace. About half the representatives GAO talked to supported the proposed changes, with some exceptions where the changes would impose additional burdens on industry. Enhanced data analysis capabilities. Most representatives agreed that CPSC could respond to new and emerging hazards more quickly if it had additional funding for technology and staff with technical expertise in the areas of engineering, toxicology, and public health to analyze product hazard data and conduct risk assessments. GAO makes no recommendations in this report. In prior reports GAO has made a recommendation related to CPSC's participation in voluntary standards development and suggested that Congress address restrictions on how CPSC is able to share information with its international counterparts.
SSA is the largest operating division within HHS. As such, it accounts for approximately 65,000 FTE employees or about 51 percent of HHS’ FTE positions. SSA’s fiscal year 1995 budget of about $371 billion accounts for over one-half of HHS’ total budget for that year. SSA administers three major federal programs and assists other federal and state agencies in administering a variety of other income security programs for individuals and families. The programs administered by SSA are the OASI program and the DI program—two social insurance programs authorized under Title II of the Social Security Act. SSA also administers the SSI program, a welfare program authorized under Title XVI, to provide benefits to the needy aged, blind, and disabled. SSA serves the public through a network that includes 1,300 field offices and a national toll-free telephone number. Under the Title II programs, over $300 billion in benefits were paid in 1993 to over 40 million eligible beneficiaries. About 95 percent of all jobs in the United States are covered by these insurance programs. SSA also performs a number of administrative functions to pay Social Security benefits. For example, it maintains earnings records for over 140 million U.S. workers, which are used to determine the dollar amount of their OASI and DI benefits. To do this, SSA collects annual wage reports from over 6 million employers. Since 1990, it has issued new Social Security numbers to an average of over 7 million people annually. Under Title XVI, SSA provides almost $22 billion in SSI benefits annually to about 6 million recipients. This program was established to provide cash assistance to low-income individuals who are age 65 or older or are blind or disabled. In the mid-1970s, SSI replaced the categorical programs for the needy aged, blind, and disabled that were administered by the states. SSA began as an independent agency with a mission of providing retirement benefits to the elderly. A three-member, independent Social Security Board was established in 1935 to administer the Social Security program. The Chairman of the Board reported directly to the President until July 1939, when the Board was placed under the newly established Federal Security Agency (FSA). At that time, the Social Security program was expanded to include Survivors Insurance, which paid monthly benefits to survivors of insured workers. In 1946, the Social Security Board was abolished, and its functions were transferred to the newly established Social Security Administration, still within FSA. The FSA administrator established the position of Commissioner to head SSA. In 1953, the FSA was abolished and its functions were transferred to the Department of Health, Education and Welfare (HEW). Moreover, the position of SSA Commissioner was designated as a presidential appointee requiring Senate confirmation. In 1956, the Social Security program was expanded to include the DI program to provide benefits to covered workers who became unable to work because of disability. In 1965, amendments to the Social Security Act increased SSA’s scope and complexity by establishing the health insurance program known as Medicare. The purpose of Medicare was to help the qualified elderly and disabled pay their medical expenses. SSA administered the Medicare program for about 12 years before Medicare was transferred to a new division within HEW, the Health Care Financing Administration. Further amendments to the Social Security Act created the SSI program, effective in 1974. 
This program was designed to replace welfare programs for the aged, blind, and disabled administered by the states. The SSI program added substantially to SSA’s responsibilities. The agency then had to deal directly with SSI clients, which entailed determining recipients’ eligibility based on income and assets. SSA has remained a part of HHS (formerly HEW) since 1953. Since 1984, congressional committees responsible for overseeing SSA’s activities have considered initiatives to make SSA an independent agency. While the reasons for independence have varied over the years, legislation seeking independence from HHS has been introduced in several sessions of the Congress. Concerns expressed in congressional hearings and reports of the past decade have focused on a variety of issues, including the need for (1) improved management and continuity of leadership at SSA; (2) greater public confidence about the long-term viability of Social Security benefits; and (3) removal of the program’s policies and budgets from the influence of HHS, OMB, and the administration. Statements by committee chairmen have shown a desire to make SSA more accountable to the public for its actions and more responsive to the Congress’ attempts to address SSA’s management and policy concerns. The act requires the Secretary of HHS and the Commissioner of SSA to develop a written interagency transfer agreement, effective March 31, 1995, which specifies the personnel and other resources to be transferred to an independent SSA. Our review of the agreement and the supporting documentation shows that SSA and HHS have developed a reasonable methodology for, and progressed well toward, implementing the transition. Specifically, we found that HHS and SSA have progressed well in (1) identifying and transferring personnel and other resources; (2) effecting organizational changes prompted by the transition; and (3) addressing changes to SSA’s budget process, as called for in the act. The interagency agreement submitted by HHS and SSA to the Senate Committee on Finance and House Committee on Ways and Means on December 27, 1994, notes that all major transition tasks have been completed or are under way and that personnel and resource transfers will be completed on March 31, 1995. Elements of the agreement relating to transferring personnel, resources, and property appear to meet OMB guidelines for such transfers. Under the interagency agreement, the approximately 65,000 FTE employees currently under SSA will remain with the agency. In addition, about 1,143 HHS FTE personnel who provide support services to SSA are expected to be transferred. Of these, 478 will provide personnel administration services for SSA, 289 will provide legal support, and 263 will perform audit and investigative activities. The remaining 113 FTEs will provide other administrative support services for the agency. SSA expects to reimburse HHS for providing payroll and certain other support services to SSA on an interim basis. Similarly, HHS will reimburse SSA for providing certain services to Medicare recipients through its local offices and telephone service centers. In developing the agreement, HHS surveyed its division heads to identify the functions and FTEs currently supporting SSA. At the same time, SSA developed its own estimate of the number of HHS personnel performing work for it. To supplement these data, SSA also relied on its managers’ assessments of the number of HHS FTEs currently supporting the agency.
HHS and SSA then engaged in extensive negotiations to agree on the final number of FTEs to be transferred. Virtually all individuals have been identified, and HHS expects to issue final staff transfer notices by February 21, 1995. Personnel have been selected primarily on the basis of the percentage of their work spent on SSA activities. However, all employees had an opportunity to appeal the transfer actions, and some volunteers were sought to obtain the proper skill levels. HHS and SSA have also agreed on nonpersonnel resources to be transferred, such as funds, computer equipment, and furniture. These decisions are contingent on the numbers and specific personnel transferring to SSA. HHS and SSA are required to prepare for OMB a written itemization of resources to be transferred. OMB officials told us that no problems have arisen and that OMB expects to provide the certification necessary to complete resource transfers on March 31, 1995. SSA will make several organizational changes to be a fully functional independent agency. (See app. I for the SSA organization charts for before and after the transition.) We believe that SSA has reasonably planned for these changes. Our assessment of the transition activities, combined with the review of the SSA and HHS interagency agreement, indicates that the organizational changes should be completed on March 31, 1995. SSA plans to establish its own Office of the General Counsel and an Office of Inspector General. SSA’s Office of the General Counsel will provide the necessary legal advice and litigation services for the programs administered by SSA. An acting General Counsel will be designated to head the office until a permanent appointment is made. The SSA Office of Inspector General will conduct audits and investigations of the agency’s programs and operations. The Inspector General will report directly to the Commissioner to ensure objectivity and independence from internal agency pressures. The HHS Inspector General has agreed to act as SSA’s Inspector General until a permanent Inspector General has been confirmed. SSA expects that additional organizational changes will occur in conjunction with the transition. For example, SSA plans to merge the functions of its Offices of Programs and Policy and External Affairs. The new Office of Programs, Policy, Evaluation and Communications will be responsible for research, policy analysis, and program evaluation. SSA’s Office of Legislation and Congressional Affairs will also be repositioned to report directly to the Commissioner. This office will facilitate a working relationship between SSA and the executive and legislative branches. SSA plans to establish a Washington, D.C., office to facilitate a closer working relationship with the Congress and the executive branch. Staffing in the Washington office is estimated at 150 to 200 permanent employees, including the Commissioner, Principal Deputy Commissioner, legislative liaison staff, Inspector General, the General Counsel, and research and statistics personnel. The agency has defined its space requirements and acquired temporary office space; it should obtain permanent space by early 1996. SSA also plans to decentralize and transfer more management authority from its headquarters to its regional offices. Following the transition, Regional Commissioners will have direct authority over public affairs and personnel administration in their respective regions. These functions are currently managed by SSA’s headquarters or by HHS.
SSA has also indicated that, where possible, some of the approximately 1,143 FTEs identified for transfer may be shifted to local offices and telephone service centers to strengthen service. Finally, SSA has confirmed that the newly established Social Security Advisory Board will spend a substantial amount of time in Washington, D.C., and members will maintain offices in both the Baltimore and Washington, D.C., locations. The seven-member board will advise the Commissioner, the President, and the Congress on SSA program policies. SSA officials told us that the Congress has appointed four members. However, the President has not yet appointed the three remaining members as required by the act. The act revises the process for submitting SSA’s annual budget. The act states, “the Commissioner shall prepare an annual budget for the Administration which shall be submitted by the President to the Congress without revision, together with the President’s annual budget for the Administration.” Traditionally, agencies, including SSA, receive budget guidance from OMB beginning in April of each year and spend about 5 months preparing a budget proposal. This proposal is submitted in September to OMB, where it is reviewed for several months. OMB then requires agencies to revise their budget proposals by incorporating OMB decisions and changes. Once approved by OMB, agency budgets are transmitted to the Congress as part of the President’s budget for executive agencies. The act does not restrict OMB from continuing to exercise its traditional budgetary oversight role, and our work has shown that both OMB and SSA officials do not envision any substantive change in SSA’s budget process. Presumably, the new budget provision is intended to illuminate differences between the budget SSA proposes and the President’s budget for the agency. However, the process allows for OMB’s April guidance to influence SSA’s September budget proposal. In its comments on this report, SSA agreed that OMB’s influence would continue to be a factor in the preparation of its September budget. While SSA has progressed well toward completing the transition, the agency will continue to face significant challenges as an independent agency. Some of these include the long-range solvency of the Social Security trust funds, growing disability caseloads, and issues surrounding the increase in SSI caseloads. We have identified and documented these challenges in numerous reports, testimonies, and management reviews of SSA over the last several years. With the passage of legislation creating an independent SSA, it was expected that SSA would take a more active leadership role in addressing its major program challenges. Our work has also demonstrated the need for SSA to address program policy issues and to more aggressively manage its programs. This will be crucial for SSA as it assumes the functions currently provided by HHS. SSA’s independence will heighten the need for it to work with the Congress in developing options for ensuring that revenues are adequate to make future Social Security benefit payments. As noted in our previous reports, this issue has troubled the agency for many years. The financial operations of SSA’s insurance programs are supported by trust funds, which are credited with revenues derived from (1) payroll taxes on earned income and on self-employment earnings up to specified limits and (2) interest income from trust fund investments. 
Additional financing is provided from general revenues resulting from the taxation of Social Security benefits. To address financing issues, the Social Security Amendments of 1977 and 1983 moved the trust funds from a pay-as-you-go financing basis toward the accumulation of substantial temporary reserves. However, as we reported in 1989 and 1994, economic and demographic factors have slowed the growth of the trust fund reserves and brought the projected point of insolvency for both the OASI and DI trust funds closer than originally expected. SSA’s Office of the Actuary confirmed that the OASI trust fund currently has reserves sufficient to pay annual benefits until the year 2030. The DI trust fund will have funds sufficient to pay annual benefits until the year 2015. In recent years, we have reported that SSA’s DI program has experienced significant caseload increases, and backlogs have remained at unprecedented levels. Moreover, changes in the characteristics of new beneficiaries have accompanied this growth. The new beneficiaries’ average age is generally decreasing and is now below 50. Also, mental impairment awards to younger workers increased by about 500 percent between 1982 and 1992, helping to lower the average age. These situations could mean that once on the rolls, these beneficiaries will receive benefits for a longer period of time than other beneficiaries. In addition, an increasing percentage of new beneficiaries receives very low benefits, which indicates that these beneficiaries had limited work histories and are unlikely to return to work. Program rolls have grown and changed for several reasons. Higher unemployment probably contributes to increasing applications, and policy changes have contributed to changes in the numbers and types of beneficiaries. However, SSA lacks adequate data on how many people in the population suffer from disabilities that might qualify them for benefits. As a result, SSA has limited ability to predict future growth and change in the rolls. SSA has undertaken initiatives to improve its disability application process to more efficiently handle caseloads and reduce backlogs. Implementing these initiatives will significantly challenge SSA because they require fundamental changes in the way the agency does its work. Further, without additional information, neither SSA nor the Congress can be sure whether the current growth will continue. SSA faces the challenge of determining what actions are needed to better manage the program and whether some fundamentals of the program should be reexamined. As we reported in previous work, SSI benefit payments and caseloads have increased significantly over the past several years. From 1986 to 1994, SSI benefit payments for the aged, blind, and disabled increased by $13.5 billion, doubling in 7 years. Benefits for the disabled accounted for almost 100 percent of this increase. Three groups—disabled children, mentally disabled adults, and legal immigrants—significantly outpaced the growth of all other SSI recipients. As an independent agency, SSA faces the challenge of addressing congressional and public concerns about SSI program growth. HHS and SSA have developed an acceptable methodology for identifying the functions, personnel, and other resources to be transferred to the independent agency. They have also progressed well toward completing the initiatives necessary for SSA to be a fully functional independent agency on the effective date.
However, independence alone will not resolve the problems identified in previous GAO reviews, and SSA will continue to face significant challenges beyond March 31, 1995. The elevation of SSA to an independent agency will create opportunities for the agency to take a leadership role in addressing some of the broader program policy issues and to reexamine its processes to determine how it can improve its effectiveness. We obtained official oral comments on this report from senior officials from SSA and HHS. These officials generally agreed with our findings and conclusions. They did offer some technical suggestions, which we have incorporated where appropriate in the report. We are sending copies of this report to the Secretary of HHS, the Commissioner of SSA, and other interested parties. Copies will also be made available to others upon request. If you or your staffs have any questions concerning this report, please call me on (202) 512-7215. Other major contributors are listed in appendix II.
[Appendix I presents SSA organization charts: the Social Security Administration within the Department of Health and Human Services (as of February 1995) and the independent SSA (draft as of February 1995).]
In addition to those named above, the following individuals made important contributions to this report: Leslie Aronovitz, Associate Director, Income Security Issues; Daniel Bertoni, Senior Evaluator; Mary Reich, Staff Attorney; Valerie Rogers, Evaluator; and Jacquelyn Stewart, Senior Evaluator.
Related GAO Products:
Social Security: Rapid Rise in Children on SSI Disability Rolls Follows New Regulations (GAO/HEHS-94-225, Sept. 9, 1994).
Social Security: Trust Funds Can Be More Accurately Funded (GAO/HEHS-94-48, Sept. 2, 1994).
Social Security: New Continuing Disability Review Process Could Be Enhanced (GAO/HEHS-94-118, June 27, 1994).
Social Security: Major Changes Needed for Disability Benefits for Addicts (GAO/HEHS-94-128, May 13, 1994).
Social Security: Disability Rolls Keep Growing, While Explanations Remain Elusive (GAO/HEHS-94-34, Feb. 8, 1994).
Social Security: Increasing Number of Disability Claims and Deteriorating Service (GAO/HRD-94-11, Nov. 10, 1993).
Social Security: Sustained Effort Needed to Improve Management and Prepare for the Future (GAO/HRD-94-22, Oct. 27, 1993).
Social Security: Telephone Busy Signal Rates at Local SSA Field Offices (GAO/HRD-93-49, Mar. 4, 1993).
Social Security: Reporting and Processing of Death Information Should Be Improved (GAO/HRD-92-88, Sept. 4, 1992).
Debt Management: More Aggressive Actions Needed to Reduce Billions in Overpayments (GAO/HRD-91-46, July 9, 1991).
Social Security Downsizing: Significant Savings But Some Service Quality and Operational Problems (GAO/HRD-91-63, Mar. 19, 1991).
Social Security: Status and Evaluation of Agency Management Improvement Initiatives (GAO/HRD-89-42, July 24, 1989).
Social Security: Staff Reductions and Service Quality (GAO/HRD-89-106BR, June 16, 1989).
Social Security Administration: Stable Leadership and Better Management Needed to Improve Effectiveness (GAO/HRD-87-39, Mar. 18, 1987).
Pursuant to a legislative requirement, GAO: (1) evaluated the Social Security Administration's (SSA) and Department of Health and Human Services' (HHS) transition plans; and (2) identified some of the policy changes SSA will face as an independent agency. GAO found that: (1) SSA and HHS have progressed towards the goal of SSA functioning as an independent agency; (2) HHS has successfully identified and transferred personnel and other resources to SSA; (3) the transition has prompted effective organizational changes; (4) SSA and HHS have made changes to the SSA budget process, and SSA has initiated an effort to improve its claims processing function; (5) SSA and HHS have agreed that nonpersonnel transfers, such as funds, computer equipment, and furniture, will depend on personnel transfers to SSA; (6) SSA will maintain its own legal and auditing departments; and (7) SSA will establish a Washington, DC office in order to bring about a closer working relationship with Congress and the executive branch.
According to IAEA, between 1993 and 2006, there were 1,080 confirmed incidents of illicit trafficking and unauthorized activities involving nuclear and radiological materials worldwide. Eighteen of these cases involved weapons-usable material, plutonium and highly enriched uranium (HEU), that could be used to produce a nuclear weapon. IAEA also reported that 124 cases involved materials that could be used to produce a device that combines conventional explosives with radioactive material (known as a "dirty bomb"). Some past confirmed incidents of illicit trafficking in HEU and plutonium involved seizures of kilogram quantities of weapons-usable nuclear material, but most involved very small quantities. In some of these cases, it is possible that the seized material was a sample of larger quantities available for illegal purchase. IAEA concluded that these materials pose a continuous potential security threat to the international community, including the United States. Nuclear material could be smuggled into the United States in a variety of ways: hidden in a car, train, or ship; sent through the mail; carried in personal luggage through an airport; or walked across an unprotected border.

In response to these threats, U.S. agencies, including DHS, DOD, DOE, and State, implemented programs to combat nuclear smuggling in foreign countries and the United States. DOD, DOE, and State fund, manage, and implement the global nuclear detection architecture's international programs. Many international detection programs were operating for several years before DNDO was created. For example, DOE's Materials Protection, Control, and Accounting program, initiated in 1995, provides support to the Russian Federation and other countries of concern to secure nuclear weapons and weapons material that may be at risk of theft or diversion. In addition, during the 1990s, the United States began deploying radiation detection equipment at borders in countries of the former Soviet Union. DOD's Cooperative Threat Reduction (CTR) program launched a variety of efforts in the early 1990s to help address proliferation concerns in the former Soviet Union, including helping secure Russian nuclear weapons. Two other DOD programs have provided radiation portal monitors, handheld equipment, and radiation detection training to countries in the former Soviet Union and in Eastern Europe. Similarly, State programs have provided detection equipment and training to numerous countries. DHS, in conjunction with other federal, state, and local agencies, is responsible for combating nuclear smuggling in the United States and has provided radiation detection equipment, including portal monitors, personal radiation detectors (known as pagers), and radioactive isotope identifiers, at U.S. ports of entry.

All radiation detection devices have limitations in their ability to detect and identify nuclear material. Detecting attempted nuclear smuggling is difficult because there are many sources of radiation that are legal and not harmful when used as intended. These materials can trigger alarms, known as nuisance alarms, that may be indistinguishable in some cases from alarms that would sound in a true case of nuclear smuggling. Nuisance alarms can be caused by patients who have recently had cancer treatments; a wide range of cargo with naturally occurring radiation (e.g., fertilizer, ceramics, and food products); and legitimate shipments of radiological sources for use in medicine and industry.
In October 2005, a few months after its inception, DNDO completed its initial inventory of federal programs associated with detecting the illicit transport of radiological and nuclear materials. As part of this effort, DNDO defined the architecture's general approach: a multilayered detection framework of radiation detection equipment and interdiction activities to combat nuclear smuggling in foreign countries, at the U.S. border, and inside the United States. DNDO, in collaboration with other federal agencies, such as DOD, DOE, and State, analyzed the gaps in current planning and deployment strategies to determine the ability of individual layers of the architecture to successfully prevent the illicit movement of radiological or nuclear materials or devices. DNDO identified several gap areas with respect to detecting potential nuclear smuggling, such as (1) land border crossings into the United States between formal points of entry, (2) small maritime craft (any vessel less than 300 gross tons) that enter the United States, and (3) international general aviation.

In November 2006, DNDO completed a more detailed analysis of programs in the initial architecture. DNDO identified 72 programs across the federal government that focused on combating radiological and nuclear smuggling and on nuclear security, and it discussed these programs in depth by layer. The analysis also included a discussion of the current and anticipated budgets associated with each of these programs and each of the layers. In June 2008, DNDO released the Joint Annual Interagency Review of the Global Nuclear Detection Architecture. This report provides an updated analysis of the architecture by layer of defense and a discussion of the 74 programs now associated with each of the layers, as well as an estimate of the total budgets by layer.

To address the gaps identified in the domestic portions of the architecture, DNDO has initiated pilot programs targeting primary areas of concern or potential vulnerability. For example:

For the land border in between ports of entry, DNDO and CBP are studying the feasibility of equipping CBP border patrol agents with portable radiological and nuclear detection equipment along the U.S. border.

For small marine vessels, DNDO is working with the Coast Guard to develop and expand the coverage of radiological and nuclear detection capabilities that can be specifically applied in a maritime environment.

For international general aviation, DNDO is working with CBP, the Transportation Security Administration, and other agencies to develop and implement radiological and nuclear detection capabilities to scan international general aviation flights to the United States for possible illicit radiological or nuclear materials.

To date, we have received briefings on each of these programs from DNDO, but we have not yet fully reviewed how they are being implemented. We will examine each of these more closely during the course of our review. Our preliminary observation is that DNDO's pilot programs appear to be a step in the right direction for improving the current architecture. However, these efforts to address gaps are not being undertaken within the larger context of an overarching strategic plan. While each agency that has a role in the architecture may have its own planning documents, DNDO has not produced an overarching strategic plan to guide its efforts to address the gaps and move to a more comprehensive global nuclear detection architecture.
Our past work has discussed the importance of strategic planning. Specifically, we have reported that strategic plans should clearly define the objectives to be accomplished, identify the roles and responsibilities for meeting each objective, ensure that the funding necessary to achieve the objectives is available, and employ monitoring mechanisms to determine progress and identify needed improvements. For example, such a plan would define how DNDO will achieve and monitor the goal of detecting the movement of radiological and nuclear materials through potential smuggling routes, such as small maritime craft or land borders in between ports of entry. Moreover, this plan would include agreed-upon processes and procedures to guide the improvement of the architecture and coordinate the activities of the participating agencies.

DNDO and other agencies face a number of challenges in developing a global nuclear detection architecture, including (1) coordinating detection efforts across federal, state, and local agencies and with other nations, (2) dealing with the limitations of detection technology, and (3) managing the implementation of the architecture. Our past work on key aspects of the international and domestic programs that are part of the architecture has identified a number of weaknesses. For the architecture to be effective, all of its parts need to be well thought out, managed, and coordinated. Because a chain is only as strong as its weakest link, limitations in any of the programs that constitute the architecture may ultimately limit its effectiveness. Specifically, in past work, we identified the following difficulties that federal agencies have had coordinating and implementing radiation detection efforts:

We reported that DOD, DOE, and State had not coordinated their approaches to enhancing security at other countries' border crossings. Specifically, the radiation portal monitors that State installed in more than 20 countries are less sophisticated than those DOD and DOE installed. As a result, some border crossings where U.S. agencies had installed radiation detection equipment were more vulnerable to nuclear smuggling than others. Since issuing our report, a governmentwide plan encompassing U.S. efforts to combat nuclear smuggling in other countries has been developed; duplicative programs have been consolidated; and coordination among the agencies, although still a concern, has improved.

In 2005, we reported that there is no governmentwide guidance for border security programs that delineates agencies' roles and responsibilities, establishes regular information sharing, and defines procedures for resolving interagency disputes. In the absence of guidance for coordination, officials in some agencies questioned other agencies' roles and responsibilities.

More recently, in 2008, we found that levels of collaboration between U.S. and host government officials varied at some seaports participating in DHS's Container Security Initiative (CSI). In addition, we identified hurdles to cooperation between CSI teams and their counterparts in the host government, such as a host country's legal restrictions that CBP officials said prevent CSI teams from observing examinations.

Furthermore, many international nuclear detection programs rely heavily on the host country to maintain and operate the equipment. We have reported that in some instances this reliance has been problematic.
For example:

About half of the portal monitors provided to one country in the former Soviet Union were never installed or were not operational. In addition, mobile vans equipped with radiation detection equipment furnished by State have limited usefulness because they cannot operate effectively in cold climates or are otherwise not suitable for conditions in some countries.

Once the equipment is deployed, the United States has limited control over it, as we have previously reported. Specifically, once DOE finishes installing radiation detection equipment at a port and passes control of the equipment to the host government, the United States no longer controls the equipment's specific settings or its use by foreign customs officials. Settings can be changed, which may decrease the probability that the equipment will detect nuclear material.

Within U.S. borders, DNDO faces coordination challenges and will need to ensure that the problems experienced by nuclear detection programs overseas are not repeated domestically. Many of the pilot programs DNDO is developing to address gaps in the architecture will rely heavily on other agencies to implement them. For example, DNDO is working closely with the Coast Guard and other federal agencies to implement DNDO's maritime initiatives to enhance detection of radiological and nuclear materials on small vessels. However, maritime jurisdictional responsibilities and activities are shared among federal, state, regional, and local governments. As a result, DNDO will need to closely coordinate activities related to detecting radiological and nuclear materials with these entities, as well as ensure that users are adequately trained and technical support is available. DNDO officials told us they are closely coordinating with other agencies, and our work to assess this coordination is still underway. We will continue to explore these coordination activities and challenges as we continue our review.

The ability to detect radiological and nuclear materials is a critical component of the global nuclear detection architecture; however, current technology may not be able to detect and identify all smuggled radiological and nuclear materials. In our past work, we found limitations with radiation detection equipment. For example:

In a report on preventing nuclear smuggling, we found that a cargo container holding a radioactive source was not detected as it passed through radiation detection equipment that DOE had installed at a foreign seaport because the radiation emitted from the container was shielded by a large amount of scrap metal. Additionally, detecting actual cases of illicit trafficking in weapons-usable nuclear material is complicated: one of the materials of greatest proliferation concern, highly enriched uranium, is among the most difficult materials to detect because of its relatively low level of radioactivity.

We reported that the current portal monitors deployed at U.S. borders can detect the presence of radiation but cannot distinguish between harmless radiological materials, such as ceramic tiles, fertilizer, and bananas, and dangerous nuclear materials, such as plutonium and uranium. DNDO is currently testing a new generation of portal monitors. We have raised continuing concerns about DNDO's efforts to develop and test these advanced portal monitors. We currently have additional work underway examining the current round of testing and expect to report on our findings in September 2008.
Environmental conditions can affect radiation detection equipment's performance and sustainability, as we also have previously reported. For example, wind disturbances can vibrate the equipment and interfere with its ability to detect radiation. In addition, sea spray may corrode radiation detection equipment and components operated in ports or near water. The spray's corrosive nature, combined with other conditions, such as coral in the water, can accelerate the degradation of equipment. It is important to note that radiation detection equipment is only one of the tools that customs inspectors and border guards must use to combat nuclear smuggling. Combating nuclear smuggling requires an integrated approach that includes equipment, proper training, and intelligence gathering on smuggling operations. In the past, most known interdictions of weapons-usable nuclear materials have resulted from police investigations rather than from radiation detection equipment installed at border crossings.

The task DNDO has been given, developing an architecture to keep radiological and nuclear materials from entering the country, is a complex and large undertaking. DNDO has been charged with developing an architecture that depends on programs implemented by other agencies. This lack of control over those programs poses a challenge for DNDO in ensuring that all individual programs within the global nuclear detection architecture will be effectively integrated. Moreover, implementing and sustaining the architecture requires adequate resources and capabilities to meet needed commitments. However, the majority of the employees in DNDO's architecture office are detailees on rotation from other federal agencies or are contractors. This staffing approach allows DNDO to tap into other agencies' expertise in radiological and nuclear detection. However, officials told us that staff turnover may limit the retention and depth of institutional memory, since detailees return to their home organizations after a relatively short time. In some cases, there have been delays in filling these vacancies. We will continue to examine this potential resource challenge as we complete our work.

In spite of these challenges, DNDO's efforts to develop a global nuclear detection architecture have yielded some benefits, according to DOD, DOE, and State officials. For example, an official from the State Department told us that DNDO is working through State's Global Initiative to Combat Nuclear Terrorism to develop model guidelines that other nations can use to establish their own nuclear detection architectures and recently sponsored a related workshop. In addition, DOE officials said that DNDO's actions have helped broaden their perspective on the deployment of radiation detection equipment overseas. Previously, the U.S. government had been more focused on placing fixed detectors at particular sites, but as a result of DNDO's efforts to identify gaps in the global detection network, DOE has begun to work with law enforcement officials in other countries to improve detection capabilities for the land in between ports of entry. Finally, DNDO, DOD, DOE, and the Office of the Director of National Intelligence for Science and Technology are now formally collaborating on nuclear detection research and development, and they have signed a memorandum of understanding (MOU) to guide these efforts.
The MOU will integrate research and development programs by, for example, providing open access to research findings in order to leverage this knowledge and to reduce conflict between different agency programs. In addition, the MOU encourages joint funding of programs and projects and calls on the agencies to coordinate their research and development plans. In our ongoing work, we will examine DNDO's progress in carrying through on these initiatives.

DNDO reported that approximately $2.8 billion was budgeted in fiscal year 2007 for 74 programs focused on preventing and detecting the illicit transport of radiological or nuclear materials. These programs were primarily administered by DHS, DOD, DOE, and State and spanned all layers of the global nuclear detection architecture. Specifically:

$1.1 billion funded 28 programs focused on the international aspects of the architecture;

$221 million funded 9 programs to support detection of radiological and nuclear material at the U.S. border;

$918 million funded 16 programs dedicated to detecting and securing radiological or nuclear materials within U.S. borders; and

$577 million funded 34 cross-cutting programs that support many different layers of the architecture through, for example, research and development or technical support to users of the detection equipment.
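As a consistency check, the four layer budgets just listed sum to approximately the $2.8 billion aggregate DNDO reported. The following is a minimal sketch of that arithmetic in Python; the figures are taken from the list above (in millions of dollars), the program counts are as reported and may overlap across layers, and the dictionary keys are ours.

```python
# Fiscal year 2007 budgets for the global nuclear detection
# architecture, by layer, in millions of dollars (figures as
# reported in the testimony above).
layer_budgets_millions = {
    "international programs (28 programs)": 1_100,
    "U.S. border detection (9 programs)": 221,
    "within U.S. borders (16 programs)": 918,
    "cross-cutting support (34 programs)": 577,
}

total = sum(layer_budgets_millions.values())
print(f"Total budgeted: ${total / 1_000:.1f} billion")  # ~$2.8 billion
for layer, amount in layer_budgets_millions.items():
    print(f"  {layer}: ${amount} million ({amount / total:.0%} of total)")
```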
The fiscal year 2007 budget of $2.8 billion will not sustain the architecture over the long term because additional programs and equipment will be implemented to address the gaps. For example, this amount does not include the cost estimates related to acquiring and deploying the next generation of advanced portal monitors that are currently being tested. In addition, DNDO is just beginning new efforts to mitigate gaps in the architecture, and budget estimates for these activities are limited. We are in the process of reviewing this cost information and will provide more detailed analysis in our final report.

DNDO has been given an important and complex task: developing a global nuclear detection architecture to combat nuclear smuggling and keep radiological and nuclear weapons or materials from entering the United States. This undertaking involves coordinating a vast array of programs and technological resources that are managed by many different agencies and span the globe. Since its creation 3 years ago, DNDO has conceptually mapped the current architecture and identified how it would like the architecture to evolve in the near term. While DNDO's vision of a more comprehensive architecture is laudable, to achieve this goal, it will need to address a number of key challenges, including building close coordination and cooperation among the various agencies involved and developing and deploying more advanced radiation detection technology. Although DNDO has taken some steps to achieve these ends, it has not done so within the larger context of an overarching strategic plan with clearly established goals, responsibilities, priorities, resource needs, and mechanisms for assessing progress along the way. Developing and implementing a global nuclear detection architecture will likely take several years, cost billions of dollars, and rely on the expertise and resources of agencies and programs across the government.

Moving forward, DNDO should work closely with its counterparts within DHS, as well as at other departments, to develop a comprehensive strategic plan that helps safeguard the investments made to date, more closely links future goals with the resources necessary to achieve those goals, and enhances the architecture's ability to operate in a more cohesive and integrated fashion. We recommend that the Secretary of Homeland Security, in coordination with the Secretary of Defense, the Secretary of Energy, and the Secretary of State, develop a strategic plan to guide the development of a more comprehensive global nuclear detection architecture. Such a plan should (1) clearly define the objectives to be accomplished, (2) identify the roles and responsibilities for meeting each objective, (3) identify the funding necessary to achieve those objectives, and (4) employ monitoring mechanisms to determine programmatic progress and identify needed improvements.

We provided a draft of the information in this testimony to DNDO. DNDO provided oral comments on the draft, concurred with our recommendations, and provided technical comments, which we incorporated as appropriate. Mr. Chairman, this concludes my prepared statement. We will continue our review and plan to issue a report in early 2009. I would be pleased to answer any questions that you or other Members of the Committee have at this time. For further information on this testimony, please contact me at (202) 512-3841 or maurerd@gao.gov. Glen Levis, Assistant Director, Elizabeth Erdmann, Rachel Girshick, Sandra Kerr, and Tommy Williams made key contributions to this statement. Additional assistance was provided by Omari Norman and Carol Herrnstadt Shulman.
In April 2005, a Presidential Directive established the Domestic Nuclear Detection Office (DNDO) within the Department of Homeland Security to enhance and coordinate federal, state, and local efforts to combat nuclear smuggling domestically and abroad. DNDO was directed to develop, in coordination with the departments of Defense (DOD), Energy (DOE), and State (State), an enhanced global nuclear detection architecture, an integrated system of radiation detection equipment and interdiction activities. DNDO implements the domestic portion of the architecture, while DOD, DOE, and State are responsible for related programs outside the United States. This testimony provides preliminary observations based on ongoing work addressing (1) the status of DNDO's efforts to develop a global nuclear detection architecture, (2) the challenges DNDO and other federal agencies face in implementing the architecture, and (3) the costs of the programs that constitute the architecture. This statement draws on prior GAO reviews of programs constituting the architecture and GAO's work on strategic planning.

According to GAO's preliminary work to date, DNDO has taken steps to develop a global nuclear detection architecture but lacks an overarching strategic plan to help guide how it will achieve a more comprehensive architecture. Specifically, DNDO has developed an initial architecture after coordinating with DOD, DOE, and State to identify 74 federal programs that combat smuggling of nuclear or radiological material. DNDO has also identified gaps in the architecture, such as land border crossings into the United States between formal points of entry, small maritime vessels, and international general aviation. Although DNDO has started to develop programs to address these gaps, it has not yet developed an overarching strategic plan to guide its transition from the initial architecture to a more comprehensive architecture. For example, such a plan would define across the entire architecture how DNDO would achieve and monitor its goal of detecting the movement of radiological and nuclear materials through potential smuggling routes, such as small maritime craft or land borders in between points of entry. The plan would also define the steps and resources needed to achieve a more comprehensive architecture and provide metrics for measuring progress toward goals.

DNDO and other federal agencies face a number of coordination, technological, and management challenges. First, prior GAO reports have demonstrated that U.S.-funded radiological detection programs overseas have proven problematic to implement and sustain and have not been effectively coordinated, although there have been some improvements in this area. Second, detection technology has limitations and cannot detect and identify all radiological and nuclear materials. For example, smugglers may be able to effectively mask or shield radiological materials so that they evade detection. Third, DNDO faces challenges in managing implementation of the architecture. DNDO has been charged with developing an architecture that depends on programs implemented by other agencies. This responsibility poses a challenge for DNDO in ensuring that the individual programs within the global architecture are effectively integrated and coordinated to maximize the detection and interdiction of radiological or nuclear material. According to DNDO, approximately $2.8 billion was budgeted in fiscal year 2007 for the 74 programs included in the global nuclear detection architecture.
Of this $2.8 billion, $1.1 billion was budgeted for programs to combat nuclear smuggling internationally; $220 million was devoted to programs to support the detection of radiological and nuclear material at the U.S. border; $900 million funded security and detection activities within the United States; and approximately $575 million was used to fund a number of cross-cutting activities. The future costs for DNDO and other federal agencies to address the gaps identified in the initial architecture are not yet known or included in these amounts.
DOD Instruction 5100.73, Major DOD Headquarters Activities, defines major headquarters activities as those headquarters (and the direct support integral to their operation) whose primary mission is to manage or command the programs and operations of DOD, its components, and their major military units, organizations, or agencies. The instruction provides an official list of the organizations that it covers, including OSD; the Joint Staff; the Offices of the Secretary of the Army and Army Staff; the Office of the Secretary of the Navy and Office of the Chief of Naval Operations; Headquarters, Marine Corps; and the Offices of the Secretary of the Air Force and Air Staff. These organizations have responsibilities that include developing guidance, reviewing performance, allocating resources, and conducting mid-to-long-range budgeting as they oversee, direct, and control subordinate organizations or units. In addition to OSD, the Joint Staff, and the secretariats and staffs of the military services, other headquarters organizations include portions of the defense agencies, DOD field activities, and the combatant commands, along with their subordinate unified commands and respective service component commands.

OSD is responsible for assisting the Secretary of Defense in carrying out his or her duties and responsibilities for the management of DOD. These include policy development, planning, resource management, and fiscal and program evaluation responsibilities. The staff of OSD comprises military and civilian personnel and contracted services. While military personnel may be assigned to permanent duty in OSD, the Secretary may not establish a military staff organization within OSD.

The Joint Staff is responsible for assisting the Chairman of the Joint Chiefs of Staff, the military advisor to the President, in accomplishing his responsibilities for the unified strategic direction of the combatant forces; their operation under unified command; and their integration into a team of land, naval, and air forces. The Joint Staff is tasked to provide advice and support to the Chairman and the Joint Chiefs on matters including personnel, intelligence doctrine and architecture, operations and plans, logistics, strategy, policy, communications, cyberspace, joint training and education, and program evaluation. In addition to civilian personnel and contracted services, the Joint Staff comprises military personnel who represent, in approximately equal numbers, the Army, the Navy and Marine Corps, and the Air Force.

The Office of the Secretary of the Army has sole responsibility within the Office of the Secretary and the Army Staff for the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Army Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Army. Headquarters functions to be performed by the Army Staff include, among others, recruiting, organizing, training, and equipping of the Army. The staffs of the Office of the Secretary of the Army and the Army Staff comprise military and civilian personnel and contracted services.
The Office of the Secretary of the Navy is solely responsible within the Office of the Secretary of the Navy, the Office of the Chief of Naval Operations, and the Headquarters, Marine Corps, for oversight of the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. The Office of the Chief of Naval Operations is to provide professional assistance to the Secretary and the Chief of Naval Operations in preparing for the employment of the Navy in areas such as recruiting, organizing, supplying, equipping, and training. The Marine Corps also operates under the authority, direction, and control of the Secretary of the Navy. Headquarters, Marine Corps, consists of the Commandant of the Marine Corps and staff who are to provide assistance in preparing for the employment of the Marine Corps in areas such as recruiting, organizing, supplying, equipping, and training. The staffs of the Office of the Secretary of the Navy, the Office of the Chief of Naval Operations, and Headquarters, Marine Corps, comprise military and civilian personnel and contracted services.

The Office of the Secretary of the Air Force has sole responsibility and oversight for the following functions across the Air Force: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs (10 U.S.C. § 8014). Additionally, there is an Air Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Air Force. The headquarters functions to be performed by the Air Staff include recruiting, organizing, training, and equipping of the Air Force, among others. The staffs of the Office of the Secretary of the Air Force and the Air Staff comprise military and civilian personnel and contracted services.

In 2013, the Secretary of Defense set a target for reducing DOD components' total management headquarters budgets by 20 percent for fiscal years 2014 through 2019, including costs for civilian personnel and contracted services, while striving for a goal of 20 percent reductions to authorized military and civilian personnel. However, the department has not finalized its reduction plans.

OSD experienced an overall increase in its authorized military and civilian positions from fiscal years 2001 through 2013, representing a net increase of 20 percent from 2,205 authorized positions in fiscal year 2001 to 2,646 authorized positions in fiscal year 2013. Since fiscal year 2011, OSD's authorized positions have slightly decreased from their peak levels. The number of authorized military and civilian positions within the Joint Staff remained relatively constant since fiscal year 2005, the first year for which we could obtain reliable data, at about 1,262 authorized positions, with an increase in fiscal year 2012 to 2,599 positions, which Joint Staff officials said was associated with the realignment of duties from U.S. Joint Forces Command after its disestablishment. OSD and Joint Staff trends are illustrated in figure 1.

The military service secretariats and staffs also experienced varied increases in their number of authorized military and civilian positions from fiscal years 2001 through 2013. These increases are attributed to increased mission responsibilities for the war and other directed missions, such as business transformation, sexual assault response and prevention, and cyber.
In addition, DOD officials said converting functions performed by contracted services to civilian positions, as well as the transfer of positions from other organizations, also contributed to the increases. However, military service officials said that DOD-wide initiatives and service-specific actions since fiscal year 2010 have generally begun to slow these increases or have resulted in declines, as illustrated in figure 3.

DOD identified planned savings in its fiscal year 2015 budget submission, but it is unclear how the department will achieve those savings or how the reductions will affect the headquarters organizations in our review. In 2013, the Secretary of Defense set a target for reducing the headquarters budgets by 20 percent, to include costs for civilian personnel, contracted services, facilities, information technology, and other costs that support headquarters functions. DOD budget documents project the reductions will yield the department a total savings of about $5.3 billion from fiscal years 2015 through 2019, with most savings coming in 2019; however, specific details of the reductions through fiscal year 2019 were not provided. Moreover, in June 2014, we found that the starting point for the reductions was not clearly defined, so it is difficult to assess whether the projected savings are meaningful, given that the reductions are a small portion of DOD's budget. DOD was required by Section 904 of the National Defense Authorization Act for Fiscal Year 2014 to report on its efforts to streamline management headquarters in June 2014. DOD provided Congress with an interim response stating that, due to the recent turnover of key staff, it would not develop its initial plan on streamlining until the end of summer 2014. As of December 2014, DOD's plan had not been issued.

Officials from the headquarters organizations in this review stated that they are using different processes to identify the 20 percent reductions to their operating budgets. DOD's guidance called for components to achieve a 20 percent reduction to their headquarters operating budgets, while striving for a goal of 20 percent reductions to authorized military and civilian personnel. According to DOD officials, this flexibility allows DOD components to determine the most cost-effective workforce, retaining military and civilian personnel while reducing dollars spent on contracted services. For example, OSD officials stated that the Under Secretaries of Defense were asked to strive for a goal of reducing their operating budgets by 20 percent. However, some OSD senior officials stated that it was unfair for smaller OSD offices, such as General Counsel, Public Affairs, and Legislative Affairs, to take the same reduction as the larger offices, and consequently OSD elected to take larger reductions from the larger offices of OSD Policy; Acquisition, Technology and Logistics; Intelligence; and Personnel and Readiness. OSD officials added that they are in the process of determining how best to apply the budget reductions, preferably through attrition. Overall, DOD projected the reductions will result in at least $1 billion in savings for OSD's headquarters over a 5-year period, but it is unclear how large the reductions will ultimately be. The Joint Staff projects reductions of about $450,000 from fiscal year 2015 through fiscal year 2019.
Joint Staff officials stated that they plan to reduce authorized positions by about 150 civilian positions (about 14 percent of their fiscal year 2013 authorized civilian positions) and by about 160 military positions (about 11 percent of their fiscal year 2013 authorized military positions). Specifics about the plans for the military service secretariats and staffs were also in development as of December 2014. Army officials estimate a reduction of about 560 civilian full-time-equivalent positions in the Army Secretariat and Army Staff (about 21 percent of fiscal year 2013 authorized civilian positions); however, the officials said that the reductions in military positions will be determined through an Army review of military personnel in time for the fiscal year 2017 budget submission. Additionally, in July 2014, the Secretary of the Army announced plans for an additional review, to be completed by March 2015, to determine the optimal organization and strength of Headquarters, Department of the Army, and, subsequently, any adjustment of programmed reductions. Navy officials stated that the Navy will take 20 percent reductions in both civilian and military personnel, but the exact reductions through fiscal year 2019 would not be available before the issuance of the Section 904 report to Congress. A Marine Corps official stated that after submitting its fiscal year 2015 budget information, the Marine Corps conducted a structural review over a period of 6 to 8 months that identified a larger number of positions in Headquarters, Marine Corps, that should be subject to the reduction. The official further stated that these changes should better position the Marine Corps to more accurately report its headquarters structure for the fiscal year 2016 budget, but added that the actual reductions would likely differ from the original estimates for fiscal year 2015. The revised Marine Corps data were not available as of January 2015. More specific information was available from the Air Force. In July 2014, the Air Force completed its management headquarters review and notified Congress of its reorganization plans, including a reduction of 300 authorized military and civilian positions (about 12 percent of fiscal year 2013 authorized positions) and a 20 percent reduction to the headquarters operating budgets for the Air Force Secretariat and Air Staff by fiscal year 2019.
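The announced reductions above pair absolute position cuts with percentages of fiscal year 2013 authorized levels, so the approximate baselines can be backed out. The following is a minimal sketch of that arithmetic, using the rounded figures quoted in this section; because the inputs are rounded, the implied baselines are rough estimates only.

```python
# Back out the approximate fiscal year 2013 baselines implied by
# each announced reduction: (positions cut, share of FY2013
# authorized positions). Rounded inputs yield rough estimates.
announced_cuts = {
    "Joint Staff, civilian": (150, 0.14),
    "Joint Staff, military": (160, 0.11),
    "Army Secretariat and Staff, civilian": (560, 0.21),
    "Air Force Secretariat and Air Staff": (300, 0.12),
}

for organization, (cut, share) in announced_cuts.items():
    implied_baseline = cut / share
    print(f"{organization}: {cut} positions is about {share:.0%} "
          f"of an implied baseline of roughly {implied_baseline:,.0f}")
```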
The headquarters organizations we reviewed (OSD, the Joint Staff, the secretariats and staffs for the Army, Navy, and Air Force, and Headquarters, Marine Corps) do not determine their personnel requirements as part of a systematic requirements-determination process, nor do they have procedures in place to ensure that they periodically reassess those requirements, as outlined in DOD and other guidance. Current personnel levels for these headquarters organizations are traceable to statutory limits enacted during the 1980s and 1990s to force efficiencies and reduce duplication. However, these limits have been waived since fiscal year 2002 and have little practical utility because of statutory exceptions for certain categories of personnel and because the limits do not include personnel in supporting organizations that perform headquarters-related functions. OSD, the Navy, and the Marine Corps have recognized problems with their existing requirements-determination processes and are beginning to take steps to modify them, but their efforts are not yet complete.

Without systematic determinations of personnel requirements and periodic reassessments of them using organizational and workforce analyses, DOD will not be well positioned to proactively identify efficiencies and limit personnel growth within these headquarters organizations. Moreover, until such requirements are determined, Congress will not have the information needed to reexamine existing statutory limits.

Most of the DOD headquarters organizations that we reviewed are subject to statutory limits on the number of authorized personnel, although these limits have been waived since fiscal year 2002 and are of limited utility due to statutory exceptions and exclusions of certain personnel. Congress placed statutory limits on authorized military and civilian personnel for the military departments' secretariats and staffs in 1986, in part, to force a comprehensive management review of duplication and identify effective solutions to existing personnel duplication among the services. In 1996, Congress also established a statutory limit for OSD military and civilian personnel because it was concerned about the growth of OSD personnel despite a declining defense budget and military force structure. The military departments' statutory limits were set at 85 percent of the total number of personnel in the secretariats and military staffs prior to 1986, while the OSD statutory limit represented a 15 percent reduction from 1994 personnel levels. The Joint Staff is not currently subject to a statutory limit.

Although Congress placed statutory limits on OSD and the military departments' secretariats and military staffs, the President has declared a national emergency each year from fiscal years 2002 to 2014, which had the effect of waiving the limits for the military departments each year. While the limits have been waived, officials from the Army, Navy, and Air Force stated that they seek to keep their number of authorized military and civilian positions within or close to these limits because the waiver is valid only for 1 year at a time, and they are uncertain whether the waiver will be granted again. However, we found that the secretariats and military staffs of the departments of the Army and Navy have totals for fiscal year 2013 that would exceed the existing statutory limits were those limits in effect. Table 1 shows the statutory limits of the headquarters organizations that we reviewed and the total number of authorized positions they reported in fiscal year 2013, including, where applicable, the percentage by which they vary from the statutory limits.

In addition, the numbers of authorized military and civilian positions counted against the statutory limits may not accurately reflect or include all personnel supporting the headquarters, due to statutory exceptions and the exclusion of certain personnel in support organizations conducting headquarters-related functions. Beginning in fiscal year 2009, Congress provided exceptions to the limitations on personnel for certain categories of acquisition personnel and for those hired pursuant to a shortage category designated by the Secretary of Defense or the Director of the Office of Personnel Management. These exceptions allow DOD to adjust its baseline personnel limitation or exclude certain personnel from the limitation. For example, the Army reported for fiscal year 2015 that it has 1,530 military and civilian personnel that are subject to these exceptions and therefore do not count against its statutory limits.
An official in OSD's Office of the Under Secretary for Personnel and Readiness told us that the exceptions that were added to the statutory limits as of fiscal year 2009 make the limits virtually obsolete. The statutory limits also do not apply to personnel in organizations supporting the military service secretariats and staffs who perform headquarters-related functions. For example, the Army and Air Force each have some personnel within their field operating agencies that support their military service secretariats or staffs in accomplishing their missions but that we found are not subject to the statutory limits. Organizations that support the Air Force Secretariat and Air Staff in conducting their mission include, but are not limited to, the U.S. Air Force Cost Analysis Agency, the U.S. Air Force Inspection Agency, the U.S. Air Force Personnel Center, and the U.S. Air Force Audit Agency, and they include thousands of personnel. As illustrated in figure 4, in the case of the Army, the organizations and agencies that support the Army Secretariat and Army Staff are almost three times as large as the Secretariat and Staff themselves, and include the U.S. Army Finance Command, the U.S. Army Manpower Analysis Agency, and the U.S. Army Force Management Support Agency, among others.

By contrast, elements of the Washington Headquarters Services, a support organization for OSD, are included in OSD's statutory limits. This means that some personnel in the Washington Headquarters Services who conduct management headquarters-related functions count toward OSD's statutory limit. In addition, the applicable statute contains a provision limiting OSD's ability to reassign functions; specifically, DOD may not reassign functions solely in order to evade the personnel limitations required by the statute. The statutes governing personnel limitations for the military services' secretariats and staffs do not contain similar limitations on the services' ability to reassign headquarters-related functions elsewhere. Military service officials explained that the existing statutory limits preclude organizational efficiencies by causing them to move personnel performing headquarters-related functions elsewhere within the department, including to the field operating agencies. In addition, DOD officials stated that the statutory limits may have unintended consequences, such as causing DOD to use contracted services to perform headquarters-related tasks when authorized military and civilian personnel are unavailable; this contractor workforce is not subject to the statutory limits.

We also found that Headquarters, Marine Corps, plans to revise the number of military and civilian personnel it counts against the statutory limits to exclude certain personnel. Officials in Headquarters, Marine Corps, said that, unlike their counterparts in the other three services, their headquarters is not entirely a management headquarters activity, because it incorporates some nonheadquarters functions for organizational and efficiency reasons, and thus the limits should not apply to those personnel. However, this planned change seems to contradict the intent of the statute, which establishes a limit on personnel within the Navy Secretariat, the Office of the Chief of Naval Operations, and Headquarters, Marine Corps.
Also, DOD Instruction 5100.73, Major DOD Headquarters Activities, states that Headquarters, Marine Corps, is a management headquarters organization in its entirety, which would include all its personnel and operating costs. Marine Corps officials told us that DOD plans to revise DOD Instruction 5100.73 to classify only certain functions within Headquarters, Marine Corps, as management headquarters activities. According to an official, the fiscal year 2013 personnel totals for Headquarters, Marine Corps, do not reflect these changes, which may account for the large percentage difference between the existing statutory limits and the number of Navy and Marine Corps authorized personnel in fiscal year 2013. An official from the Department of the Navy also noted that the department has not reexamined the number of personnel who would fall under the statutory limits since the limit was first waived in September 2001.

According to internal-control standards for the federal government, information should be recorded and communicated to others who need it in a form that enables them to carry out their responsibilities. An organization must have relevant, reliable, and timely communications, as well as the information needed to achieve its objectives. However, DOD's headquarters reporting mechanism to Congress, the Defense Manpower Requirements Report, lacks key information. This annual report to Congress includes information on the number of military and civilian personnel assigned to major DOD headquarters activities in the preceding fiscal year and estimates of such numbers for the current and subsequent fiscal years, as well as the amount of any adjustment in personnel limits made by the Secretary of Defense or the secretary of a military department. However, in the most recent report, for fiscal year 2015, only the Army reports information on the number of baseline personnel within the Army Secretariat and Army Staff that count against the statutory limits, along with the applicable adjustments to the limits. Similar information for OSD, the Air Force Secretariat and Air Staff, the Navy Secretariat, the Office of the Chief of Naval Operations, and Headquarters, Marine Corps, is not included because DOD's reporting guidance does not require it. Without information identifying which personnel in each organization are being counted against the statutory limits, it will be difficult for Congress to determine whether the existing limits are effective in limiting personnel growth within the department or should be revised to reflect current requirements.

While the organizations we reviewed are currently assessing their personnel requirements, driven by department-wide efforts to reduce management overhead in response to budget constraints, we found that none of the headquarters organizations within our review has determined its personnel requirements as part of a systematic requirements-determination process. Such systematic personnel-requirements processes are considered a good human-capital practice across government, including in DOD, and they include certain key elements. Among these elements are that organizations should (1) identify the organization's mission, functions, and tasks; and (2) determine the minimum number and type of personnel (military personnel, civilian personnel, and contracted services) needed to fulfill those missions, functions, and tasks by conducting a workforce analysis.
Such a workforce analysis should identify mission-critical competencies as well as gaps and deficiencies, and systematically define the size of the total workforce needed to meet organizational goals. By contrast, the headquarters organizations we reviewed use authorized personnel levels from the previous year as a baseline from which to generate any new requirements, and these personnel levels are ultimately based not on a workforce analysis but on the statutory limits that Congress established in the 1980s and 1990s. According to DOD officials, it is more difficult to determine personnel requirements for OSD, the military service secretariats, or the military staffs, whose tasks include developing policy or strategy, than it is for the military services' major commands or units that have distinct tasks, such as repairing aircraft or conducting ship maintenance. DOD officials stated that headquarters organizations' workload is unpredictable and includes not only traditional policy and oversight responsibilities but also managing unforeseen events and initiatives, such as the Fort Hood shooting, Secretary of Defense-directed reductions, and responding to congressionally mandated reviews or reports. However, systematically determining personnel requirements for the total force (military personnel, civilian personnel, and contracted services) by conducting a workforce analysis, rather than relying on historic personnel levels and existing statutory limits, would better position these headquarters organizations to respond to unforeseen events and initiatives by allowing them to identify critical mission requirements as well as mitigate risks to the organizations' efficiency and effectiveness. Without such a determination of personnel requirements for the total force, DOD headquarters organizations may not be well positioned to identify opportunities for efficiencies and reduce the potential for headquarters-related growth. In addition, submitting these personnel requirements to Congress would provide Congress with key information for determining whether the existing statutory limits on military and civilian personnel are effective in limiting headquarters personnel growth.

In addition to not systematically determining their personnel requirements, we found that the headquarters organizations do not have procedures in place to ensure that they periodically reassess these requirements. This is contrary to guidance from DOD and all of the military services suggesting that they conduct periodic reassessments of their personnel requirements. For example, DOD guidance states that existing policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of personnel resources, and that assigned missions should be accomplished using the least costly mix of military, civilian, and contractor personnel. Moreover, the military services have more specific guidance indicating that personnel requirements should be established at the minimum essential level to accomplish the required workload and should be periodically reviewed. For example, the Air Force states that periodic reviews should occur at least every 2 years. In addition, systematic personnel-requirements processes are considered a good human-capital practice across government, including in DOD.
These practices call for organizations to have personnel requirements-determination processes that, among other things, reassess personnel requirements by conducting analysis on a periodic basis to determine the most efficient choices for workforce deployment. These reassessments should include analysis of organizational functions to determine appropriate structure, including identifying any excess organizational layers or redundant operations, and workforce analysis to determine the most effective workloads for efficient functioning. None of the headquarters organizations we reviewed have procedures in place to ensure that they periodically reassess their personnel requirements. This is unlike the military services’ major commands or units, for which officials within the military departments stated they do reassess personnel requirements. While Navy officials stated that the Navy may occasionally reassess the requirements for a particular organization within the Secretariat or Office of the Chief of Naval Operations, such reassessments are conducted infrequently and without the benefit of a standardized methodology. Officials at Headquarters, Marine Corps, stated that they are beginning to implement a new requirements-determination process, which requires commanders to conduct an annual analysis to determine their organizations’ personnel requirements. However, this process is not expected to be fully implemented until October 2015. Officials from headquarters organizations that we reviewed said that they do not periodically reassess personnel requirements because their organization’s requirements do not change much from year to year and they adjust requirements when new missions or tasks are assigned to their organization. DOD officials also maintained that the process of reassessing these personnel requirements would be lengthy and require an increase in personnel to conduct the analysis. Officials also stated that they believe the department’s recent efficiency efforts have allowed their organizations to reassess personnel requirements and identify opportunities for efficiencies. For example, officials stated that they conducted comprehensive reviews of their organizations’ personnel requirements as part of the effort to identify efficiencies as directed by former Secretary of Defense Robert Gates in 2010, as part of the OSD organizational review conducted by former Secretary of the Air Force Mike Donley in 2013, and most recently as part of Secretary of Defense Chuck Hagel’s effort to reduce management headquarters. However, these reviews have generally been ad hoc and done in response to internally driven or directed reductions, rather than as part of the organization’s systematic requirements-determination process. Conducting periodic reassessments as part of a systematic requirements- determination process, rather than in response to various DOD-directed efforts, would allow headquarters organizations to proactively identify any excess organizational layers or redundant operations and to inform decision making during any future efficiency efforts and budget reviews. In addition, reassessments of personnel requirements could occur periodically, not necessarily annually, thereby lessening the amount of time and labor that headquarters organizations devote to conducting reassessments. For example, Army guidance states that such reassessments should occur every 2 to 5 years. 
Without periodic reassessment of personnel requirements for the total force, it will be difficult for the headquarters organizations in our review to be well positioned to effectively identify opportunities for efficiencies and limit personnel growth. All but one of the organizations we reviewed have recognized problems with requirements determination, and some are beginning to take steps to modify their related processes, but these efforts are not yet complete. For example, OSD conducted a set of studies, directed by the Secretary of Defense in December 2013, aimed at further improving management and administration of personnel. According to OSD officials, the data and insights from these studies will inform DOD-wide business process and system reviews being directed by the Deputy Secretary of Defense. For example, officials stated that an OSD-wide process for determining and reassessing personnel requirements may replace the current process whereby each OSD office sets its personnel requirements individually. OSD officials also stated that the new process, if implemented, might include a standard methodology to help OSD conduct a headquarters workforce analysis and determine and periodically reassess its personnel requirements. DOD did not provide a time frame for implementing the results of the studies and did not confirm whether implementation would include establishment of an OSD-wide personnel requirements-determination process. Within the Department of the Navy, the Navy Shore Manpower Requirements Determination Final Report (revised July 17, 2013) outlined a methodology for analyzing workload and determining and assessing personnel requirements. Based on this report, the Navy is conducting its own review of the shore personnel requirements-determination process, with the goal of establishing guidance for use in 2015. In 2011, the Marine Corps developed a standardized approach, known as the Strategic Total Force Management Planning process, for determining and reassessing headquarters personnel requirements on an annual basis. According to Marine Corps officials and guidance, this process requires commanders to annually assess their organization's mission, analyze its current and future organizational structures, conduct a gap analysis, and develop, execute, and monitor a plan of action to address any gaps. The Marine Corps is currently revising its guidance to reflect this new process, and commanders are not required to develop their requirements and submit an action plan until October 2015. Despite these efforts, none of these processes have been fully implemented or reviewed. Therefore, it is too early to know whether the new processes will reflect the key elements of a personnel requirements-determination process by enabling the organizations to identify missions, systematically determine personnel requirements, and reassess them on a periodic basis using organizational and workforce analysis. Over the past decade, OSD, the Joint Staff, and the military service secretariats and staffs have grown to manage the increased workload and budgets associated with a military force engaged in conflict around the world. Today, DOD is facing a constrained budget environment and has stated that it needs to reduce the size of its headquarters, to include all components of its workforce: military personnel, civilian personnel, and contracted services. 
DOD and the military services have undertaken reviews to reduce headquarters, but these budget-driven efforts have not been the result of systematic determinations of personnel needs. Statutory limits on these headquarters have been waived since 2002, but these limits would likely be counterproductive today if the waiver were dropped, because they were set in the 1980s and 1990s and are inconsistently applied due to statutory exceptions and DOD's exclusion of personnel conducting headquarters-related functions. Specifically, these limits omit personnel in supporting organizations to the military service secretariats and staffs that perform headquarters-related functions. Because of these exceptions and omissions, the statutory limits may be of limited utility in achieving Congress's original aim of stemming the growth of headquarters personnel and reducing duplication of effort. According to DOD officials, the existing statutory limits encourage the headquarters organizations to manage military and civilian personnel requirements at or near the limits, rather than using a systematic requirements-determination process that establishes the total force that is truly needed and identifies whether any efficiencies can be achieved. Headquarters organizations in our review have not systematically determined how many personnel they need to conduct their missions. While some organizations have begun to take such steps, their plans are not firm and their processes have not been finalized. Unless the organizations conduct systematic analyses of their personnel needs for the total force and establish and implement procedures to ensure that they periodically reassess those requirements, the department will lack assurance that its headquarters are sized appropriately. Looking to the future, systematically determining personnel requirements and conducting periodic reassessments could inform decision making during any future efficiency efforts and support budget formulation. In addition, determining these personnel requirements and submitting the results to Congress as part of DOD's Defense Manpower Requirements Report or through separate correspondence, along with any recommendations about adjustments needed to the statutory limits, could form a foundation upon which Congress could reexamine the statutory limits, as appropriate. To ensure that headquarters organizations are properly sized to meet their assigned missions and use the most cost-effective mix of personnel, and to better position DOD to identify opportunities for more efficient use of resources, we recommend that the Secretary of Defense direct the following three actions: (1) conduct a systematic determination of personnel requirements for OSD, the Joint Staff, and the military services' secretariats and staffs, which should include analysis of missions, functions, and tasks, and the minimum personnel needed to accomplish those missions, functions, and tasks; (2) submit these personnel requirements, including information on the number of personnel within OSD and the military services' secretariats and staffs that count against the statutory limits, in the next Defense Manpower Requirements Report to Congress or through separate correspondence, along with any applicable adjustments and recommendations needed to modify the existing statutory limits; and (3) establish and implement procedures to conduct periodic reassessments of personnel requirements within OSD and the military services' secretariats and staffs. 
Congress should consider using the results of DOD's review of headquarters personnel requirements to reexamine the statutory limits. Such an examination could consider whether supporting organizations that perform headquarters functions should be included in statutory limits and whether the statutes on personnel limitations within the military services' secretariats and staffs should be amended to include a prohibition on reassigning headquarters-related functions elsewhere. We provided a draft of this report to DOD for review and comment. In written comments on a draft of this report, DOD partially concurred with the three recommendations and raised concerns regarding what it believes is a lack of appropriate context in the report. DOD's comments are summarized below and reprinted in their entirety in appendix IX. In its comments, DOD raised concerns that the report lacks perspective when characterizing the department's headquarters staff, stating that it is appropriate for the department to have a complex and multi-layered headquarters structure given the scope of its missions. We agree that DOD is one of the largest and most complex organizations in the world, and make note of its many broad and varied responsibilities in our report. Notwithstanding these complexities, the department itself has repeatedly recognized the need to streamline its headquarters structure. For example, in 2010, the Secretary of Defense expressed concerns about the dramatic growth in DOD's headquarters and support organizations that had occurred since 2001, and initiated a series of efficiency initiatives aimed at stemming this growth. The Secretary of Defense specifically noted the growth in the bureaucracy that supports the military mission, especially the department's military and civilian management layers, and called for an examination of these layers. In addition, in January 2012, the administration released defense strategic guidance that calls for DOD to continue to reduce the cost of doing business, which includes reducing the rate of growth in personnel costs and finding further efficiencies in overhead and headquarters, in its business practices, and in other support activities. Our report discusses some of the department's efficiency-related efforts, and thus we believe it contains appropriate perspective. DOD also expressed concerns that the report lacks appropriate context when addressing the causes for workforce growth, stating that such growth was in response to rapid mission and workload increases, specific workforce-related initiatives, realignments, streamlining operations, and reducing redundancies and overhead. Our draft report noted some of these causes of headquarters workforce growth, but we have added additional information to the report on other causes, such as increased mission responsibilities for the war and other directed missions, including business transformation, intelligence, cyber, suicide prevention, sexual assault response and prevention, wounded warrior care, family support programs, transition assistance, and veterans programs, to provide context and address DOD's concerns. DOD partially concurred with the first recommendation that the Secretary of Defense direct a systematic determination of the personnel requirements of OSD, the Joint Staff, and the military services' secretariats and staffs, which should include analysis of missions, functions, and tasks, and the minimum personnel needed to accomplish those missions, functions, and tasks. 
The department noted in its letter that it will continue to use the processes and prioritization that are part of the Planning, Programming, Budgeting, and Execution process, and will also investigate other methods for aligning personnel to missions and priorities. DOD also stated that it is currently conducting Business Process and System Reviews of the OSD Principal Staff Assistants, defense agencies, and DOD field activities to aid in linking mission responsibilities to resource requirements. However, the department did not provide any details specifying whether any of these actions would include a workforce analysis to systematically determine personnel requirements, rather than continuing to rely on historic personnel levels and existing statutory limits as the basis for those requirements, nor did the department acknowledge the need for such analysis. Moreover, according to DOD's implementation guidance for the Business Process and Systems Review, which we reference in our report, this review is focused on business processes and supporting information technology systems within certain defense headquarters organizations, rather than a systematic determination of personnel requirements for those organizations. DOD also stated in its comments that headquarters staff provide knowledge continuity and subject matter expertise and that a significant portion of their workload is unpredictable. We agree, but believe that headquarters organizations would be better positioned to respond to unforeseen events and initiatives if their personnel requirements were based on workforce analysis, which would allow them to identify critical mission requirements as well as mitigate risks to the organizations' efficiency and effectiveness while still responding to unpredictable workload. Without a systematic determination of personnel requirements, DOD headquarters organizations may not be well positioned to identify opportunities for efficiencies and reduce the potential for headquarters-related growth. Several headquarters organizations provided comments on their specific requirements-determination processes in connection with this first recommendation. The Army noted that it has an established headquarters requirements-determination process in the G-3, supported by the U.S. Army Manpower Analysis Agency. While the Army does have a requirements-determination process, we note in our report that this process did not result in the systematic determination of requirements for the Army Secretariat and Staff; rather, the Army headquarters organizations we reviewed use authorized personnel levels from the previous year as a baseline from which to generate any new requirements, and these personnel levels are ultimately based not on a workforce analysis, but on the statutory limits that were established by Congress in the 1980s. In addition, while the Army's requirements-determination process does call for reassessments of personnel requirements every 2 to 5 years, Army officials stated that they do not conduct these periodic reassessments of the personnel requirements for the Army headquarters organizations in our review, in part because the U.S. Army Manpower Analysis Agency lacks the authority to initiate such reassessments or enforce their results. 
In the letter, the Army also noted concerns that a statement in our draft report—namely, that the organizations that support the Army Secretariat and staff are almost three times as large but are excluded from the statutory limits—may be misleading and lack appropriate context. In response to the Army's concerns and to provide additional context, we have clarified the report's language to state that only some personnel in these organizations support their military service secretariats and staffs in accomplishing their mission and are not subject to the statutory limits. The Marine Corps noted that it conducted a full review of force structure in 2012, which included a Commandant-directed examination of the functions of every headquarters and staff. We state in our report that the Marine Corps and others in the department have previously conducted efficiency-related efforts, which officials believe have allowed their organizations to reassess personnel requirements and identify opportunities for efficiencies. However, these reviews have generally been ad hoc and done in response to internally driven or directed reductions, rather than as part of an organization's systematic requirements-determination process. Having workforce and organizational analyses as part of a systematic requirements-determination process, rather than in response to DOD-directed efficiency efforts, would allow headquarters organizations to proactively identify any excess organizational layers or redundant operations and inform decision making during future efficiency efforts and budget reviews. Finally, the Joint Staff stated that it utilizes its existing Joint Manpower Validation Process as a systematic requirements-determination process when requesting permanent joint manpower requirements, adding that this process reviews mission drivers, capability gaps, and impact assessments, and determines the correct size and characteristics of all new billets. However, as we found in May 2013, this process focuses on requests for additional positions or nominal changes in authorized positions, rather than evaluating whether authorized positions are still needed to support assigned missions. Moreover, we found that personnel levels for the headquarters organizations that we reviewed, including the Joint Staff, are ultimately not based on a workforce analysis that systematically defines the size of the total workforce needed to meet organizational goals. Rather, these organizations use authorized personnel levels from the previous year as a baseline and do not take steps to systematically determine and periodically reassess them. Thus, we continue to believe that DOD should conduct a systematic determination of personnel requirements, including an analysis of missions, functions, and tasks to determine the minimum personnel needed to accomplish those missions, functions, and tasks. DOD partially concurred with the second recommendation that the Secretary of Defense direct the submission of these personnel requirements, including information on the number of personnel within OSD and the military services' secretariats and staffs that count against the statutory limits, in the next Defense Manpower Requirements Report to Congress or through separate correspondence, along with any applicable adjustments and recommendations needed to modify the existing statutory limits. 
DOD stated that it has ongoing efforts to refine and improve its reporting capabilities associated with these requirements, noting that the department has to update DOD Instruction 5100.73, Major DOD Headquarters Activities, before it can determine personnel requirements that count against the statutory limits. In March 2012, we recommended that DOD revise DOD Instruction 5100.73, Major DOD Headquarters Activities, but DOD has not provided an estimate of when this revised instruction would be finalized. DOD also did not indicate in its letter whether the department would submit personnel requirements that count against the statutory limits in the Defense Manpower Requirements Report, as we recommend, once the instruction is finalized. We believe that submitting these personnel requirements to Congress in this DOD report would provide Congress with key information to determine whether the existing statutory limits on military and civilian personnel are effective in limiting headquarters personnel growth. In addition, the Marine Corps provided more specific comments in connection with the second recommendation, noting that in 2014 it had reviewed and validated all headquarters down to the individual billet level, identifying billets that should be coded as performing major DOD headquarters activities, resulting in a net increase of reported headquarters structure. The Marine Corps stated that it planned to report this information as part of DOD's fiscal year 2016 budget and in the Defense Manpower Requirements Report. Our report specifically notes the review and the Marine Corps effort to more accurately report its headquarters structure for the fiscal year 2016 budget. However, until the department as a whole takes concrete steps to gather reliable information about headquarters requirements and report this information to Congress, neither the department nor Congress will have the information needed to oversee these headquarters. DOD partially concurred with the third recommendation that the Secretary of Defense direct the establishment and implementation of procedures to conduct periodic reassessments of personnel requirements within OSD and the military service secretariats and staffs. DOD said that it supports the intent of the recommendation but stated that such periodic reassessments would require additional resources and personnel, which would drive an increase in the number of personnel performing major DOD headquarters activities. Specifically, DOD stated it intends to examine the establishment of requirements-determination processes across the department, to include the contractor workforce, but noted that this will require a phased approach over a longer time frame. However, DOD also did not provide any estimated time frames for its examination of this process. As we noted in the report, reassessments of personnel requirements could occur periodically, not necessarily annually, thereby lessening the amount of time and labor that headquarters organizations devote to conducting reassessments. Further, until a periodic reassessment of requirements takes place, the department will lack reasonable assurance that its headquarters are sized appropriately for its current missions, particularly in light of the drawdown from Iraq and Afghanistan and its additional mission responsibilities. In addition, the Marine Corps and the Joint Staff provided specific comments in connection with the third recommendation in DOD's letter. 
First, the Marine Corps noted that it conducts periodic reviews through the Quadrennial Defense Review and through force structure review boards that shape the Marine Corps for new missions and in response to combatant commander demands. However, these reviews are focused on forces as a whole and not specifically on headquarters. Second, the Joint Staff stated that it has set personnel requirements twice since 2008, and noted that it has taken reductions during various budget- or efficiency-related efforts, such as the Secretary of Defense's 2012 efficiency review and the Secretary of Defense's 20-percent reductions to headquarters budgets, which are ongoing. However, conducting periodic reassessments as part of a systematic requirements-determination process, rather than in response to ad hoc, DOD-directed efficiency efforts, would allow headquarters organizations to proactively identify any excess organizational layers or redundant operations. This, in turn, would prepare the headquarters organizations to better inform decision making during any future efficiency efforts and budget reviews. DOD stated that, although it appreciates our inclusion in the report of a matter calling for Congress to consider using the results of DOD's review of personnel requirements to reexamine the statutory limits, it believes any statutory limitations on headquarters personnel place artificial constraints on workforce sizing and shaping, thereby precluding total force management. Therefore, DOD stated that it opposes any legislative language that imposes restrictions on the size of the department's workforce. Both the Marine Corps and the Joint Staff provided specific comments in regard to GAO's matter for congressional consideration, although these comments were directed toward the specific statutory limits for their organizations, not the GAO matter for congressional consideration itself. As we noted in our report, we believe that the statutory limits are of limited utility. The intent of this matter is not to prescribe specific modifications to the statutory limits on headquarters personnel to Congress but rather to suggest that Congress consider making those modifications that it considers most appropriate based on a review of personnel requirements provided by the department. Finally, the Army also provided input regarding the overall methodology behind the report, noting that tracking contract support of headquarters organizations solely through funding source may skew attempts at general trend analysis because funding source does not always correlate to a function being performed in the headquarters. Our report notes some of the challenges in tracking contract support of headquarters organizations, but to add context and address the Army's concerns, we have modified text in appendix V, which focuses on the resources of the Headquarters, Department of the Army. Specifically, we have modified figure 12 to note that, according to Army officials, the costs for contracted services provided from its financial accounting systems may not accurately reflect costs incurred by the headquarters because the accounting systems show the funding for contractors but not necessarily where the contracted work was performed, which is the information displayed in DOD's Inventory of Contracted Services. DOD also provided technical comments, which we have incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. We have issued several reports since 2012 on defense headquarters and on the department's ability to determine the right number of personnel needed to perform headquarters functions. In March 2012, we found that while the Department of Defense (DOD) has taken some steps to examine its headquarters resources for efficiencies, additional opportunities for savings may exist by further consolidating organizations and centralizing functions. We also found that DOD's data on its headquarters personnel lacked the completeness and reliability necessary for use in making efficiency assessments and decisions. In that report, we recommended that the Secretary of Defense direct the Secretaries of the military departments and the heads of the DOD components to continue to examine opportunities to consolidate commands and to centralize administrative and command support services, functions, or programs. Additionally, we recommended that the Secretary of Defense revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all headquarters organizations; specify how contractors performing headquarters functions will be identified and included in headquarters reporting; clarify how components are to compile the information needed for headquarters-reporting requirements; and establish time frames for implementing actions to improve tracking and reporting of headquarters resources. DOD generally concurred with the findings and recommendations in our March 2012 report. DOD officials have stated that, since 2012, several efforts have been made to consolidate or eliminate commands and to centralize administrative and command support services, functions, or programs. For example, OSD officials said that DOD has begun efforts to assess which headquarters organizations are not currently included in its guiding instruction on headquarters, but as of July 2014, it had not completed its update of the instruction to include these organizations. DOD officials also identified further progress on including contractors performing major DOD headquarters activities in headquarters reporting. In May 2013, we found that authorized military and civilian positions at the geographic combatant commands—excluding U.S. Central Command—had increased by about 50 percent from fiscal year 2001 through fiscal year 2012, primarily due to the addition of new organizations, such as the establishment of U.S. Northern Command and U.S. Africa Command, and increased mission requirements for the theater special operations commands. We also found that DOD's process for sizing its combatant commands had several weaknesses, including the absence of a comprehensive, periodic review of the existing size and structure of these commands and inconsistent use of personnel-management systems to identify and track assigned personnel. 
DOD did not concur with our recommendation that it conduct comprehensive and periodic reviews of the combatant commands' existing size, but we continue to believe that institutionalizing a periodic evaluation of all authorized positions would help to systematically align manpower with missions and add rigor to the requirements process. DOD concurred with our recommendation that it revise its guiding instruction on managing joint personnel requirements—Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program—to require the commands to improve visibility over all combatant command personnel. DOD has established a new manpower tracking system, the Fourth Estate Manpower Tracking System, that is to track all personnel data, including data on temporary personnel, and to identify specific guidelines and timelines for inputting and reviewing personnel data. Additionally, DOD concurred with our recommendation to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands and to revise DOD's Financial Management Regulation. As of September 2014, the process outlined by DOD to gather information on authorized and assigned personnel at the service component commands is the same as the one identified during our prior work. DOD concurred with our recommendation to revise volume 2A, chapter 1 of DOD's Financial Management Regulation 7000.14R to require the military departments, in their annual budget documents for operation and maintenance, to identify the authorized military positions and civilian and contractor full-time equivalents at each combatant command and provide detailed information on funding required by each command for mission and headquarters support, such as civilian pay, contracted services, travel, and supplies. As of September 2014, DOD plans to prepare an exhibit that reflects the funding and full-time equivalent information by combatant command and include it in an update to the DOD Financial Management Regulation prior to preparation of the fiscal year 2016 budget estimate submission. In June 2014, we found that DOD's functional combatant commands have shown substantial increases in authorized positions and costs to support headquarters operations since fiscal year 2004, primarily to support recent and emerging missions, including military operations to combat terrorism and the emergence of cyberspace as a warfighting domain. Further, we found that DOD did not have a reliable way to determine the resources devoted to management headquarters as a starting point for DOD's planned 20 percent reduction to headquarters budgets, and thus we concluded that actual savings would be difficult to track. We recommended that DOD reevaluate the decision to focus reductions on management headquarters to ensure meaningful savings and set a clearly defined and consistently applied baseline starting point for the reductions. Further, we recommended that DOD track the reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. DOD partially concurred with our recommendation to reevaluate its decision to focus reductions on management headquarters, questioning, in part, the recommendation's scope. We agreed that the recommendation has implications beyond the functional combatant commands, which was the scope of our review, but the issue we identified is not limited to these commands. 
DOD generally concurred with our two other recommendations that it set a clearly defined and consistently applied baseline starting point and track reductions against the baselines. To address these two recommendations, DOD said that it planned to use the Future Years Defense Program data to set the baseline going forward. DOD stated that it was enhancing data elements within a DOD resource database to better identify management headquarters resources to facilitate tracking and reporting across the department. House Report 113-102 mandated that GAO review the military personnel, civilian personnel, and contracted services resources devoted to the Office of the Secretary of Defense (OSD), the Joint Staff, and the military departments' secretariats and military staffs from fiscal year 2001 through fiscal year 2013. This report (1) identifies past trends, if any, in personnel resources devoted to OSD, the Joint Staff, and the secretariats and staffs of the military services and any plans for reductions to these headquarters organizations; and (2) evaluates the extent to which the Department of Defense (DOD) determines and reassesses personnel requirements for these headquarters organizations. In addition to OSD, the Joint Staff, and the secretariats and staffs of the military departments, other headquarters organizations include portions of the defense agencies, DOD field activities, and the combatant commands, along with their subordinate unified commands and respective service component commands. Joint Staff J-2 (Intelligence), which receives its personnel and funding from the Defense Intelligence Agency, provided personnel data that it deemed sensitive but unclassified, so we excluded it from this report. The Navy was unable to provide complete personnel data prior to fiscal year 2005 due to a change in personnel management systems used by the Office of the Chief of Naval Operations. Similarly, Headquarters, Marine Corps, was unable to provide personnel data prior to fiscal year 2005 due to a change in personnel management systems. We requested available data on contracted services performing functions for the organizations within our review, but we were only able to obtain and analyze information from OSD and the Army. We compared these data to data we had obtained from OSD and the Army on authorized military and civilian positions. We present DOD data on contracted services for context as a comparison against authorized military and civilian positions. Because we did not use these data to support our findings, conclusions, or recommendations, we did not assess their reliability. DOD is still in the process of compiling complete data on contractor full-time equivalents. Our review also focused on operation and maintenance obligations—because these obligations reflect the primary costs to support the headquarters operations of OSD, the Joint Staff, and secretariats and staffs for the military services—including the costs for civilian personnel, contracted services, travel, and equipment, among others. Our review excluded obligations of operation and maintenance funding for DOD's overseas contingency operations that were not part of DOD's base budget. Unless otherwise noted, we reported all costs in this report in nominal dollars. 
Only the Air Force was able to provide historical data for the entire fiscal year 2001 through fiscal year 2013 time frame, so we provided an analysis of trends in operation and maintenance obligations at the individual organizations included in our review for the fiscal years for which data were available. OSD was unable to provide cost data prior to fiscal year 2008 because, per National Archives and Records Administration regulations, it does not maintain financial records older than 6 years and 3 months. The Joint Staff was unable to provide cost data prior to fiscal year 2003 due to a change in financial systems. The Army was unable to provide cost data for fiscal year 2001 in the time frame we requested for inclusion in this report. The Navy Secretariat was able to provide cost data for fiscal years 2001 through 2013. However, the Office of the Chief of Naval Operations was only able to provide cost data for fiscal years 2009 through 2013 because the Office of the Chief of Naval Operations did not exist as an independent budget-submitting office until fiscal year 2009, and it would be difficult to separate the Office of the Chief of Naval Operations' data from other Navy data prior to fiscal year 2009 in the Navy's historical data system. Headquarters, Marine Corps, was unable to provide cost data prior to fiscal year 2005 due to a change in financial systems. Our analyses are found in appendixes III through VIII. The availability of historical data limited our analyses of both authorized military and civilian positions and operation and maintenance obligations for the reasons identified by the individual organizations included in our review. To assess the reliability of the data we collected, we interviewed DOD officials about the data they provided to us and analyzed relevant personnel and financial-management documentation to ensure that the data on authorized military and civilian positions and operation and maintenance obligations were tied to mission and headquarters support. We also incorporated data-reliability questions into our data-collection instruments and compared the multiple data sets received from the included organizations against each other to ensure that there was consistency in the data that they provided. We determined the data were sufficiently reliable for our purposes of identifying trends in the personnel resources and headquarters support costs of OSD, the Joint Staff, and secretariats and staffs for the military services. To identify DOD's plans for reductions to these headquarters organizations, we obtained and reviewed guidance and documentation on steps to implement DOD's 20 percent reductions to headquarters budgets starting in fiscal year 2015, the first full budget cycle for which DOD was able to include the reductions. This documentation included the department-issued memorandum outlining the reductions and various DOD budget-related documents. We also obtained data, where available, on the number of positions at OSD, the Joint Staff, and the secretariats and staffs for the military services for fiscal year 2013 (the most recent fiscal year for which data were available during our review), as well as the number of positions deemed by these organizations to be performing headquarters functions and included in DOD's planned headquarters reductions for fiscal years 2015 through 2019, the time frame DOD identified in its reduction plans. 
We assessed the reliability of the personnel and cost data given these and other limitations by interviewing DOD officials about the data they provided to us and analyzing relevant personnel and financial-management documentation. We determined that the data were sufficiently reliable for our purposes of identifying trends in the personnel resources and headquarters support costs, and DOD's plans for reductions to OSD, the Joint Staff, and secretariats and staffs for the military services. To evaluate the extent to which DOD determines and reassesses personnel requirements for these headquarters organizations, we obtained and reviewed guidance from OSD, the Joint Staff, and the secretariats and staffs for the military services regarding each of their processes for determining and reassessing their respective personnel requirements. For example, we reviewed the Chairman of the Joint Chiefs of Staff Instruction 1001.01A (Joint Manpower and Personnel Program); Air Force Instruction 38-201 (Manpower and Organization, Management of Manpower Requirements and Authorizations); Army Regulation 570-4 (Manpower and Equipment Control, Manpower Management); Office of the Chief of Naval Operations Instruction 1000.16K (Navy Total Force Manpower Policies and Procedures); and Marine Corps Order 5311.1D (Total Force Structure Process). We also interviewed officials from each of these organizations to determine how their processes are implemented, the results of any studies that were conducted on these processes, and any changes being made to these processes. We then compared the information we obtained on these processes to key elements called for in DOD Directive 1100.4 (Guidance for Manpower Management) and the military services' guidance we had previously obtained; specifically, that personnel requirements should be established at the minimum essential level to accomplish the required workload and should be periodically reviewed. We also compared this information to key elements of a systematic personnel requirements-determination process, which we obtained from documents that address leading practices for workforce planning. Specifically, we reviewed prior GAO work on effective strategic workforce planning, DOD's guidance on manpower management, and workforce planning guidance issued by the Office of Personnel Management. We then synthesized common themes from these documents and summarized them as key elements that should be included in organizations' personnel requirements-determination processes, namely, that an organization should have a requirements process that identifies the organization's mission, functions, and tasks; determines the minimum number and type of personnel needed to fulfill those missions, functions, and tasks by conducting a workforce analysis; and reassesses these requirements on a periodic basis to determine the most efficient choices for workforce deployment. We also reviewed DOD Instruction 5100.73 (Major DOD Headquarters Activities), which guides the identification and reporting of headquarters information. Finally, we identified a standard on information and communications from internal-control standards for the federal government and compared this standard to the headquarters-related information provided to Congress in the fiscal year 2015 Defense Manpower Requirements Report. 
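To illustrate the kind of data cross-check described in the scope and methodology above, in which multiple data sets received from the same organization are compared for consistency, the following minimal sketch (in Python) flags the fiscal years where two submissions disagree. The values and the function name are hypothetical placeholders, not figures from this review.

def flag_inconsistencies(set_a, set_b):
    """Return the fiscal years for which two reported data sets disagree."""
    return [fy for fy in sorted(set_a) if set_b.get(fy) != set_a[fy]]

# Hypothetical authorized-position counts from two separate submissions.
submission_1 = {2011: 3712, 2012: 3690, 2013: 3639}
submission_2 = {2011: 3712, 2012: 3685, 2013: 3639}

print(flag_inconsistencies(submission_1, submission_2))  # prints [2012]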
We obtained and assessed data on the number of management headquarters personnel in the organizations in our review for fiscal year 2013 and on the Army's field operating agencies for fiscal years 2001 through 2013. We assessed the reliability of the personnel data through interviews with Army officials about the data they provided to us and by conducting data-reliability assessments of the Army personnel data and the information systems that produced them. We determined that the data were sufficiently reliable for our purposes. We also met with OSD and the military services to discuss how these organizations identify these headquarters personnel. Finally, we reviewed the legislative history of the statutory personnel limitations for OSD, the Joint Staff, and the services contained in sections 143, 155, 3014, 5014, and 8014 of Title 10 of the U.S. Code, and discussed these limits with knowledgeable officials in OSD, the Joint Staff, and the military services. We interviewed officials or, where appropriate, obtained documentation from the organizations listed below:

Office of the Secretary of Defense: Office of the Director of Administration and Management; Office of Cost Assessment and Program Evaluation; and Washington Headquarters Services, Financial Management Directorate.

Joint Staff: Directorate of Management, Comptroller; Manpower and Personnel Directorate; and Intelligence Directorate.

Department of the Air Force: A1, Joint and Special Activities Manpower Programming Branch.

Department of the Army: Assistant Secretary of the Army for Manpower and Reserve Affairs; G8, Program Analysis and Evaluation; and Business Operations Directorate, Army Office of Business Transformation.

Department of the Navy: Assistant Secretary of the Navy for Manpower and Reserve Affairs; Assistant for Administration; Office of the Chief of Naval Operations, Deputy Chief of Naval Operations for Integration of Capabilities and Resources, Programming Division; Office of the Chief of Naval Operations, Manpower Management; Office of the Chief of Naval Operations, Assessment Division; and U.S. Fleet Forces Command.

Headquarters, U.S. Marine Corps: Marine Corps Combat Development Command, Combat Development and Integration / Total Force Structure Division; Budget and Execution Division, Programs and Resources; and Manpower and Reserve Affairs.

We conducted this performance audit from July 2013 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Resources of the Office of the Secretary of Defense (OSD)

OSD is responsible for assisting the Secretary of Defense in carrying out his or her duties and responsibilities for the management of the Department of Defense (DOD). These include policy development, planning, resource management, and fiscal and program evaluation responsibilities. The staff of OSD comprises military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the OSD organization, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. 
Table 2 shows the organizational structure and composition of OSD for fiscal year 2013, including both authorized military and civilian positions, as well as estimated contractor full-time equivalents. Figure 5 illustrates annual changes in the number of authorized personnel positions since fiscal year 2001. According to DOD officials, both authorized military and civilian positions remained relatively unchanged until fiscal year 2010, when the number of authorized civilians increased mainly due to the conversion of contracted services to civilian positions and the conversion of military to civilian positions. This increase in authorized civilian positions, according to DOD officials, is the result of attempts to rebalance workload and become a cost-efficient workforce. Figure 6 shows the changes in headquarters support costs for OSD from fiscal year 2008 through fiscal year 2013. Headquarters costs have experienced an overall increase during the 5-year period, primarily due to costs for contracted services, but have recently begun to decline, according to OSD officials, because of sequestration and furloughs.

Appendix IV: Resources of the Joint Staff

The Joint Staff is responsible for assisting the Chairman of the Joint Chiefs of Staff, military advisor to the President, in accomplishing his responsibilities for the unified strategic direction of the combatant forces; their operation under unified command; and their integration into a team of land, naval, and air forces. The Joint Staff is tasked to provide advice and support to the Chairman and the Joint Chiefs on matters including personnel, intelligence doctrine and architecture, operations and plans, logistics, strategy, policy, communications, cyberspace, joint training and education, and program evaluation. In addition to civilian personnel and personnel performing contracted services, the Joint Staff comprises military personnel who represent, in approximately equal numbers, the Army, Navy and Marine Corps, and Air Force. This appendix shows how these resources are distributed in the Joint Staff, as well as the changes in these resources from fiscal year 2003 through fiscal year 2013. Table 3 shows the organizational structure and composition of the Joint Staff for fiscal year 2013, including both authorized military and civilian positions. Figure 7 illustrates annual changes in the overall number of authorized personnel positions since fiscal year 2005. Both military and civilian positions remained relatively unchanged until fiscal year 2012, when, according to Joint Staff officials, U.S. Joint Forces Command was disestablished and some of its responsibilities and personnel were moved to the Joint Staff. According to documentation and interviews with Joint Staff officials, of those positions acquired by the Joint Staff in fiscal year 2012 and retained in fiscal year 2013, most of the military positions (415 authorized positions) and civilian positions (690 authorized positions) are stationed at Hampton Roads, Virginia, to manage and support the Combatant Command Exercise Engagement and Training Transformation program reassigned to the Joint Staff when U.S. Joint Forces Command was disestablished. Figure 8 shows the changes in headquarters support costs for the Joint Staff for fiscal year 2003 through fiscal year 2013. The increase in overall headquarters support costs from fiscal years 2011 through 2013 was, according to Joint Staff officials, due to the previously mentioned influx of civilian personnel to the Joint Staff from U.S. 
Joint Forces Command following its disestablishment in fiscal year 2011.

Appendix V: Resources of the Headquarters, Department of the Army

The Office of the Secretary of the Army has sole responsibility within the Office of the Secretary and the Army Staff for the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Army Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Army. Headquarters functions to be performed by the Army Staff include, among others, recruiting, organizing, training, and equipping of the Army. The staffs of the Office of the Secretary of the Army and the Army Staff comprise military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Army, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 4 shows the organizational structure and composition of the Army Secretariat and Staff for fiscal year 2013, including both authorized military and civilian positions, as well as estimated contractor full-time equivalents.

Appendix VI: Resources of the Department of the Navy

The Office of the Secretary of the Navy is solely responsible among the Office of the Secretary of the Navy, the Office of the Chief of Naval Operations, and the Headquarters, Marine Corps, for oversight of the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. The Office of the Chief of Naval Operations is to provide professional assistance to the Secretary and the Chief of Naval Operations in preparing for the employment of the Navy in areas such as recruiting, organizing, supplying, equipping, and training. The staffs of the Office of the Secretary of the Navy and the Office of the Chief of Naval Operations comprise military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Navy, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 5 shows the organizational structure and composition of the Navy Secretariat and Office of the Chief of Naval Operations for fiscal year 2013, including both authorized military and civilian positions. Figure 13 illustrates annual changes in the number of authorized military and civilian positions within the Navy Secretariat since fiscal year 2003. The total number of authorized positions within the secretariat decreased from fiscal year 2003 to fiscal year 2004 and then remained relatively constant through fiscal year 2008, due to reductions in its baseline budget, recalculation of civilian pay and benefits, and internal reorganizations within the Navy, according to officials within the Navy Secretariat. From fiscal years 2009 through 2013, authorized civilian positions within the Navy Secretariat have steadily increased. Navy Secretariat officials attributed this increase primarily to reorganization of functions across the Department of the Navy that moved positions into the secretariat and the conversion of contracted services to civilian positions. Headquarters support costs for the Navy Secretariat have generally increased from fiscal years 2001 through 2013, as seen in the inset of figure 14. 
According to Navy officials, significant drivers of this overall increase include continued increases in civilian personnel costs and additional contracted services costs to support both a 2005 DOD initiative and compliance in fiscal years 2011 and 2012 with congressional direction to improve the auditability of the Navy's financial statements. Figure 15 illustrates annual changes in the number of authorized military and civilian positions within the Office of the Chief of Naval Operations since fiscal year 2005. The Office of the Chief of Naval Operations has experienced some increase in authorized civilian positions over that period, which Navy officials attributed to conversion of contracted services to civilian positions and reorganizations of the Office of the Chief of Naval Operations under new Chiefs of Naval Operations. Our analysis shows that much of the overall increase in authorized civilian positions at the Office of the Chief of Naval Operations was offset by decreases in military positions since fiscal year 2010. Headquarters support costs for the Office of the Chief of Naval Operations have generally decreased from fiscal years 2009 through 2013, as seen in the inset of figure 16. According to Office of the Chief of Naval Operations officials, the decrease in costs in fiscal year 2010 was the result of the removal of some centrally managed costs from the Office of the Chief of Naval Operations budget and efforts to convert contracted services to civilian positions. As seen in figure 16, civilian personnel costs have increased over the period, which Office of the Chief of Naval Operations officials attributed to the conversion of contracted services to civilian positions and organizational restructuring that moved additional civilian positions to the Office of the Chief of Naval Operations headquarters staff, resulting in higher civilian personnel costs.

Appendix VII: Resources of Headquarters, Marine Corps

The Marine Corps also operates under the authority, direction, and control of the Secretary of the Navy. Headquarters, Marine Corps, consists of the Commandant of the Marine Corps and staff who are to provide assistance in preparing for the employment of the Marine Corps in areas such as recruiting, organizing, supplying, equipping, and training. The staff of Headquarters, Marine Corps, comprises military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Marine Corps, as well as the changes in these resources from fiscal year 2005 through fiscal year 2013. Table 6 shows the organizational structure and composition of Headquarters, Marine Corps, for fiscal year 2013, including both authorized military and civilian positions. Headquarters, Marine Corps, experienced an increase in its overall number of authorized military and civilian positions from fiscal years 2005 to 2013, as shown in figure 17, but there have been variations within those years. Headquarters, Marine Corps, officials attributed some of the increases in authorized positions to the conversion of military positions to civilian positions, and additional personnel requirements needed to support the Foreign Counterintelligence Program and National Intelligence Program and to stand up and operate the National Museum of the Marine Corps. 
Headquarters, Marine Corps, officials also explained that some of the decreases in authorized positions were due to a number of organizational realignments that transferred civilian positions from Headquarters, Marine Corps, to operational or field support organizations. From fiscal years 2005 through 2013, the total headquarters support costs for Headquarters, Marine Corps, have slightly increased, as seen in the inset in figure 18, but there has been variation in total costs year to year, and costs are down from their peak in fiscal year 2012. As seen in figure 18, there has been a consistent increase in costs for civilian personnel from fiscal year 2005 through fiscal year 2012, which the Marine Corps attributed to the conversion of military positions to civilian positions, organizational realignments that moved civilian positions to Headquarters, Marine Corps, and recalculation of civilian pay and benefits. From fiscal years 2005 through 2013, other headquarters support costs generally decreased due to transfers and realignment of resources from Headquarters, Marine Corps, to other organizations and operating forces.

Appendix VIII: Resources of the Department of the Air Force

The Office of the Secretary of the Air Force has sole responsibility and oversight for the following functions across the Air Force: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Air Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Air Force. The headquarters functions to be performed by the Air Staff include recruiting, organizing, training, and equipping of the Air Force, among others. The staffs of the Office of the Secretary of the Air Force and the Air Staff comprise military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Air Force, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 7 shows the organizational structure and composition of the Air Force Secretariat and Staff for fiscal year 2013, including both authorized military and civilian positions. Figure 19 illustrates annual changes in the number of authorized positions in the Office of the Secretary of the Air Force since fiscal year 2001. The number of authorized military and civilian positions remained relatively unchanged until fiscal year 2010 when, according to Air Force officials, the conversion of contracted services to civilian positions and the conversion of military to civilian positions contributed to the increasing number of authorized civilian personnel. This increase in authorized civilian positions, according to DOD officials, is the result of attempts to rebalance workload and become a cost-efficient workforce. Air Force officials stated that authorized positions within the secretariat have gradually decreased from peak levels reached in fiscal year 2010 due to direction from the Secretary of Defense to hold the number of civilian positions at or below fiscal year 2010 levels and to cut civilian positions that had yet to be filled after the conversion of contracted services to civilian positions in previous years. Figure 20 illustrates annual changes in the number of authorized positions in the Office of the Chief of Staff of the Air Force since fiscal year 2001. 
The total number of authorized military and civilian positions remained relatively stable until fiscal year 2006, when the number of authorized military personnel reached its peak. Since then, the number of authorized civilian personnel has generally increased, which an Air Force official said was mainly due to the conversion of contracted services to civilian positions and the conversion of military to civilian positions, although the numbers have declined since fiscal year 2011. This increase in authorized civilian positions, according to DOD officials, is the result of attempts to rebalance workload and create a more cost-efficient workforce. Figure 21 shows the changes in Air Force Secretariat and Air Staff headquarters support costs from fiscal year 2001 through fiscal year 2013. According to Air Force officials, the dramatic increase in civilian personnel costs in fiscal year 2010 was driven by the conversion of contracted services to civilian positions, resulting in higher costs for civilian personnel. The subsequent drop in civilian personnel costs was primarily due to restraints placed on the growth in the number of civilian positions by Secretary Gates in fiscal year 2010 and by the Budget Control Act of 2011. According to an Air Force official, the spike in other support costs in fiscal year 2012 was primarily due to the costs of a civil engineering project, billed to the Air Force Secretariat and Staff, for renovating the Air Force headquarters space in the Pentagon. In addition to the contact named above, Richard K. Geiger (Assistant Director), Tracy Barnes, Gabrielle A. Carrington, Neil Feldman, David Keefer, Carol D. Petersen, Bethann E. Ritter Snyder, Michael Silver, Amie Steele, and Cheryl Weissman made key contributions to this report.
Facing budget pressures, DOD is seeking to reduce headquarters activities of OSD, the Joint Staff, and the military services' secretariats and staffs, which primarily perform policy and management functions. GAO was mandated to review personnel resources devoted to these headquarters organizations from fiscal years 2001 through 2013. This report (1) identifies past trends in personnel resources for these organizations and any plans for reductions; and (2) evaluates the extent to which DOD determines and reassesses personnel requirements for the organizations. GAO analyzed data on authorized military and civilian positions and contracted services from fiscal years 2001 through 2013. GAO reviewed DOD's headquarters reduction plans and processes for determining and reassessing personnel requirements. Over the past decade, authorized military and civilian positions have increased within the Department of Defense (DOD) headquarters organizations GAO reviewed—the Office of the Secretary of Defense (OSD), the Joint Staff, and the Army, Navy, Marine Corps, and Air Force secretariats and staffs—but the size of these organizations has recently leveled off or begun to decline, and DOD's plans for future reductions are not finalized. The increases varied by organization, and DOD officials told GAO that the increases were due to increased mission responsibilities, conversion of functions performed by contracted services to civilian positions, and institutional reorganizations. For example, authorized military and civilian positions for the Army Secretariat and Army Staff increased by 60 percent, from 2,272 in fiscal year 2001 to 3,639 in fiscal year 2013, but levels have declined since their peak of 3,712 authorized positions in fiscal year 2011. In addition to civilian and military personnel, DOD also relies on personnel performing contracted services. Because DOD is still in the process of compiling complete data on personnel performing contracted services, trends in these data could not be identified. In 2013, the Secretary of Defense set a target to reduce DOD components' headquarters budgets by 20 percent through fiscal year 2019, including costs for contracted services, while striving for a similar reduction in military and civilian personnel. However, DOD has not finalized plans to achieve these reductions. DOD was required to report to Congress by June 2014 on efforts to streamline management headquarters but, due to staff turnover, required an extension until late summer 2014. As of December 2014, DOD's plan had not been issued. GAO found that the DOD headquarters organizations it reviewed do not determine their personnel requirements as part of a systematic requirements-determination process, nor do they have procedures in place to ensure that they periodically reassess these requirements as outlined in DOD and other guidance. Current personnel levels for these headquarters organizations are traceable to statutory limits enacted in the 1980s and 1990s to force efficiencies and reduce duplication. However, these limits have been waived since fiscal year 2002. Had the limits been in force in fiscal year 2013, the Army and Navy would have exceeded them by 17 percent and 74 percent, respectively. Moreover, the limits have little practical utility because of statutory exceptions for certain categories of personnel and because the limits exclude personnel in supporting organizations that perform headquarters-related functions.
For example, the organizations that support the Army Secretariat and Army Staff are almost three times as large as the Secretariat and Staff, but personnel who perform headquarters-related functions in these organizations are excluded from the limits. All but one of the organizations GAO reviewed have recognized problems in their existing requirements-determination processes. The OSD, the Navy, and the Marine Corps are taking steps to modify their processes, but their efforts are not yet complete. Without a systematic determination of personnel requirements and periodic reassessment of them, DOD will not be well positioned to proactively identify efficiencies and limit personnel growth within these headquarters organizations. Moreover, until DOD determines personnel requirements, Congress will not have critical information needed to reexamine statutory limits enacted decades ago. GAO recommends that DOD (1) conduct a systematic determination of personnel requirements at these headquarters organizations; (2) submit the requirements to Congress with adjustments and recommended modifications to the statutory limits; and (3) periodically reassess personnel requirements within OSD and the military services' secretariats and staffs. Congress should consider using DOD's review of headquarters personnel requirements to reexamine existing statutory limits. DOD partially concurred, stating it will use its existing processes, but will investigate other methods to improve the determination and reporting of requirements. GAO believes the recommendations are still valid, as discussed in the report.
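The percentage figures cited above can be traced with a few lines of arithmetic. Below is a minimal sketch; the position counts come from the report, while the helper function and variable names are ours, added for illustration only.

```python
# Recompute the growth figures for the Army Secretariat and Army Staff
# cited above; the counts are from the report, the helper is illustrative.
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round(100 * (end - start) / start)

fy2001_positions = 2272   # authorized positions, fiscal year 2001
fy2011_peak = 3712        # peak, fiscal year 2011
fy2013_positions = 3639   # fiscal year 2013

print(pct_change(fy2001_positions, fy2013_positions))  # 60 -> the 60-percent increase cited
print(pct_change(fy2011_peak, fy2013_positions))       # -2 -> a modest decline from the peak
```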
Any discussion of readiness measurement must start with SORTS. This automated system, which functions as the central listing for more than 9,000 military units, is the foundation of DOD's unit readiness assessment process and is a primary source of information used for reviews at the joint and strategic levels. The system's database indicates, at a selected point in time, the extent to which these units possess the required resources and training to undertake their wartime missions. Units regularly report this information using a rating system that comprises various indicators on the status of personnel, equipment, supplies, and training. SORTS is intended to enable the Joint Staff, the combatant commands, and the military services to, among other things, prepare lists of readily available units, assist in identifying or confirming major constraints on the employment of units, and confirm shortfalls and distribution problems with unit resources. Until the early 1990s, DOD defined "readiness" narrowly in terms of the ability of units to accomplish the missions for which they were designed, and SORTS was the only nonservice-specific system DOD had to measure readiness. Even today, SORTS remains an important component of readiness assessment in that data from the system is used extensively by the services to formulate a big-picture view of readiness. However, limitations to SORTS have been well documented for many years by various audit and oversight organizations. For example, prior reviews by our office and others have found the following:

- SORTS represents a snapshot in time and does not signal impending changes in readiness.

- SORTS relies on military judgment for certain ratings, including the commanders' overall rating of unit readiness. In some cases, SORTS ratings reflect a higher or lower rating than the reported analytical measures support. However, DOD officials view subjectivity in SORTS reports as a strength because the commanders' judgments provide professional military assessments of unit readiness. The officials also note that much of the information in the SORTS reports is objective and quantitative.

- The broad measurements that comprise SORTS ratings for resource availability may mislead managers because they are imprecise and therefore may mask underlying problems. For example, SORTS allows units to report the same capability rating for personnel strength even though their personnel strength may differ by 10 percent. (A brief sketch below illustrates this imprecision.)

- SORTS data is maintained in multiple databases located at combatant commands, major commands, and service headquarters and is not synchronized across the databases.

- SORTS data may be out-of-date or nonexistent for some units registered in the database because reporting requirements are not enforced.

- Army SORTS procedures that require review of unit reports through the chain of command significantly delay the submission of SORTS data to the Joint Staff.

DOD is taking actions to address some of these limitations. The Chairman of the Joint Chiefs of Staff was directed last year—in the Defense Planning Guidance—to develop a plan for improving DOD's readiness assessment system. Although it has yet to be approved, the Joint Staff plan calls for a phased improvement to the readiness assessment system, starting with upgrades to SORTS. During the first phase of the plan, the Joint Staff is addressing technical limitations of SORTS. One of the objectives, for instance, is to ensure that the data is synchronized DOD-wide across multiple databases.
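To make the imprecision point above concrete, here is a minimal sketch of how banded ratings can mask real differences in personnel strength. The band thresholds are notional, chosen only for illustration; the actual SORTS criteria vary by resource area and are not reproduced here.

```python
# Notional banding of personnel fill into a C-rating; the thresholds
# below are illustrative only, not the official SORTS criteria.
def personnel_rating(fill_pct):
    """Map a personnel fill percentage to a notional C-rating band."""
    if fill_pct >= 90:
        return "C-1"
    if fill_pct >= 80:
        return "C-2"
    if fill_pct >= 70:
        return "C-3"
    return "C-4"

# Two units nearly 10 percentage points apart receive the same rating,
# which is the masking effect described in the list above.
print(personnel_rating(90.0))  # C-1
print(personnel_rating(99.5))  # C-1
```

Under any banding of this kind, units whose personnel strength differs by almost a full band width report identical ratings, which is why broad measurements can mask underlying problems.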
Future phases of the Joint Staff plan would link SORTS with other databases in a common computer environment to make readiness information more readily accessible to decisionmakers. In addition, the Joint Staff plan calls for upgrades to SORTS that will make the system easier to use. Separate from the Joint Staff plan, the services are developing or implementing software to automate the process of entering SORTS data at the unit level. These technical upgrades are aimed at improving the timeliness and accuracy of the SORTS database and, therefore, are positive steps. However, they will not address some of the inherent limitations of the system. For instance, the upgrades will not address the inability of the system to signal impending changes in readiness. In addition, the upgrades will not address the lack of precision in reporting unit resources and training. Another step DOD has taken to improve its readiness assessment capability is to institute a process known as the Joint Monthly Readiness Review. The joint review was initiated toward the end of 1994 and has matured over the last year or so. It represents DOD's attempt to look beyond the traditional unit perspective provided by SORTS—although SORTS data continues to play an important role—and to introduce a joint component to readiness assessment. We believe the joint review process has several notable features. First, it brings together readiness assessments from a broad range of DOD organizations and elevates readiness concerns to senior military officials, including the Vice Chairman of the Joint Chiefs of Staff. Second, the joint review emphasizes current and near-term readiness and incorporates wartime scenarios based on actual war plans and existing resources. Third, it adds a joint perspective by incorporating readiness assessments from the combatant commands. The services and combat support agencies also conduct readiness assessments for the joint review. Fourth, the joint review is conducted on a recurring cycle—four times a year—that has helped to institutionalize the process of readiness assessment within DOD. Finally, the joint review includes procedures for tracking and addressing reported deficiencies. I would like to note, however, that the DOD components participating in the review are accorded flexibility in how they conduct their assessments. The 11 combatant commands, for instance, assess readiness in eight separate functional areas, such as mobility, infrastructure, and intelligence, surveillance, and reconnaissance, and to do this each command has been allowed to independently develop its own measures. In addition, the process depends heavily on the judgment of military commanders to formulate their assessments. Officials involved with the joint review view this subjectivity as a strength, not a weakness, of the process. They said readiness assessment is influenced by many factors, not all of which are readily measured by objective indicators. One consequence, however, is that the joint review cannot be used to make direct comparisons among the commands in the eight functional areas. We should also point out that the services, in conducting their portion of the joint review, depend extensively on SORTS data. As I mentioned earlier, SORTS has certain inherent limitations. DOD is required under 10 U.S.C. 482 to prepare a quarterly readiness report to Congress.
Under this law, DOD must specifically describe (1) each readiness problem and deficiency identified, (2) planned remedial actions, and (3) the key indicators and other relevant information related to each identified problem and deficiency. In mandating the report, Congress hoped to enhance its oversight of military readiness. The first report was submitted to Congress in May 1996. DOD bases its quarterly reports on briefings to the Senior Readiness Oversight Council. The Council, comprising senior civilian and military leaders, meets monthly and is chaired by the Deputy Secretary of Defense. The briefings to the Council are summaries from the Joint Monthly Readiness Review. In addition, the Deputy Secretary of Defense periodically tasks the Joint Staff and the services to brief the Council on various readiness topics. From these briefings, the Joint Staff drafts the quarterly report. It is then reviewed within DOD before it is submitted to Congress. We recently reviewed several quarterly reports to determine whether they (1) accurately reflect readiness information briefed to the Council and (2) provide information needed for congressional oversight. Because minutes of the Council’s meetings are not maintained, we do not know what was actually discussed. Lacking such records, we traced information in the quarterly readiness reports to the briefing documents prepared for the Council. Our analysis showed that the quarterly reports accurately reflected information from these briefings. In fact, the quarterly reports often described the issues using the same wording contained in the briefings to the Council. The briefings, as well as the quarterly reports, presented a highly aggregated view of readiness, focusing on generalized strategic concerns. They were not intended to and did not highlight problems at the individual combatant command or unit level. DOD officials offered this as an explanation for why visits to individual units may yield impressions of readiness that are not consistent with the quarterly reports. Our review also showed that the quarterly reports did not fulfill the legislative reporting requirements under 10 U.S.C. 482 because they lacked the specific detail on deficiencies and planned remedial actions needed for congressional oversight. Lacking such detail, the quarterly reports provided Congress with only a vague picture of DOD’s readiness problems. For example, one report stated that Army personnel readiness was a problem, but it did not provide data on the numbers of personnel or units involved. Further, the report did not discuss how the deficiency affected the overall readiness of the units involved. Also, the quarterly reports we reviewed did not specifically describe planned remedial actions. Rather, they discussed remedial actions only in general terms, with few specific details, and provided little insight into how DOD planned to correct the problems. Congress has taken steps recently to expand the quarterly reporting requirements in 10 U.S.C. 482. Beginning in October 1998, DOD will be required to incorporate 19 additional readiness indicators in the quarterly reports. To understand the rationale for these additional indicators, it may be helpful to review their history. In 1994, we told this Subcommittee that SORTS did not provide all the information that military officials believed was needed for a comprehensive assessment of readiness. 
We reported on 26 indicators that were not in SORTS but that military commanders said were important for a comprehensive assessment of readiness. We recommended that the Secretary of Defense direct his office to determine which indicators were most relevant to building a comprehensive readiness system, develop criteria to evaluate the selected indicators, prescribe how often the indicators should be reported to supplement SORTS data, and ensure that comparable data be maintained by the services to facilitate trend analysis. DOD contracted with the Logistics Management Institute (LMI) to study the indicators discussed in our report, and LMI found that 19 of them could be of high or medium value for monitoring critical aspects of readiness. The LMI study, issued in 1994, recommended that DOD (1) identify and assess other potential indicators of readiness, (2) determine the availability of data to monitor the indicators selected, and (3) estimate benchmarks to assess the indicators. Although our study and the LMI study concluded that a broader range of readiness indicators was needed, both left open how DOD could best integrate additional measures into its readiness reporting. The 19 indicators that Congress is requiring DOD to include in its quarterly reports are very similar to those assessed in the LMI study. (See app. 1 for a list of the 19 indicators DOD is to include in the quarterly reports.) Last month, DOD provided Congress with an implementation plan for meeting the expanded reporting requirements for the quarterly report. We were asked to comment on this plan today. Of course, a thorough assessment of the additional readiness indicators will have to wait until DOD begins to incorporate them into the quarterly reports in October 1998. However, on the basis of our review of the implementation plan, we have several observations to make. Overall, the implementation plan could be enhanced if it identified the specific information to be provided and the analysis to be included. The plan appears to take a step backward from previous efforts to identify useful readiness indicators. In particular, the LMI study and subsequent efforts by the Office of the Secretary of Defense were more ambitious attempts to identify potentially useful readiness indicators for understanding, forecasting, and preventing readiness shortfalls. The current implementation plan, in contrast, was developed under the explicit assumption that existing data sources would be used and that no new reporting requirements would be created for personnel in the field. Further, the plan states that DOD will not provide data for 7 of the 19 indicators because either the data is already provided to Congress through other documents or there is no reasonable or accepted measurement. DOD officials, however, acknowledged that their plans will continue to evolve and said they will continue to work with this Subcommittee to ensure the quarterly report supports congressional oversight needs. Lastly, the plan does not present a clear picture of how the additional indicators will be incorporated into the quarterly report. For example, the plan is mostly silent on the nature and extent of analysis to be included and on the format for displaying the additional indicators. We also have concerns about how DOD plans to report specific indicators. For example:

- According to the plan, SORTS will be the source of data for 4 of the 19 indicators—personnel status, equipment availability, unit training and proficiency, and prepositioned equipment. By relying on SORTS, DOD may miss opportunities to provide a more comprehensive picture of readiness. For example, the LMI study points out that SORTS captures data only on major weapon systems and other critical equipment. That study found value in monitoring the availability of equipment not reported through SORTS. In all, the LMI study identified more than 100 potential data sources outside SORTS for 3 of these 4 indicators—personnel status, equipment availability, and unit training and proficiency. (The LMI study did not include prepositioned equipment as a separate indicator.)
- DOD states in its implementation plan that 2 of the 19 indicators—operations tempo (OPTEMPO) and training funding—are not relevant indicators of readiness. DOD states further that it will not include the data in its quarterly readiness reports because this data is provided to Congress in budget documents. However, the LMI study rated these two indicators as having a high value for monitoring readiness. The study stated, for instance, that "programmed OPTEMPO is a primary means of influencing multiple aspects of mid-term readiness" and that "a system for tracking the programming, budgeting, and execution of OPTEMPO would be a valuable management tool that may help to relate resources to readiness."

- For the indicator showing equipment that is non-mission capable, the plan states that the percentage of equipment reported as non-mission capable for maintenance and non-mission capable for supply will provide insights into how parts availability, maintenance shortfalls, or funding shortfalls may be affecting equipment readiness. According to the plan, this data will be evaluated by examining current non-mission capable levels against the unit standards. While this type of analysis could indicate a potential readiness problem if non-mission capable rates are increasing, it will not show why these rates are increasing. Thus, insights into equipment readiness will be limited. (A brief sketch of this comparison appears after the list of suggested changes below.)

Mr. Chairman, there are two areas where we think DOD has an opportunity to take further actions to improve its readiness reporting. The first area concerns the level of detail included in the quarterly readiness reports to Congress. In a draft report we will issue later this month, we have recommended that the Secretary of Defense take steps to better fulfill the legislative reporting requirements under 10 U.S.C. 482 by providing (1) supporting data on key readiness deficiencies and (2) specific information on planned remedial actions in its quarterly readiness reports. As we discussed earlier, the quarterly reports we reviewed gave Congress only a vague picture of readiness. Adding more specific detail should enhance the effectiveness of the reports as a congressional oversight tool. DOD has concurred with our recommendation. The second area where DOD can improve its readiness reporting concerns DOD's plan to include additional readiness indicators in the quarterly report. The plan would benefit from the following changes:

- Include all 19 required indicators in the report.

- Make the report a stand-alone document by including data for all the indicators rather than referring to previously reported data.

- Further investigate sources of data outside SORTS, such as those reviewed in the LMI report, that could provide insight into the 19 readiness indicators.

- Develop a sample format showing how the 19 indicators will be displayed in the quarterly report.

- Provide further information on the nature and extent of analysis to be included with the indicators.
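To illustrate the type of evaluation the plan describes for the non-mission capable indicator, here is a minimal sketch. The unit names, rates, and standards are hypothetical, and, as noted above, such a comparison flags where rates exceed standards but cannot explain why.

```python
# Compare current non-mission capable (NMC) rates against unit standards.
# All figures are hypothetical, used for illustration only.
units = {
    # unit: (current NMC percent, unit standard NMC percent)
    "Unit A": (12.0, 10.0),
    "Unit B": (8.5, 10.0),
}

for unit, (current, standard) in units.items():
    status = "exceeds standard" if current > standard else "within standard"
    print(f"{unit}: NMC {current:.1f}% vs. standard {standard:.1f}% ({status})")
```

A rising rate flagged this way would prompt a readiness question, but identifying the cause, whether parts availability, maintenance shortfalls, or funding, requires data beyond the rate itself.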
DOD recognizes in its plan that the type and quality of information included in the quarterly reports may not meet congressional expectations and will likely evolve over time. In our view, it would make sense for DOD to correct the known shortcomings in the current implementation plan and present an updated plan to Congress prior to October 1998. Mr. Chairman, that concludes my prepared statement. We would be glad to respond to any questions you or other Members of the Subcommittee may have. The following are the additional indicators the Department of Defense is required, under 10 U.S.C. 482, to include in its quarterly reports to Congress beginning in October 1998:

1. Personnel status, including the extent to which members of the armed forces are serving in positions outside of their military occupational specialty, serving in grades other than the grades for which they are qualified, or both.
2. Historical data and projected trends in personnel strength and status.
3. Recruit quality.
4. Borrowed manpower.
5. Personnel stability.
6. Personnel morale.
7. Recruiting status.
8. Training unit readiness and proficiency.
9. Operations tempo.
10. Training funding.
11. Training commitments and deployments.
12. Deployed equipment.
13. Equipment availability.
14. Equipment that is not mission capable.
15. Age of equipment.
16. Condition of nonpacing items.
17. Maintenance backlog.
18. Availability of ordnance and spares.
19. Status of prepositioned equipment.
GAO discussed the Department of Defense's (DOD) process for assessing and reporting on military readiness, focusing on: (1) what corrective action DOD has taken to improve its readiness assessment system; (2) whether military readiness reports provided quarterly to Congress effectively support congressional oversight; and (3) whether further improvements are needed to DOD's process. GAO noted that: (1) over the last few years, DOD has taken action to improve readiness assessment; (2) DOD has made technical enhancements to the Status of Resources and Training System (SORTS)--the automated system it uses to assess readiness at the unit level; (3) DOD also has established two forums--the Joint Monthly Readiness Review and the Senior Readiness Oversight Council--for evaluating readiness from a joint and strategic perspective; (4) however, SORTS remains the basic building block for readiness assessment, and inherent limitations to this system, such as its inability to signal impending changes in readiness and its imprecise ratings for unit resources and training, may be reflected in reviews at the joint and strategic levels; (5) DOD's quarterly reports to Congress, which are based on information provided to the Senior Readiness Oversight Council, provide only a vague description of readiness deficiencies and planned remedial actions; consequently, in their present form they are not as effective as they could be as a congressional oversight tool; (6) DOD is required to expand on these reports beginning in October 1998 by adding indicators mandated by Congress; (7) GAO has concerns about DOD's current plans for implementing this expanded reporting requirement; (8) for example, current plans do not present a clear picture of how the additional readiness indicators will be incorporated into the quarterly report; (9) GAO's work has identified two areas in which DOD can improve its readiness reporting to Congress; (10) DOD should provide more specific descriptions and supporting information for the key readiness deficiencies and planned remedial actions identified in its quarterly report; and (11) DOD can make improvements to its current plans for adding readiness indicators to the quarterly report.
We at GAO use the term "human capital" because—in contrast to traditional terms such as personnel and human resource management—it focuses on two principles that are critical in a modern, results-oriented management environment. First, people are assets whose value can be enhanced through investment. As the value of people increases, so does the performance capacity of the organization and therefore its value to clients and other stakeholders. As with any investment, the goal is to maximize value while managing risk. Second, an organization's human capital approaches must be aligned to support the mission, vision for the future, core values, goals and objectives, and strategies by which the organization has defined its direction and its expectations for itself and its people. An organization's human capital policies and practices should be designed, implemented, and assessed by the standard of how well they help the organization pursue these intents and achieve related results. It is clear that, in many government entities and functional areas such as information technology and acquisitions, the transition to modern, results-oriented management—and along with it, to strategic human capital management—will require a cultural transformation. Hierarchical management approaches will need to yield to partnerial approaches. Process-oriented ways of doing business will need to yield to results-oriented ones. And siloed organizations will need to become integrated organizations if they expect to make the most of the knowledge and skills of their people. Government entities that expect to ensure accountability for performance and make the best use of their human capital will need to build a solid foundation in strategic planning and organizational alignment, leadership and succession planning, recruiting and training the best possible talent, and creating a strong performance culture—including appropriate performance measures and rewards and a focus on continuous learning and knowledge management. These elements, which are reflected in our human capital self-assessment checklist, are common to high-performing organizations. (See attachment I.) We have used the checklist's assessment framework to guide our recent inquiries into human capital issues across the federal government and at specific agencies, some of which are using the framework in their human capital planning efforts. We have also used this framework to assess and guide our own internal GAO efforts. High-performing organizations in the private and public sectors have long understood the relationship between effective "people management" and organizational success. However, the federal government, which has often acted as if federal employees were costs to be cut rather than assets to be valued, has only recently received its wake-up call. As our January 2001 Performance and Accountability Series reports made clear, serious federal human capital shortfalls are now eroding the ability of many federal agencies—and threatening the ability of others—to economically, efficiently, and effectively perform their missions. Agencies' strategic human capital management challenges involve such key areas as strategic human capital planning and organizational alignment; leadership continuity and succession planning; acquiring and developing staffs whose size, skills, and deployment meet agency needs; and creating results-oriented organizational cultures.
Attachment II provides examples of the federal government's pervasive human capital challenges, from military recruitment shortfalls at the Department of Defense to staff and skills losses at the National Aeronautics and Space Administration to inadequate workforce planning at the Environmental Protection Agency. The federal government needs a high-quality workforce if it is to meet its responsibilities and deliver on its promises. After a decade of government downsizing and curtailed investments in people, it is becoming increasingly clear that today's federal human capital strategies are not appropriately constituted to meet the current and emerging needs of the nation's government and its citizens. The federal government's approach to people management includes a range of outmoded attitudes, policies, and practices that warrant serious and sustained attention. To view federal employees as costs to be cut rather than as assets to be valued would be to take a narrow and shortsighted view—one that is obsolete and must be changed. Ever since we added strategic human capital management to our high-risk list, we have been asked what would need to happen for it to be removed. Clearly, we will need to see measurable and sustainable improvements in the economy, efficiency, and effectiveness with which the government as a whole and the individual agencies manage their workforces to achieve their missions and goals. I believe that congressional hearings such as today's demonstrate that the momentum for these improvements is building, but the process will undoubtedly take time. Clearly, there is very little time to waste. Changes in the demographics of the federal workforce, in the education and skills required of its workers, and in employment structures and arrangements are all continuing to unfold. The federal workforce is aging; the baby boomers, with their valuable skills and experience, are drawing nearer to retirement; new employees joining the federal workforce today have different employment options and different career expectations from the generation that preceded them. In response to an increasingly competitive job market, federal agencies will need the tools and flexibilities to attract, retain, and motivate top-flight talent. More and more, the work that federal agencies do requires a knowledge-based workforce that is sophisticated in new technologies, flexible, and open to continuous learning. This workforce must be adept both at delivering services directly and at effectively managing the cost and quality of services delivered by third parties on the government's behalf. Agencies' employment structures and working arrangements will also be changing, and the workplace will need to accommodate a greater mix of full-time, part-time, and temporary workers; more contracting-out; less job security; and the possibilities of additional government downsizing and realignments. Human capital elements such as hiring, staffing, compensation, promotions, training and development, and performance management all need to be aligned with organizational missions and goals and must be approached as interrelated parts of a coherent human capital management strategy. Other elements must also be considered. In the information area in particular, other key elements will include sourcing, contract oversight, knowledge management, and systems development.
Overall, and in critical occupational areas, agencies can and must take the initiative to be more competitive in attracting new employees with needed skills; design and implement modern, effective, and credible performance evaluation systems; create the kinds of performance incentives and training programs that motivate and empower employees; and build labor-management relationships that are based on common interests and the public trust. To shape human capital strategies that support their specific needs and circumstances, agencies must give strategic human capital management the enhanced and sustained attention it deserves, modernize their existing human capital policies and practices, and identify and make use of the tools and flexibilities available to them under current law. To address the federal government's human capital challenges as a whole, we believe a three-stage approach is appropriate. First, agencies must take all administrative steps available to them under current laws and regulations to manage their people for results. Much of what agencies need to accomplish by way of focusing on human capital management is already available to them. They will, however, need sustained commitment from top management and support from both the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) to make the most of their existing authorities. Second, the Administration and the Congress should pursue selected legislative opportunities to put new tools and flexibilities in place that will help agencies attract, retain, and motivate employees—both overall and, especially, in connection with critical occupations such as those in IT. Third, all interested parties should work together to determine the nature and extent of more comprehensive human capital (or civil service) reforms that should be enacted over time. These reforms should include greater emphasis on skills, knowledge, and performance in connection with federal employment and compensation decisions, rather than on the passage of time and the rate of inflation, as is often the case today. Mr. Chairman, as is clear from the array of witnesses you have gathered for today's hearing, addressing the federal government's human capital challenges is a responsibility shared by many parties, including the President, department and agency leaders, OMB, OPM, the Congress, the private sector, foundations and academia, and even the press. (See attachment III.) As I have noted elsewhere, strategic human capital management has yet to find the broad conceptual acceptance or political consensus needed for comprehensive legislative reform to occur. In this sense, human capital remains the missing link in the framework of federal management reforms enacted by the Congress over the past decade—reforms that addressed such essential elements of high-performing organizations as financial management, information technology management, and results-oriented goal-setting and performance measurement. However, I believe that the day is approaching when we will see comprehensive federal human capital legislative reform. The essential ingredients for progress in this area are leadership, vision, commitment, persistence, communications, and accountability. Notably, OPM and OMB have taken steps in the past year to help raise awareness of the federal government's human capital challenges and to encourage and enable agencies to make progress in this area.
For example, OPM has begun stressing to agencies the importance of integrating strategic human capital management with agency planning and has also been focusing more attention on developing tools to help agencies, such as new Senior Executive Service performance standards and a workforce planning model with associated Web-based research tools. Some of OPM's efforts have been directed specifically at addressing human capital challenges in the information technology area. For example, in January 2001, OPM created a new special-rate authority to boost the pay of approximately 33,000 current federal information technology workers covered by the General Schedule (GS) at grades GS-5, 7, 9, 11, and 12. Both current and new federal employees are covered by the new pay rates. Further, OPM has issued a new "job family" classification standard for IT-related positions that revises and updates the previous standard and incorporates many formerly separate IT-related occupations into one. The new special pay rates and classification standard are intended to give agencies more flexibility in their IT-related recruiting and retention efforts. Strategic management of human capital is also one of the key elements of the President's Management and Performance Plan, along with budget and performance integration, competitive sourcing, improved financial performance, and expanded e-government. OMB's current guidance to agencies on preparing their strategic and annual performance plans states that the plans should set goals in such areas as recruitment, retention, and training, among others. Further, early this year, OMB instructed agencies to submit a workforce analysis by June 29, 2001. Each agency's analysis was to include summary information on the demographics of the agency's permanent, seasonal, and temporary workforce; projected attrition and retirements; an evaluation of workforce skills; expected changes in the agency's work; recruitment, training, and retention strategies being implemented; and barriers to maintaining a high-quality and diverse workforce. The information that agencies were to develop may prove useful in identifying human capital areas needing greater attention and, moreover, may serve as an important first step toward the development of agency-specific 5-year restructuring plans in the context of the agencies' fiscal year 2003 budget requests and annual performance plans. To date, however, agencies have generally not been effective in linking their human capital goals to meaningful performance measures or programmatic results. For example, agencies' workforce planning efforts generally were not targeted toward specific agency programmatic outcomes. As agencies wrestle with human capital management, they face a significant challenge in the information management and technology area. The rapid pace of technological change and innovation in the current information age poses wide-ranging opportunities for improved information management and enhanced performance in achieving agency missions and goals. Investments in information technologies alone are expected to account for more than 40 percent of all capital investment in the United States by 2004. The federal government's IT investment is conservatively estimated at $44 billion in fiscal year 2002—an increase in federal IT spending of 8.6 percent from fiscal year 2000. This investment is substantial and should provide opportunities to increase productivity, decrease costs, and demonstrate real results.
Already, we have over 1,300 electronic government initiatives under way throughout the federal government, covering a wide range of activities involving interaction with citizens, business, other governments, and employees. These initiatives pose significant challenges, including developing adequate capabilities for storing, retrieving, and, when appropriate, disposing of electronic records; providing a robust technical infrastructure guided by sound enterprise architecture; and ensuring uniform service to the public using multiple methods of access to government services and processes. Additionally, the rush to electronic government can lessen the emphasis on the critical human element. Agencies must overcome two basic challenges related to IT human capital—a shortage of skilled workers and the need to provide a broad range of related staff training and development. These challenges are essential to address so that staff can effectively operate and maintain new e-government systems, adequately oversee related contractor support, and deliver responsive service to the public. Indeed, in our own study of public and private sector efforts to build effective Chief Information Officer (CIO) organizations, we found that leading organizations develop IT human capital strategies to assess their skill bases and recruit and retain staff who can effectively implement technology to meet business needs. Figure 1 provides an overview of a common strategy that organizations in our study used to secure human capital for information management. Agencies must also meet requirements such as the creation of publicly accessible on-line forms by legislative or executive branch deadlines. Irrespective of the final decisions regarding what IT functions are performed by federal employees or contractors, agencies must have an adequate number of skilled IT professionals to oversee the cost, quality, and performance of IT contractors. It is also important to note that the IT human capital challenge is not unique to our government or nation. The Organization for Economic Co-operation and Development (OECD)—an international organization that studies how governments organize and manage the public sector and identifies emerging challenges that governments are likely to face—recently issued a report discussing the recurring problem of the lack of IT skills in the public sector. The report found that the lack of IT skills makes it impossible for some countries to develop technology in-house and establishes an imbalance in relations between purchasers and providers. Moreover, as in the United States, against the background of a very tight IT labor market and an ever-increasing demand for highly qualified staff, the report noted that the competitiveness of the public employer has to be visibly strengthened. Interestingly, solutions seem to vary according to the different traditions in OECD member countries and can include higher wages, differentiated pay systems, better knowledge management, and better human resources management. For example, many countries have undertaken knowledge management initiatives, including training of staff and collecting IT-related information in databases. To illustrate, the United Kingdom has set up a database on all high-profile public sector IT-enabled projects, including project descriptions as well as a list of the people running these initiatives. The database is expected to allow existing resources to be incorporated in future projects.
The demand for computer systems analysts, engineers, and scientists is projected to almost double between 1998 and 2008, and the demand for computer programmers is projected to increase by 30 percent during the same time period. While recent data indicate a slowing demand, the ability of the United States to meet this demand is still considered a problem. In April, the Information Technology Association of America (ITAA) released a study on the size of the private-sector IT workforce, the demand for qualified workers, and the gap between the supply and demand. Among the study's top findings were the following:

- Information technology employment remains at the forefront of the United States economy, directly accounting for approximately 7 percent of the nation's total workforce. Over 10.4 million people in the United States are IT workers, an increase of 4 percent over the 10 million reported last year.

- The demand for IT workers—while slowing—remains substantial, as employers attempt to fill over 900,000 new IT jobs in 2001. For example, the demand for skilled IT workers by large IT firms has doubled over the year 2000 figure. However, ITAA has noted that overall demand for IT workers is down 44 percent from last year's forecast, attributable, in part, to the slowdown in the high-tech sector and the economy in general. Still, the drop does not reflect a fall-off in IT employment, which will increase year to year.

- The talent gap for IT workers remains large. Hiring managers reported an anticipated shortfall of 425,000 IT workers because of a lack of applicants with the requisite technical and non-technical skills.

- Many firms reported placing less emphasis on winning new business and concentrating instead on rationalizing technology investments, tightening operations, and making infrastructure improvements.

As is apparent, the need for qualified IT professionals has placed the public sector in direct competition with the private sector for scarce resources. For the second consecutive year, federal CIOs have identified the need for skilled IT workers as their most critical issue. This is related to the stark reality that a substantial portion of the federal workforce will retire between fiscal years 1999 and 2006. We recently estimated that by 2006 about 31 percent of the employees who were working at 24 major departments and agencies in 1998 will be eligible to retire, and that through the end of 2006 about half of those eligible will actually retire. In the area of IT, all 24 major departments and agencies reported that they consider the occupation in the computer specialist series as mission-critical. We estimated that 30 percent of the employees in this series would be eligible to retire by the end of fiscal year 2006 and that 14 percent would retire by then. (See figure 2.) The federal CIO Council has been working to provide effective IT education and training opportunities for the existing federal workforce. Among the Council's initiatives are support of the CIO University (a collaborative effort between the federal government and private institutions to develop IT executives) and support of the Strategic and Tactical Advocates for Results (STAR) program. STAR is a graduate-level program designed to create an optimal learning environment for professionals. The Council also committed to reviewing and revising the CIO core competencies on a biennial basis. These competencies serve as a tool for determining IT skills, knowledge, and education requirements. To help better understand the magnitude of federal IT human capital issues and possible alternatives for new solutions, the CIO Council and the Administrative Office of the U.S.
Courts asked the National Academy of Public Administration (NAPA) to study IT compensation strategies and to make recommendations on how the government can best compete for IT talent. NAPA has completed and reported on the first phase of this study. NAPA expects to complete its final report by mid-September. It will contain an evaluation of alternative compensation models and address recommended solutions. Table 1 summarizes NAPA's overall comparison of compensation and work factors among various sectors, which demonstrates some of the similarities and differences among the sectors. NAPA's high, medium, and low designations shown below are based on an overall evaluation of data and information obtained for organizations in each sector in comparison with the other sectors. Separately, our reviews of agencies' IT human capital practices found that agencies often lacked mechanisms to evaluate progress in improving staff IT capabilities and therefore lacked the evaluation results needed to continuously improve their human capital strategies. The ramifications of the deficiencies in the agencies' IT human capital management efforts are serious. Without complete assessments of IT skill needs, agencies will lack assurance that they have effectively identified the number of staff, and the specific knowledge and skills, needed to sustain their current and future operations and that they have developed strategies to fill these needs. Also, lacking an inventory of IT knowledge and skills, agencies will not have assurance that they are optimizing the use of the current IT workforce, nor will they have data on the extent of IT skill gaps. This information is necessary for developing effective workforce strategies and plans. Further, without analyzing and documenting the effectiveness of workforce strategies and plans, senior decisionmakers lack assurance that they are effectively addressing IT knowledge and skill gaps. At GAO, we have faced human capital challenges similar to those facing the federal government in general and the IT area specifically. However, we have made human capital management a top priority. We are undertaking a wide array of initiatives in this area and are investing considerable time, energy, and financial resources to make them work. The aim of these efforts is to enhance our performance and assure our accountability by attracting, retaining, and motivating a top-quality workforce, including staff in critical occupations such as IT. We have identified and made use of a variety of tools and flexibilities, some of which were made available to us through the GAO Personnel Act of 1980 and some through legislation passed by the Congress in 2000, but most of which are available across the broad spectrum of federal agencies. Like other agencies, we face a need not only for information technology professionals but also for other skilled professionals such as accountants, statisticians, economists, and health care analysts. Further, we face a range of succession planning challenges. Specifically, by fiscal year 2004, 55 percent of our senior executives, 48 percent of our management-level analysts, and 34 percent of our analysts and related staff will be eligible for retirement. Moreover, at a time when a significant percentage of our workforce is nearing retirement age, marketplace, demographic, economic, and technological changes indicate that competition for skilled employees will be greater in the future, making the challenge of attracting and retaining talent even more complex.
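The retirement estimates cited above follow a simple projection logic: the share of the workforce eligible to retire multiplied by an assumed take-up rate. Here is a minimal sketch; the eligibility shares come from the estimates cited earlier, while the take-up rates are assumptions used only for illustration.

```python
# Projected retirements = share eligible x assumed take-up rate.
# Eligibility shares are from the estimates cited above; take-up
# rates are illustrative assumptions.
def projected_retirements(eligible_share, take_up_rate):
    return eligible_share * take_up_rate

# Governmentwide: ~31% eligible by 2006, about half expected to retire.
print(projected_retirements(0.31, 0.50))  # 0.155 -> ~15.5% of the 1998 workforce

# Computer specialists: ~30% eligible; the ~14% retirement estimate
# implies a take-up rate of just under one-half.
print(0.14 / 0.30)  # ~0.467
```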
To address these challenges, we have taken numerous steps, all designed to support our strategic plan, which describes our role and mission in the federal government; our core values of accountability, integrity, and reliability that guide our work; the trends, conditions, and external factors underlying the plan; and our goals, objectives, and strategies for serving the Congress. From a human capital standpoint, our strategic plan and core values are our touchstones for designing, implementing, and evaluating our approaches to managing our people. These two vital elements will also be the foundation for our revised institutional and individual performance measurement and reward systems. In addition to laying the groundwork through strategic planning, in the fall of 2000 we realigned our mission-related functions at headquarters and in the field to better support the Congress and prepare ourselves, with current and expected resource levels, to meet the future challenges outlined in our strategic plan. As with strategic planning, organizational alignment is crucial if an agency is to maximize its performance and assure its accountability. The choices that go into aligning an organization to support its strategic and programmatic goals have enormous implications for further decisions about human capital management, such as what kinds of leaders the agency should have and how it will best ensure leadership continuity, how skills needs will be identified and filled—particularly in critical occupations such as IT—and what strategies the agency will use to steer the organizational culture to maximize its results. We have taken many administrative steps to enhance the value of our human capital. (See figure 3.) With respect to learning, GAO plans to acquire a system that will maintain on-line individual development plans supported by competency-based learning paths and to support the development and delivery of Web-based learning with on-line testing and on-line course evaluations. We are also using the authority that the Congress provided in our 2000 legislation to create Senior Level positions to meet certain scientific, technical, and professional needs and to extend to those positions the rights and benefits of Senior Executive Service (SES) employees. One of the areas targeted was IT. We recently named four new Senior Level technical IT positions and provided a few other specialists—such as our Chief Statistician and Chief Accountant—with new titles and SES-equivalent benefits. The authority to create Senior Level positions in certain critical areas reflects a specific need we identified and to which the Congress responded. As we assessed GAO's human capital challenges at the start of the new century—including those related specifically to the IT area—we recognized that our preexisting personnel authorities would not let us address these challenges effectively. Therefore, using comprehensive workforce data that we had gathered and analyzed to make a coherent business case, we worked with the Congress last year to obtain several narrowly tailored flexibilities to help us reshape our workforce and establish the Senior Level technical positions. Along with the Senior Level positions, the legislation gave us additional tools to realign GAO's workforce in light of overall budgetary constraints and mission needs; to correct skills imbalances; and to reduce high-grade, managerial, or supervisory positions without reducing the overall number of GAO employees.
To address any or all of these three situations, we now have authority to offer voluntary early retirement (VER) to a maximum of 10 percent of our employees each fiscal year until December 31, 2003. We also have the authority to offer voluntary separation incentive (VSI) payments to a maximum of 5 percent of our employees during each fiscal year until December 31, 2003. Further, in the case of a reduction-in-force (RIF), we have the authority to place a much greater emphasis in our decisionmaking on our employees' knowledge, skills, and performance, while retaining veterans' preference and length of service as factors to consider in connection with applicable RIFs. We have announced a voluntary early retirement opportunity that will run from October 1, 2001, until January 3, 2002. We have largely limited our voluntary early retirement offers to organizational areas in which we do not expect to grow, while at the same time stepping up our efforts to recruit and retain employees in critical occupations such as those related to information technology. The development of agency regulations to cover VSIs and RIFs is still in progress. We have no plans to offer VSIs, nor do we intend to pursue any involuntary layoffs during this or the next fiscal year. We believe that three of the authorities provided in our 2000 legislation may have broader applicability for other agencies and are worth congressional consideration at this time. Authority to offer voluntary early retirement and voluntary separation incentives could give agencies additional flexibilities with which to realign their workforces; correct skills imbalances; and reduce high-grade, managerial, or supervisory positions without reducing their overall number of employees. Further, the authority to establish Senior Level positions could help agencies become more competitive in the job market, particularly in critical scientific, technical, or professional areas. In addition, the Administration and the Congress should consider other legislative actions that would help federal employers address their human capital challenges. As demographics change and the marketplace continues to evolve, we will continue to think strategically and proactively to identify areas in which new innovations would make good business sense. In this regard, we believe it is worth exploring selective legislative proposals to enhance the federal government's ability to attract, retain, and motivate skilled employees, particularly in connection with critical occupations, on a governmentwide basis. In addition to the three items I just mentioned, the following represent areas in which opportunities exist to better equip federal employers to meet their human capital needs: Critical occupations. Although agencies generally have more hiring and pay flexibilities today than in the past, further innovations might be explored to help federal agencies recruit, retain, and reward employees in such critical fields as information technology, where there is severe competition with other sectors for talent. Recruiting funds. In order to help attract and retain employees, consideration should be given to authorizing agencies to use appropriated funds for selective recruiting, recognition, and team-building activities. Professional development. To encourage federal employees in their professional development efforts, consideration should be given to authorizing agencies to use appropriated funds to pay for selected professional certifications, licensing, and professional association costs. Pay compression relief.
Executive compensation is a serious challenge for federal agencies, which to an increasing extent must compete with other governmental organizations—and with not-for-profit and private sector organizations—to attract and retain executive talent. In this regard, the existing cap on SES pay has increased pay compression between the maximum and lower SES pay levels, resulting in an increasing number of federal executives at different levels of responsibility receiving identical salaries. Further, pay compression can create situations in which the difference between executive and nonexecutive pay is so small that the financial incentive for managers to apply for positions of greater responsibility may disappear. The Congress needs to address this increasing pay compression problem. It could do so, perhaps, by delinking federal executive compensation from congressional pay or by raising the cap on executive performance bonuses.

Cafeteria benefits. Federal employees could be provided with flexible benefits available to many private sector workers under Section 125 of the Internal Revenue Code. This would give federal employees the ability to pay for such things as childcare or eldercare with pre-tax rather than after-tax dollars.

Frequent flyer miles. Employees who travel on government business should be allowed to keep their "frequent flyer" miles—a small benefit but one that private sector employers commonly provide their people as part of a mosaic of competitive employee benefits. Let's face it, flying is not fun anymore. Allowing federal workers to keep these miles, as employees elsewhere can, is a small price to pay. In addition, federal agencies could still use gainsharing programs to reward employees and save the government travel costs. As you know, Mr. Chairman, there has already been some meaningful progress on this issue: Last week, the House Government Reform Committee approved a bill that would allow civil service employees to "retain for personal use promotional items received as a result of travel taken in the course of employment."

Phased retirement. It may be prudent to address some of the succession planning issues associated with the rise in retirement eligibilities by pursuing phased retirement approaches, whereby federal employees with needed skills could change from full-time to part-time employment and receive a portion of their federal pension while still earning pension credits.

Fellowships. The Congress should explore greater flexibilities to allow federal agencies to enhance their skills mix by leveraging the expertise of private and not-for-profit sector employees through innovative fellowship programs, particularly in critical occupations. Through such fellowships, private and not-for-profit professionals could gain federal experience without fully disassociating themselves from their positions, while federal agencies could gain from the knowledge and expertise that these professionals would bring during their participation in the program. Obviously, appropriate steps would have to be taken to address any potential conflicts. This concept could also be used to allow federal workers to participate in fellowship programs with private and not-for-profit sector employers.

The federal government spends about $200 billion a year contracting for goods and services. We are concerned with having the right people with the right skills to successfully manage federal contracts. We all agree that dealing with this issue will not be easy.
The government is facing ever-growing public demands for better and more economical delivery of products and services. At the same time, the ongoing technological revolution requires a workforce with new knowledge, skills, and abilities. And at the moment, agencies must address these challenges in an economy that makes it difficult to compete for people with the competencies needed to achieve and maintain high performance. This situation is aptly illustrated by the problems found in the growing area of acquiring services.

Federal agencies spend billions of tax dollars each year to buy services ranging from clerical support and consulting services to information technology services, such as network support, to the management and operation of government facilities, such as national laboratories. Our work continues to show that some service procurements are not being done efficiently, putting taxpayer dollars at risk. In particular, agencies are not clearly defining their requirements, fully considering alternative solutions, performing rigorous price analyses, or adequately overseeing contractor performance. Further, it is becoming increasingly evident that agencies are at risk of not having enough of the right people with the right skills to manage service procurements. Consequently, a key question we face in the federal government is whether we have today, or will have tomorrow, the ability to acquire and manage the procurement of the increasingly sophisticated services the government needs.

The amount being spent on services is growing substantially. Last year alone, the federal government acquired more than $87 billion in services—a 24-percent increase in real terms from fiscal year 1990. In fact, government purchases of services now account for 43 percent of all federal contracting expenses—surpassing supplies and equipment as the largest component of federal contract spending. Another dimension to this issue is that federal agencies are increasingly contracting out for information technology services. The growth in service contracting has largely been driven by the government's increased purchases of two types of services: information technology services, which increased from $3.7 billion in fiscal year 1990 to about $13.4 billion in fiscal year 2000, and professional, administrative, and management support services, which rose from $12.3 billion in fiscal year 1990 to $21.1 billion in fiscal year 2000. The increase in the use of service contracts coincided with a 21-percent decrease in the federal workforce, which fell from about 2.25 million employees as of September 1990 to 1.78 million employees as of September 2000. (The percentage arithmetic behind these figures is sketched below.)

As federal spending and employment patterns were changing, changes were also occurring in the way that federal agencies buy services. Specifically, there has been a trend toward agencies purchasing professional services using contracts awarded and managed by other agencies. For example, in 1996, the General Services Administration (GSA) began offering information technology services under its Federal Supply Schedule program, and it now offers services ranging from professional engineering to laboratory testing and analysis to temporary clerical and professional support services. The use of the schedule program to acquire services has increased significantly over the past several years. Other governmentwide contracts have also come into use in recent years.
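For readers who want to verify the percentages cited above, a minimal sketch of the percentage-change arithmetic, using the dollar and headcount figures from this testimony (spending is already expressed in real terms):

```python
# Percentage-change arithmetic behind the service-spending and workforce
# figures cited above (spending in billions of real dollars, workforce in
# millions of employees).

def pct_change(old, new):
    return (new - old) / old * 100

it_services = pct_change(3.7, 13.4)    # IT services, FY1990 -> FY2000
support_svcs = pct_change(12.3, 21.1)  # professional/admin/mgmt support
workforce = pct_change(2.25, 1.78)     # federal workforce, 9/1990 -> 9/2000

print(f"IT services: {it_services:+.0f}%")        # roughly +262%
print(f"Support services: {support_svcs:+.0f}%")  # roughly +72%
print(f"Federal workforce: {workforce:+.0f}%")    # roughly -21%, as cited
```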
The Federal Acquisition Streamlining Act of 1994 authorized federal agencies to enter into multiple-award, task- and delivery-order contracts for goods and services. These contracts provide agencies with a great deal of flexibility in buying goods or services while minimizing the burden on government contracting personnel to negotiate and administer contracts. The Clinger-Cohen Act of 1996 authorized the use of multiagency contracts and what have become known as governmentwide acquisition contracts to facilitate purchases of information technology-related products and services such as network maintenance and technical support, systems engineering, and integration services.

While we have seen the environment change considerably, what we have not seen is a significant improvement in federal agencies' management of service contracts. Simply stated, the poor management of service contracts undermines the government's ability to obtain good value for the money spent. This contributed to our decision to designate contract management a high-risk area for the Departments of Defense and Energy, the two largest purchasers within the federal government. Improving contract management is also among the management challenges faced by other agencies.

Compounding these problems is the agencies' past inattention to strategic human capital management. As I noted earlier, we are concerned that federal agencies' human capital problems are eroding the ability of many agencies—and threatening the ability of others—to perform their missions economically, efficiently, and effectively. For example, we found that the initial rounds of downsizing were set in motion without considering the longer-term effects on agencies' performance capacity. Additionally, a number of individual agencies drastically reduced or froze their hiring efforts for extended periods. Consequently, following a decade of downsizing and curtailed investments in human capital, federal agencies currently face skills, knowledge, and experience imbalances that, without corrective action, will worsen, especially in light of the numbers of federal civilian workers becoming eligible to retire in the coming years.

DOD's approach to civilian downsizing, for example, was not oriented toward shaping the makeup of the force. Rather, DOD relied primarily on voluntary turnover and retirements, freezes on hiring authority, and its authority to offer early retirements and "buy-outs" to achieve reductions. While DOD had these kinds of tools available to manage its civilian downsizing and to mitigate the adverse effects of force reductions, its approach to civilian force reductions was not oriented toward shaping the workforce for the future. In contrast, DOD did a much better job managing active-duty military force reductions because it followed a policy of trying to achieve and maintain a degree of balance between its accessions and losses in order to shape its forces with regard to rank, specialties, and years of service. As a result, DOD's current civilian workforce is not balanced and therefore poses risks to the orderly transfer of institutional knowledge. According to DOD's Acquisition 2005 Task Force, "After 11 consecutive years of downsizing, we face serious imbalances in the skills and experience of our highly talented and specialized civilian workforce," putting DOD on the verge of a retirement-driven talent drain. DOD's leadership had anticipated that using streamlined acquisition procedures would improve the efficiency of contracting operations and help offset the effects of workforce downsizing.
However, the DOD Inspector General reported that the efficiency gains from using streamlined procedures had not kept pace with acquisition workforce reductions. The Inspector General reported that while the workforce had been reduced by half, DOD's contracting workload had increased by about 12 percent and that senior personnel at 14 acquisition organizations believed that workforce reductions had led to problems such as less contractor oversight. Data from OPM show that while DOD downsized its workforce to a greater extent than the other agencies during the 1990s, both DOD and the other agencies will have about 27 percent of their current contracting officers eligible to retire through the end of fiscal year 2005. Consequently, without appropriate workforce planning, federal agencies could lose a significant portion of their contracting knowledge base.

For further information regarding this testimony, please contact Victor S. Rezendes, Managing Director, Strategic Issues, on (202) 512-6806 or at rezendesv@gao.gov. For information specific to the information technology portion of this testimony, please contact David L. McClure, Director, Information Technology Management, on (202) 512-6240 or at mcclured@gao.gov. For further information specific to the acquisitions-related portion of this testimony, please contact David E. Cooper, Director, Acquisition and Sourcing Management, on (202) 512-4841 or at cooperd@gao.gov. Individuals making key contributions to this testimony included Stephen Altman, Margaret Davis, Ralph Dawn, Gordon Lusby, and Joseph Santiago.

Figure: GAO's human capital framework rests on five cornerstones, each with associated human capital focus areas:

Strategic Planning: Establish the agency's mission, vision for the future, core values, goals and objectives, and strategies (shared vision).

Organizational Alignment: Integrate human capital strategies with the agency's core business practices (improving workforce planning; integrating the "HR" function).

Leadership: Foster a committed leadership team and provide for reasonable continuity through succession planning (defining leadership; building teamwork and communications; ensuring continuity).

Talent: Recruit, hire, develop, and retain employees with the skills needed for mission accomplishment (recruiting and hiring; training and professional development; workforce deployment; compensation; employee-friendly workplace).

Performance Culture: Empower and motivate employees while ensuring accountability and fairness in the workplace.

Illustrative human capital challenges at selected agencies:

Organizational culture problems, including resistance from affected USDA agencies and employees, have hampered departmentwide reorganization and modernization efforts. Further, the nation's food safety system, in which USDA plays a major role, continues to suffer from inconsistent oversight, poor coordination, and inefficient deployment of resources.

Untrained and inexperienced staff hamper effective management of $3 billion in Indian trust funds.

A lack of sufficient numbers of experienced staff with the right expertise limits the ability of Commerce and two other trade agencies to monitor and enforce trade agreements.

In the past two years, the military services have struggled to meet recruiting goals. Attrition among first-time enlistees has reached an all-time high. The services face shortages among junior officers and problems in retaining a range of uniformed personnel, including intelligence analysts, computer programmers, and pilots.
On the civilian side, skills and experience imbalances following downsizing are jeopardizing acquisitions and logistics capabilities.

Headquarters and field staff have lacked contract management skills to oversee large projects, such as the cleanup of radioactive and hazardous waste sites.

EPA has not yet implemented any systematic means of determining the right size, skills needs, or deployment of its workforce to carry out its mission and achieve its strategic goals and objectives, despite the demand for new skills due to technological changes and the shift in EPA's regional environmental responsibilities to the states, as well as growing retirement eligibilities in its workforce.

In major acquisition projects, FAA has lacked technical expertise to address vital project issues.

Medicare's leadership problems include the lack of any official whose sole responsibility it is to run the program. Further, frequent leadership changes at CMS have hampered long-term Medicare initiatives and the pursuit of a consistent management strategy. CMS' workforce lacks skills needed to meet recent legislative requirements. The mismatch between CMS' administrative capacity and its mandate could leave Medicare unprepared to handle future population growth and medical technology advances.

As HUD's reorganization moves into its final phases, workload imbalances pose programmatic challenges to several specialty centers and field offices. Single family mortgage insurance programs administered by HUD's Federal Housing Administration have been marked by a number of human capital challenges, including insufficient staff. Further, insufficient or inexperienced staff led to problems in quality assurance reviews for 203(k) home rehabilitation loans and oversight of appraisers and mortgage lenders.

Lack of staff to perform intelligence functions and unclear guidance for retrieving and analyzing information hamper efforts to combat the growing problem of alien smuggling.

Difficulties replacing experienced fire personnel threaten firefighting capabilities during catastrophic events.

IRS lacks reliable cost and operational information to measure the effectiveness of its tax collection and enforcement programs and to judge whether it is appropriately allocating its staff resources among competing management priorities.

Staff and skills losses following downsizing pose potentially serious problems for the safety and planned flight rate of the space shuttle.

Historically, the Park Service's decentralized priority-setting and accountability systems left it without the means to monitor progress toward achieving its goals or hold park managers accountable for the results of park operations. The park concessions program continues to face management problems, including inadequate qualifications and training of the agency's concession specialists and concessions contracting staff. Insufficient fire safety training has contributed to fire safety risks at visitor centers, hotels, and other national park buildings.

NRC's organizational culture is struggling with the agency's new "risk-informed" regulatory approach.
Further, NRC’s ability to maintain the skills needed to achieve its mission and fill the gaps created by growing retirement eligibilities could be threatened by the decline in university enrollments in nuclear engineering and other fields related to nuclear safety. Because the agency did not adequately link its contracting decisions to long-term strategic planning, it may not have the cost-effective mix of contractor and federal employees needed to meet future workload challenges. Further, PBGC employees who monitor contractors lack adequate guidance and policies essential to monitoring contractor performance. Increasing demand for services, imminent retirement of a large part of its workforce, changing customer expectations, and mixed success in past technology investments will challenge SSA’s ability to meet its service delivery demands, which include faster and more accurate benefit claims determinations and increased emphasis on returning the disabled to work. Issues related to the quality of life at overseas posts, career development opportunities, and talent management are hampering recruitment and retention of Foreign Service Officers. Efforts to determine the right size and composition of overseas posts have begun, but State faces challenges in aligning its workforce with new economic, political, security, and technological requirements. Also, staffing shortfalls are hampering counternarcotics programs and efforts to combat visa fraud. US Agency for International Development Staffing shortfalls in the procurement area have hampered the agency’s ability to initiate and monitor contracts, thus delaying reconstruction assistance in the wake of natural disasters in Central America and the Caribbean. A national nursing shortage could adversely affect VA’s efforts to improve patient safety in VA facilities and put veterans at risk. Further, VA’s training and recruitment programs may not be adequate to ensure a sufficient workforce of competent claims processors, which would likely undermine efforts to improve current problems of claims processing backlogs and errors.
This testimony discusses the federal government's strategic human capital management challenges, particularly in the information technology (IT) area. No management issue facing federal agencies could be more critical to the nation than their approach to attracting, retaining, and motivating people. Having enough people with the right mix of knowledge and skills will make the difference between success and failure. This is especially true in the information technology area, where widespread shortfalls in human capital have undermined agency and program performance. The federal government today faces pervasive human capital challenges that are eroding the ability of many agencies--and threatening the ability of others--to economically, efficiently, and effectively carry out their missions. How successfully the federal government acquires and uses information technology will depend on its ability to build, prepare, and manage its information technology workforce. To address the federal government's human capital challenges as a whole, GAO believes that (1) agencies must take all administrative steps available to them under current laws and regulations to manage their people for results; (2) the Administration and Congress should pursue opportunities to put new tools and flexibilities in place that will help agencies attract, retain, and motivate employees--both overall and, especially, in connection with critical occupations such as those in IT; and (3) all interested parties should work together to determine the nature and extent of more comprehensive human capital (or civil service) reforms that should be enacted over time. These reforms should include greater emphasis on skills, knowledge, and performance in connection with federal employment and compensation decisions, rather than the passage of time and rate of inflation, as is often the case today.
In May 2009, the President announced the creation of a new Global Health Initiative (GHI) and proposed $63 billion in funding for all global health programs, including HIV/AIDS, malaria, tuberculosis, and maternal and child health, through 2014. According to the proposal, the majority of this funding—$51 billion, or 81 percent—is slated for global HIV/AIDS, tuberculosis, and malaria programs. For fiscal year 2009, State and USAID allocated about $7.3 billion for global health and child survival programs, including more than $5.6 billion for HIV/AIDS programs. For fiscal year 2010, State and USAID allocated approximately $7.8 billion for global health and child survival programs, including $5.7 billion for HIV/AIDS. For fiscal year 2011, the President proposed spending $8.5 billion on global health and child survival programs, including $5.9 billion for HIV/AIDS. In February 2010, the administration released a consultation document on GHI implementation, focusing on coordination and integration of global health programs, among other things, and setting targets for achieving health outcomes. The document also proposed selection of up to 20 countries—known as GHI Plus countries—that will receive additional funding and technical assistance under the GHI. Congress first authorized PEPFAR in 2003 and, in doing so, created within State a Coordinator of the U.S. Government Activities to Combat HIV/AIDS Globally, which State redesignated as the Office of the U.S. Global AIDS Coordinator (OGAC). OGAC establishes overall PEPFAR policy and program strategies; coordinates PEPFAR programs; and allocates PEPFAR resources from the Global Health and Child Survival account to U.S. implementing agencies, including USAID and the Department of Health and Human Services' (HHS) CDC. USAID and CDC also receive direct appropriations to support global HIV/AIDS and other global health programs, such as tuberculosis, malaria, and support for maternal and child health. In fiscal years 2004 through 2008—the first 5 years of PEPFAR—the U.S. government directed more than $18 billion to PEPFAR implementing agencies and the Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund). In 2008, Congress reauthorized PEPFAR at $48 billion to continue and expand U.S.-funded HIV/AIDS and other programs through fiscal year 2013. Although PEPFAR initially targeted 15 countries, known as focus countries, since its establishment PEPFAR has made significant investments in 31 partner countries and 3 regions. Representatives of PEPFAR implementing agencies (country teams) jointly develop country operational plans (COP) for the 15 focus countries and an additional 16 nonfocus countries, as well as regional operational plans (ROP) for three regions, to document U.S. investments in, and anticipated results of, U.S.-funded programs to combat HIV/AIDS. The country teams submit the operational plans to OGAC for review and ultimate approval by the U.S. Global AIDS Coordinator. As such, these operational plans serve as the basis for approving annual U.S. bilateral HIV/AIDS funding, notifying Congress, and allocating and tracking budgets and targets. Some nonfocus countries receiving U.S. HIV/AIDS funding do not submit a PEPFAR operational plan; OGAC reviews and approves HIV/AIDS-related foreign assistance funding for these countries through foreign assistance operational plans. Table 1 shows the countries and regions that received U.S. foreign assistance for HIV/AIDS programs in fiscal years 2001-2008.
In 2009, UNAIDS estimated that $7 billion would be needed in developing countries in 2010 to reach HIV/AIDS treatment and care program targets, which are generally defined as reaching 80 percent of the target population requiring treatment. Sub-Saharan Africa accounts for about half (49 percent) of estimated needs for all HIV/AIDS programs in developing countries. UNAIDS's estimate includes provision of ART, testing and counseling, treatment for opportunistic infections, nutritional support, laboratory testing, palliative care, and the cost of drug-supply logistics. The costs for CD4 blood tests are also included.

In fiscal years 2006-09, PEPFAR funding for ART made up nearly half (46 percent) of PEPFAR's approved budget for prevention, treatment, and care programs. (See fig. 1.) ART funding generally comprised treatment services (about 55 percent of approved treatment funding); ARV drug procurement (about 32 percent of approved treatment funding); and laboratory infrastructure (about 13 percent of approved treatment funding). In 2008, OGAC reported that tentative approval of generic ARV drugs had generated significant savings for PEPFAR. As of September 2010, HHS's Food and Drug Administration had approved, or tentatively approved, 116 ARV formulations under its expedited review process, which allows all ARV drugs to be rapidly reviewed for quality standards and subsequently cleared for purchase under PEPFAR.

According to PEPFAR's Five-Year Strategy, released in December 2009, PEPFAR plans to provide direct support for more than 4 million people on ART, more than doubling the number of people directly supported on treatment during the first 5 years of PEPFAR. The strategy seeks to focus PEPFAR support on specific individuals requiring ART by prioritizing individuals with CD4 cell counts under 200/mm3. (The World Health Organization's November 2009 guidance recommends initiating ART at an earlier stage of the disease's progression, irrespective of clinical symptoms; see Rapid Advice: Antiretroviral Therapy for HIV Infection in Adults and Adolescents (Geneva: WHO, 2009), www.who.int/entity/hiv/pub/arv/rapid_advice_art.pdf.) In addition, in countries with high coverage rates that are expanding eligibility for treatment, PEPFAR will provide technical assistance and support for the overall treatment infrastructure. PEPFAR also will expand efforts to better link testing and counseling with treatment and care and, in conjunction with its prevention of mother-to-child transmission programs, will support expanded treatment to pregnant women.

As we have previously reported, federal financial standards call on agencies to use costing methods in their planning to determine resources needed and to evaluate program performance, among other things. Program managers should use costing information to improve the efficiency of programs. In addition, such information can be used by Congress to make decisions about allocating financial resources, authorizing and modifying programs, and evaluating program performance. In 2008, we found that PEPFAR country teams identified and analyzed program costs in varying ways, and we recommended that the Secretary of State direct OGAC to provide guidance to PEPFAR country teams on using costing information in their planning and budgeting.

Overall, U.S. bilateral spending on global HIV/AIDS and other health programs generally increased in fiscal years 2001 through 2008, particularly for HIV/AIDS programs. From 2001 through 2003, U.S. bilateral spending on global HIV/AIDS rose, while spending on other global health programs dropped slightly.
As would be expected given PEPFAR’s significant investment, from fiscal years 2004 through 2008, U.S. bilateral HIV/AIDS spending showed the greatest increase in PEPFAR focus countries, relative to nonfocus countries and regions with PEPFAR operational plans and other countries receiving HIV/AIDS assistance. In addition, our analysis determined that U.S. spending for other health- related health assistance also increased most for PEPFAR focus countries. Spending growth rates varied among three key regions—sub-Saharan Africa, Asia, and Latin America and the Caribbean—as did these regions’ shares of bilateral HIV/AIDS and other health spending following establishment of PEPFAR. (See app. II for additional information on U.S. bilateral foreign assistance spending on HIV/AIDS and other health programs in fiscal years 2001 through 2008.) Overall, U.S. bilateral foreign assistance spending on both global HIV/AIDS and other health programs increased in fiscal years 2001 through 2008. Although spending on other health programs decreased slightly from 2001 through 2003, U.S. spending on both HIV/AIDS and other health-related foreign assistance programs grew from 2004 through 2008, the first 5 years of PEPFAR. Annual growth in U.S. spending on global HIV/AIDS was more robust and consistent than annual growth for other global health spending (see table 2 and fig. 2). 2001-2003. Prior to the implementation of PEPFAR, U.S. bilateral spending on HIV/AIDS programs grew rapidly, while U.S. spending on other health programs fell slightly. HIV/AIDS. The U.S. government spent less on global HIV/AIDS programs than on other health-related programs in fiscal years 2001-2003. However, spending on HIV/AIDS grew rapidly prior to implementation of PEPFAR. Other health. U.S. spending on other health-related programs decreased from 2001 to 2003. However, total spending for these programs during this period was more than three times greater than the total for HIV/AIDS- related foreign assistance programs. 2004-2008. Following implementation of PEPFAR, U.S. bilateral spending on both global HIV/AIDS and other health-related programs increased overall, with more rapid and consistent growth in spending for HIV/AIDS programs. HIV/AIDS. In fiscal year 2004, U.S. spending on HIV/AIDS programs was roughly equivalent to the total for the previous 3 years combined; in fiscal year 2008, annual U.S. spending on global HIV/AIDS programs was nearly three times the 2004 total. In addition, U.S. spending on HIV/AIDS programs in 2005 was, for the first time, higher than spending on other health programs. By 2008, almost twice as much was spent on HIV/AIDS programs as on other health programs. Other health. Although U.S. spending on other health programs also increased overall from fiscal year 2004 through 2008, annual spending was less consistent and decreased in 2006 and 2007. Our analysis shows differences in growth trends in U.S. bilateral spending on HIV/AIDS and other health programs before and after implementation of PEPFAR for three distinct groups of countries: PEPFAR focus countries, nonfocus countries and regions with PEPFAR operational plans, and all other countries receiving HIV/AIDS foreign assistance (i.e., nonfocus countries receiving HIV/AIDS assistance that do not submit PEPFAR operational plans to OGAC). In fiscal years 2001 through 2003, U.S. bilateral spending on global HIV/AIDS programs grew for countries in all three groups, while spending on other health programs increased at lower rates. 
From 2004 through 2008, the average annual growth rate in U.S. bilateral spending on global HIV/AIDS programs was, predictably, greatest in focus countries, as was the growth rate for spending on other health programs in these countries (see table 3).

For the 15 countries that would become PEPFAR focus countries, U.S. bilateral spending on both HIV/AIDS and other health programs increased steadily from 2001 through 2003, with higher growth for HIV/AIDS spending. From 2004 through 2008, U.S. bilateral spending on global HIV/AIDS-related foreign assistance programs continued to increase significantly, while spending on other health programs grew modestly overall. From 2004 through 2008, total U.S. bilateral spending on HIV/AIDS-related foreign assistance programs in PEPFAR focus countries was more than seven times greater than spending on other health programs. (See fig. 3.)

For the 16 nonfocus countries and three regions that eventually would submit operational plans to receive PEPFAR funding, U.S. bilateral spending on both HIV/AIDS and other health-related foreign assistance programs increased from 2001 through 2003 (see fig. 4), but at lower rates and less consistently than for the focus countries. From 2001 through 2003, U.S. bilateral spending on other health-related foreign assistance programs was about three times greater than spending on HIV/AIDS programs in these countries and regions, although spending on HIV/AIDS programs grew more rapidly. From 2004 through 2008, U.S. bilateral spending on both global HIV/AIDS and other health programs increased overall, with greater spending on other health programs for the 5-year period.

In all other countries that received some U.S. assistance for HIV/AIDS programs from 2001 through 2008 but did not submit PEPFAR operational plans—a total of 47 countries—U.S. bilateral spending on both HIV/AIDS and other health-related foreign assistance programs fluctuated from year to year but increased overall (see fig. 5). In addition, U.S. bilateral spending for other health programs greatly exceeded spending for HIV/AIDS programs both before and after the establishment of PEPFAR. From 2001 through 2003, U.S. bilateral spending on HIV/AIDS programs in these countries nearly quadrupled; spending on other health programs amounted to more than 12 times that for HIV/AIDS programs and increased slightly over the period. From 2004 through 2008, U.S. bilateral spending on other health programs continued to greatly exceed spending on HIV/AIDS-related programs in these countries; spending on both HIV/AIDS and other health programs fluctuated from year to year and grew at similar rates overall.

In fiscal years 2001 through 2008, the majority of U.S. bilateral HIV/AIDS program spending was in sub-Saharan Africa, Asia, and Latin America and the Caribbean—three regions where the 15 PEPFAR focus countries and 14 of the 16 nonfocus countries with PEPFAR operational plans are located—with the greatest U.S. spending on global HIV/AIDS foreign assistance programs in sub-Saharan Africa. From 2004 through 2008, following the establishment of PEPFAR, the share of U.S. bilateral spending on other health programs directed to countries in sub-Saharan Africa and Latin America and the Caribbean declined, while the share of U.S. spending on other health programs in Asia and in other regions increased. (See fig. 6.) Average annual growth rates in spending on HIV/AIDS and other health programs also varied significantly across these three regions (see table 4).
Table 4: Average Annual Growth Rates for Global U.S. HIV/AIDS- and Other Health-Related Foreign Assistance Spending, by Region, Fiscal Years 2001-2008. [The table presents rates for sub-Saharan Africa, Asia, and Latin America and the Caribbean, for the pre-PEPFAR period and the first 5 years of PEPFAR; the underlying figures are not reproduced here.]

U.S. bilateral foreign assistance spending on HIV/AIDS programs in sub-Saharan Africa—which includes 12 of the 15 focus countries and 8 of the 16 nonfocus countries with PEPFAR operational plans—increased rapidly both before and after the establishment of PEPFAR. In 2003, U.S. bilateral spending on HIV/AIDS programs was nearly two times greater than spending on other health programs, and by 2008 it was more than four times greater. U.S. bilateral spending on other health programs declined overall from 2001 to 2003 and remained steady from 2004 to 2007, but began to grow substantially in 2008. (See fig. 7.)

U.S. bilateral foreign assistance spending on both HIV/AIDS and other health-related foreign assistance programs in Asia—where 1 of the 15 focus countries as well as 5 nonfocus countries and 1 region that submit PEPFAR operational plans are located—increased overall from 2001 to 2008. Overall bilateral spending on other health programs was three times larger than spending on HIV/AIDS programs throughout the period. (See fig. 8.)

From 2001 through 2008, total U.S. bilateral foreign assistance spending on HIV/AIDS programs in Latin America and the Caribbean—where 2 of the 15 focus countries as well as a nonfocus country and two regions with PEPFAR operational plans are located—increased continuously. During this period, U.S. bilateral spending on other health programs in these countries and regions fluctuated from year to year and declined overall. Bilateral spending on other health programs was consistently greater than spending on HIV/AIDS programs during this period; however, in 2008, annual spending on HIV/AIDS programs was nearly equal to spending for other health programs (see fig. 9).

To inform policy and program decisions related, in part, to expanding efforts to provide ART in developing countries, OGAC, USAID, and UNAIDS have adopted three different models for ART cost analyses. OGAC uses the PEPFAR ART Costing Project Model (PACM) to estimate and track PEPFAR-supported ART costs in individual PEPFAR countries and across these countries. USAID and its partners use the HIV/AIDS Program Sustainability Analysis Tool (HAPSAT) to estimate resources needed to meet individual countries' ART goals, among other things. UNAIDS and USAID use a suite of models referred to as Spectrum to project ART costs in individual countries and globally. Table 5 provides information on the three costing models. For additional information on the components of these three models, see appendix III.

Although the models have different purposes, a 2009 comparison study conducted by their developers found that the three models produced similar overall ART cost estimates given similar data inputs. According to the models' developers, data used for one model can be entered into another to generate cost estimates and projections. For example, cost data collected in Nigeria for use in HAPSAT were also used in PACM to inform PEPFAR global average treatment cost estimates. Such cost projections also can help decision makers to estimate the cost-related effects of policy and protocol changes, such as changes made in response to the World Health Organization's November 2009 recommendation that HIV patients initiate ART at an earlier stage of the disease's progression.
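Although the three models differ in the patient and cost categories they track (detailed in appendix III), they share a common accounting core: multiply the number of patients in each category by per-patient drug and nondrug unit costs, then add overhead where the model captures it (PACM and HAPSAT). Schematically, with an illustrative category set G (e.g., adult vs. pediatric, first- vs. second-line):

\[
C_{\text{total}} = \sum_{g \in G} N_g \left( c_g^{\text{drug}} + c_g^{\text{nondrug}} \right) + C_{\text{overhead}}
\]

where N_g is the number of patients in category g and the c terms are the corresponding per-patient unit costs. A minimal sketch of the same roll-up in code follows; all counts, unit costs, and the overhead rate are hypothetical, and the category names follow this report's summary of the models rather than any model's actual implementation:

```python
# Schematic ART total-cost roll-up shared by the three models; overhead
# is applied as a simple above-facility markup, in the spirit of PACM
# and HAPSAT. All figures are hypothetical.

groups = [
    # (category, patients, annual drug cost, annual nondrug cost)
    ("adult, first-line, generic", 40_000, 130.0, 220.0),
    ("adult, second-line",          3_000, 650.0, 260.0),
    ("pediatric, first-line",       4_000, 180.0, 240.0),
]

facility = sum(n * (drug + nondrug) for _, n, drug, nondrug in groups)
overhead_rate = 0.15  # hypothetical above-facility overhead share
total = facility * (1 + overhead_rate)

print(f"Facility-level cost: ${facility:,.0f}")
print(f"Total with overhead: ${total:,.0f}")
```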
In coordination with HHS and USAID, State's OGAC reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of State, the Office of the U.S. Global AIDS Coordinator, the USAID Office of HIV/AIDS, the HHS Office of Global Health Affairs, and the CDC Global AIDS Program. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Responding to legislative directives, this report examines U.S. bilateral foreign assistance spending on global HIV/AIDS and other health-related programs in fiscal years 2001-2008. The report also provides information on models used to estimate HIV treatment costs. To examine trends in U.S. bilateral spending on global HIV/AIDS- and other health-related foreign assistance programs, we analyzed data from the Foreign Assistance Database (FADB) provided by the U.S. Agency for International Development (USAID) and interviewed State Department, USAID, and Department of Health and Human Services (HHS) officials in Washington, D.C., and Centers for Disease Control and Prevention (CDC) officials in Atlanta. We also interviewed representatives of the Kaiser Family Foundation who have conducted similar research and analysis. We reviewed relevant articles and reports regarding international and U.S. global health assistance funding and examined relevant data on other donor and U.S. foreign assistance.

Congress, U.S. agencies, and research organizations use varying definitions of global health programs, with inclusion of safe water and nutrition programs being one varying factor among definitions. Congress funds global health programs through a number of appropriations accounts: Foreign Operations; Labor, Health and Human Services, and Education; and Defense; and through several U.S. agencies. The State Department, USAID, and HHS's CDC are the primary U.S. agencies receiving congressional appropriations to implement global health programs, including programs to combat HIV/AIDS. Through foreign operations accounts administered by USAID and State, Congress specifies support for five key global health programs: child survival and maternal health, vulnerable children, HIV/AIDS, other infectious diseases, and family planning and reproductive health. In addition, Congress specifies support for five key CDC global health programs: HIV/AIDS, malaria, global disease detection, immunizations, and other global health. CDC also allocates part of its tuberculosis and pandemic flu budget for international programs, and State and USAID may transfer funds to CDC for specific activities. In addition to these programs, USAID and CDC include other programs related to global health. For example, USAID reports specific nutrition and environmental health programs in its global health portfolio. Likewise, CDC also uses its resources to provide international technical assistance when requested, such as for disease outbreak response (e.g., pandemic influenza preparedness and prevention) or reproductive health. The Committee on the U.S.
Commitment to Global Health at the Institute of Medicine (IOM) defined global health programs as those aimed at improving health for all people around the world by promoting wellness and eliminating avoidable disease, disability, and death. According to the Organisation for Economic Co-operation and Development (OECD), global health includes the following components: health care; health infrastructure; nutrition; infectious disease control; health education; health personnel development; health sector policy, planning, and programs; medical education, training, and research; and medical services. In its report on donor funding for global health, the Kaiser Family Foundation combined data from four OECD categories to construct its definition of global health: health; population policies and programs and reproductive health (which includes HIV/AIDS and sexually transmitted diseases); water supply and sanitation; and other social infrastructure and services.

For the purposes of this report, we defined U.S. global spending for HIV/AIDS programs as foreign assistance for activities related to HIV/AIDS control, including information, education, and communication; testing; prevention; treatment; and care. We defined U.S. spending for other health-related programs as foreign assistance for general and basic health and population and reproductive health policies and programs (except those related to HIV/AIDS). General and basic health includes health policy and administrative management, medical education and training, medical research, basic health care, basic health infrastructure, basic nutrition, infectious disease control, health education, and health personnel development. Population and reproductive health policies and programs include population policy and administrative management, reproductive health care, family planning, and personnel development for population and reproductive health.

The specific analyses presented in this report examine disbursement levels and growth trends from fiscal years 2001 to 2008 for bilateral HIV/AIDS and other health-related foreign assistance programs by time period (pre-PEPFAR and first 5 years of PEPFAR for all countries); PEPFAR country status (focus countries with PEPFAR operational plans, nonfocus countries with PEPFAR country or regional operational plans, and other nonfocus countries receiving HIV/AIDS-related foreign assistance from 2001 to 2008); and region (sub-Saharan Africa, Latin America and the Caribbean, and Asia, which received the majority of U.S. spending on bilateral HIV/AIDS-related foreign assistance). We examined disbursements—amounts paid by federal agencies to liquidate government obligations—of U.S. bilateral foreign assistance for global HIV/AIDS and other health programs because, unlike other data, disbursement data directly reflect the foreign assistance reaching partner countries. We used USAID's deflator to convert nominal dollar amounts to constant 2010 dollar amounts, which are appropriate for spending trend analysis. As such, it is important to remember that the disbursement figures for HIV/AIDS- and other health-related foreign assistance programs presented in this report differ from appropriation or commitment data, which may be reported elsewhere. Because we focused on bilateral disbursements, our analysis excludes U.S. contributions to the Global Fund to Fight AIDS, Tuberculosis and Malaria.
In addition, about $4.7 billion and $3.3 billion in disbursements for HIV/AIDS programs and other health-related foreign assistance programs, respectively, from 2001 to 2008 were not specified for an individual country or region in the FADB. As such, our analysis of bilateral spending levels and growth trends by PEPFAR country status and geographical region excludes these disbursements. We assessed the reliability of disbursement data from the FADB and determined them to be sufficiently reliable for the purposes of reporting in this manner. In assessing the data, we interviewed USAID officials in charge of compiling and maintaining the FADB, reviewed the related documentation, and compared the data to published data from other sources. We also determined that, in general, USAID takes steps to ensure the consistency and accuracy of the disbursement data reported by U.S. government agencies, including by verifying possible inconsistencies or anomalies in the data received, providing guidance and other communications to agencies about category definitions, and comparing the data to other data sources. Although we did not assess the reliability of the data for complex statistical analyses, we determined that the data did not allow the identification of causal relationships between funding levels over time or among relevant categories; as such, we did not attempt an empirical analysis of the impact of PEPFAR on other health funding.

To describe models used to estimate the cost of providing antiretroviral therapy (ART), we interviewed officials from State's Office of the U.S. Global AIDS Coordinator, USAID, and CDC in Washington, D.C., and Atlanta. We also interviewed Joint United Nations Programme on HIV/AIDS (UNAIDS) officials in Washington, D.C., and Geneva, Switzerland, as well as developers of the costing models. We analyzed user manuals and guides for these models, as well as spreadsheets and additional information and technical comments provided by the U.S. agencies and model developers. We reviewed relevant literature for information on ART costing models, as well as the Leadership Act and previous GAO work regarding requirements for, and the importance of, cost information in program decision making.

For fiscal years 2001 to 2008, U.S. bilateral foreign assistance spending for HIV/AIDS-related health programs varied significantly by country for both the 15 PEPFAR focus countries and the 16 countries and three regions with PEPFAR operational plans. Table 6 presents U.S. bilateral foreign assistance spending in constant dollars, by country, on HIV/AIDS programs for fiscal years 2001-2008. As noted in appendix I, we converted nominal dollar amounts to constant 2010 dollars, which are appropriate for analysis of trends in U.S. foreign assistance spending in global health but do not represent in-year actual spending amounts.

For fiscal years 2001 to 2008, U.S. bilateral foreign assistance spending for other health programs also varied significantly by country for both the 15 PEPFAR focus countries and the 16 countries and three regions with PEPFAR operational plans. Table 7 presents U.S. bilateral foreign assistance spending in constant dollars, by country, on other health-related (i.e., non-HIV/AIDS) programs for fiscal years 2001-2008. As noted in appendix I, we converted nominal dollar amounts to constant 2010 dollars, which are appropriate for analysis of trends in U.S. foreign assistance spending in global health but do not represent in-year actual spending amounts.
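As a concrete illustration of the constant-dollar conversion just described, a minimal sketch follows. The deflator values are invented for illustration; USAID's actual deflator series is not reproduced here:

```python
# Converting nominal disbursements to constant 2010 dollars with a
# deflator indexed to 2010 = 1.00; deflator values are illustrative.

DEFLATOR = {2001: 0.82, 2004: 0.88, 2008: 0.98, 2010: 1.00}

def to_constant_2010(nominal_dollars, fiscal_year):
    return nominal_dollars / DEFLATOR[fiscal_year]

# A $500 million disbursement in FY2004, restated in 2010 dollars:
print(f"${to_constant_2010(500e6, 2004):,.0f}")  # about $568 million
```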
To estimate the total cost of ART, three key models—the PEPFAR ART Costing Project Model (PACM), the HIV/AIDS Program Sustainability Analysis Tool (HAPSAT), and Spectrum—all consider the number of patients and various drug and nondrug cost estimates. PACM and HAPSAT also address overhead costs in total cost calculations. This appendix presents the specific drug and nondrug costs that each model considers in making estimates.

PACM categorizes ART patients as adult or pediatric, new or established, receiving first- or second-line ARV drugs, receiving generic or innovator ARV drugs, and living in a low- or middle-income country. In addition, PACM considers the following cost categories:

Drug costs. PACM categorizes ARV drug costs as generic or innovator and first- or second-line. For each of these categories, PACM accounts for costs associated with supply chain, wastage, inflation, and ARV buffer stock.

Nondrug costs. PACM categorizes nondrug costs as recurrent and investment costs. Recurrent costs include personnel, utilities, building, lab supplies, other supplies, and other drugs; facility-level management and overhead costs are also captured. Investment costs include training, equipment, and construction.

Overhead. PACM categorizes above-facility-level overhead costs as U.S. government, partner government, and implementing partner overhead, as well as U.S. government indirect support to partner governments (e.g., U.S. government support for system strengthening or capacity building of the national HIV/AIDS program).

Table 8 summarizes how PACM categorizes numbers of patients and various unit costs to calculate the total cost of ART, based on estimates of PEPFAR and non-PEPFAR shares of costs derived from PEPFAR-funded empirical studies.

HAPSAT categorizes current ART patients as those receiving first- or second-line ARV drugs. In addition, HAPSAT considers the following cost categories:

Drug costs. HAPSAT categorizes drug costs as first- or second-line ARV drugs.

Nondrug costs. HAPSAT categorizes nondrug costs as labor (e.g., doctor, nurse, and lab technician salaries) and laboratory costs.

Overhead. HAPSAT categorizes overhead as administrative costs, drug supply chain, monitoring and evaluation, and training, based on country data. Overhead estimates are applied at both the facility and above-facility levels.

Table 9 summarizes how HAPSAT categorizes numbers of patients and various unit costs to calculate the total cost of ART.

Spectrum categorizes current ART patients as adult or pediatric and receiving first- or second-line ARV drugs. In addition, Spectrum considers the following cost categories:

Drug costs. Spectrum categorizes drug costs as first- or second-line ARV drugs.

Nondrug costs. Spectrum categorizes nondrug costs as laboratory and service delivery costs (i.e., hospital and clinic stays); service delivery costs include inpatient hospital and outpatient clinic costs.

Table 10 summarizes how Spectrum categorizes numbers of patients and various unit costs to calculate the total cost of ART.

In addition to the contact named above, Audrey Solis (Assistant Director), Todd M. Anderson, Diana Blumenfeld, Giulia Cangiano, Ming Chen, David Dornisch, Lorraine Ettaro, Etana Finkler, Kendall Helm, Heather Latta, Reid Lowe, Grace Lui, Jeff Miller, and Mark Needham made key contributions to this report.

President's Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries' HIV/AIDS Strategies and Promote Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010.
President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4,2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 10, 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. Washington, D.C.: January 11, 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: July 12, 2004. Global Health: Global Fund to Fight AIDS, TB, and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.
U.S. funding for global HIV/AIDS and other health-related programs rose significantly from 2001 to 2008. The President's Emergency Plan for AIDS Relief (PEPFAR), reauthorized in 2008 at $48 billion through 2013, has made significant investments in support of prevention of HIV/AIDS as well as care and treatment for those affected by the disease in 31 partner countries and 3 regions. In May 2009, the President proposed spending $63 billion through 2014 on global health programs, including HIV/AIDS, under a new Global Health Initiative. The Office of the U.S. Global AIDS Coordinator (OGAC), at the Department of State (State), coordinates PEPFAR implementation. The Centers for Disease Control and Prevention (CDC) and the U.S. Agency for International Development (USAID), among other agencies, implement PEPFAR as well as other global health-related assistance programs, such as maternal and child health, infectious disease prevention, and malaria control. Responding to legislative directives, this report examines U.S. disbursements (referred to as spending) for global HIV/AIDS- and other health-related bilateral foreign assistance programs (including basic health and population and reproductive health programs) in fiscal years 2001-2008. The report also provides information on models used to estimate HIV treatment costs. GAO analyzed U.S. foreign assistance data, reviewed HIV treatment costing models and reports, and interviewed U.S. and UNAIDS officials. In fiscal years 2001-2008, bilateral U.S. spending for HIV/AIDS and other health-related programs increased overall, most significantly for HIV/AIDS. From 2001 to 2003--before the establishment of PEPFAR--U.S. spending on global HIV/AIDS programs rose while spending on other health programs dropped slightly. From fiscal years 2004 to 2008, HIV/AIDS spending grew steadily; other health-related spending also rose overall, despite declines in 2006 and 2007. As would be expected, U.S. bilateral HIV/AIDS spending showed the greatest increase in 15 countries--known as PEPFAR focus countries--relative to other countries receiving bilateral HIV/AIDS assistance from fiscal years 2004 through 2008. In addition, GAO's analysis showed that U.S. spending on other health-related bilateral foreign assistance also increased most for PEPFAR focus countries. Spending growth rates varied among three key regions--sub-Saharan Africa, Asia, and Latin America and the Caribbean--as did these regions' shares of HIV/AIDS and other health foreign assistance spending following establishment of PEPFAR. OGAC, USAID, and UNAIDS have adopted three different models to estimate and project antiretroviral therapy (ART) costs. The three models--respectively known as the PEPFAR ART Costing Project Model, the HIV/AIDS Program Sustainability Analysis Tool, and Spectrum--are intended to inform policy and program decisions related, in part, to expanding efforts to provide ART in developing countries.
The Office of Acquisition and Materiel Management (OA&MM) is the principal office within VA headquarters responsible for supporting the agency's programs. The OA&MM includes an Office of Acquisitions that, among other things, provides acquisition planning and support, helps develop statements of work, offers expertise in the areas of information technology and software acquisition, develops and implements acquisition policy, conducts business reviews, and issues warrants for contracting personnel. As of June 2005, the Office of Acquisitions was managing contracts valued at over $18 billion, including option years. In recent years, reports have cited inadequacies in the contracting practices at VA's Office of Acquisitions and also have identified actions needed to improve them. In fiscal year 2001, the VA Inspector General (IG) issued a report that expressed significant concerns about the effectiveness of VA's acquisition system. As a result, the Secretary of Veterans Affairs established, in June 2001, a Procurement Reform Task Force to review VA's procurement system. The task force's May 2002 report set five major goals that it believed would improve VA's acquisition system: (1) leverage purchasing power, (2) standardize commodities, (3) obtain and improve comprehensive information, (4) improve organizational effectiveness, and (5) ensure a sufficient and talented workforce. Issues related to organizational and workforce effectiveness were at the center of the difficulties VA experienced implementing its Core Financial and Logistics System (CoreFLS). The VA IG and an independent consultant issued reports on CoreFLS in August 2004 and June 2004, respectively, and both noted that VA did not do an adequate job of managing and monitoring the CoreFLS contract and did not protect the interests of the government. Ultimately, the contract was canceled after VA had spent nearly $250 million over 5 years. In response to deficiencies noted in the CoreFLS reports, VA sought help to improve the quality, effectiveness, and efficiency of its acquisition function by requesting that the Naval Supply Systems Command (NAVSUP) perform an independent assessment of the Acquisition Operations Service (AOS). NAVSUP looked at three elements of the contracting process: management of the contracting function; contract planning and related functions; and special interest items such as information technology procurements, use of the federal supply schedule, and postaward contract management. In a September 2004 report, NAVSUP identified problems in all three elements. While VA agrees with the NAVSUP report's recommendations, limited progress has been made in implementing the seven key recommendations of the report. VA officials indicate that factors contributing to this limited progress include the absence of key personnel, a high turnover rate, and a heavy contracting workload. We found that VA has neither established schedules for completing action on the recommendations nor established a method to measure its progress. Until VA establishes well-defined procedures for completing action on the NAVSUP recommendations, the benefits of this study may not be fully realized. The status of the seven key recommendations we identified is summarized in table 1. Action taken by VA on the seven key recommendations in the NAVSUP report has varied from no action, to initial steps, to more advanced efforts in specific areas. Long-term improvement plan. NAVSUP recommended that AOS develop a long-term approach to address improvements needed in key areas.
VA acknowledges that establishing a long-term improvement plan is necessary to maintain its focus on the actions that will result in desired organizational and cultural changes. During the course of our review, however, we found that no action has been taken to develop a long-term improvement plan with established milestones for specific actions. Adequate management metrics. NAVSUP recommended that AOS develop metrics to effectively monitor VA's agencywide acquisition and procurement processes, resource needs, and employee productivity because it found that AOS was not receiving information needed to oversee the contracting function. VA officials agree that they need to have the ability to continuously and actively monitor acquisitions from the preaward to contract closeout stages to identify problem areas and trends. VA officials acknowledge that, without adequate metrics, managers are unable to oversee operations and make long-term decisions about their organizations; customers cannot review the status of their requirements without direct contact with contracting officers; and contracting officers are hampered in their ability to view their current workload or quickly assess deadlines. During our review, VA officials stated that they intend to use a balanced scorecard approach for organizational metrics in the future. However, no steps had been taken to establish specific metrics at the time we completed our review. Strategic planning. NAVSUP recommended that AOS develop a supplement to the OA&MM strategic plan that includes operational-level goals to provide employees with a better understanding of their roles and how they contribute to the agency's strategic goals, objectives, and performance measures. VA officials indicated that progress on the strategic plan had been delayed because it will rely heavily on management metrics that will be identified as part of the effort to develop a balanced scorecard. With the right metrics in place, VA officials believe they will be in a much better position to supplement the strategic plan. VA had not revised the strategic plan by the time we finished our review. Process to review contract files at key acquisition milestones. NAVSUP recommended that AOS establish a contract review board to improve management of the agency's contract function. NAVSUP believed that a contracting review board composed of senior contracting officers would provide a mechanism to effectively review contracting actions at key acquisition milestones and provide needed overall management. To enhance these reviews, VA has prepared draft standard operating procedures on how contract files should be organized and documented. Final approval is pending. VA officials indicated, however, that no decisions have been made about how or when they will institute a contract review board as part of the agency's procurement policies and processes. Postaward contract management. NAVSUP recommended that AOS contracting officers pay more attention to postaward contract management by developing a contract administration plan, participating in postaward reviews, conducting contracting officer technical representative reviews, and improving postaward file documentation. We found that VA has taken some action to address postaward contract management. For example, AOS is training a majority of its contracting specialists on the electronic contract management system.
VA officials indicated that the electronic contract management system will help improve its postaward contract management capability. The electronic contract management system is a pilot effort that VA expects to be operational in early 2006. Also, final approval for a draft standard operating procedure for documenting significant postaward actions is pending. Customer relationships. NAVSUP reported that VA's ability to relate to its customers is at a low point and recommended VA take action to improve customer relations. Mechanisms VA officials agreed are needed to improve customer relations include requiring that program reviews include both customer and contracting personnel; making greater use of, and marketing, the existing customer guide to the customer and contracting communities; establishing a customer feedback mechanism, such as satisfaction surveys; placing a customer section on the World Wide Web; and engaging in strategic acquisition planning with customer personnel. We noted that VA is taking some of the actions recommended by NAVSUP. For example, VA has established biweekly meetings with major customer groups, created customer-focused teams to work on specific projects, and nearly completed efforts to issue a comprehensive customer guide. Pending are efforts to include customers in the AOS review process and to develop a customer section on the Web site. Employee morale. The NAVSUP report said that VA employee morale is at a low point and is having an impact on employee productivity. NAVSUP said that AOS needs to respond to its employee morale issue by addressing specific employee concerns related to workload distribution, strategic and acquisition planning, communication, and complaint resolution. VA has taken several actions related to employee morale. Workload distribution issues have been addressed by developing a workload and spreadsheet tracking system and removing restrictions on work schedules for employees at ranks of GS-15 and below. Strategic planning actions completed include the development of mission and vision statements by a cross section of VA personnel and collective involvement in approval of organizational restructuring efforts. Communication and complaint resolution issues are being addressed by facilitating a meeting between AOS management and employees to air concerns. Partially completed actions include the development of a new employee training module, including a comprehensive new employee orientation package. According to VA, new employee training includes the dissemination of draft standard operating procedures. VA is also in the process of developing an employee survey to measure overall employee satisfaction. Discussions with VA officials indicate that the agency believes its limited progress has largely been due to the absence of permanent leadership and insufficient staffing levels. Officials told us that the recommendations will be implemented once key officials are in place. For example, positions for two key VA acquisition managers—the Associate Deputy Assistant Secretary for Acquisitions and the Director for AOS—were unfilled for about 25 months and 15 months, respectively. During the course of our review, however, these positions were filled. As of August 25, 2005, AOS still had not selected permanent personnel for 17 of its 62 positions, including two other key management positions—the Deputy Director of Field Operations and the Deputy Director for VA Central Office Operations, both filled by people in an acting role.
Supervisory leadership has also suffered as a consequence of understaffing, VA officials said. Four of the eight supervisory contract specialist positions are filled by people in an acting role. Critical nonsupervisory positions also have remained unfilled, with 11 contract specialist positions vacant. The absence of contract specialists has largely been caused by a high turnover rate. According to VA officials, the high turnover rate can be attributed to a heavy contracting workload, as well as the other factors identified in the NAVSUP report. When asked, the VA officials we spoke with could not provide specific time frames for completing actions on the recommendations or a method to measure progress. We believe the lack of an implementation plan with time frames and milestones, as well as a way to measure progress, contributed to VA's limited progress in implementing the key NAVSUP recommendations. The seven key NAVSUP recommendations we identified have not been fully implemented. While some progress is being made, it is lacking in those areas that we believe are critical to an efficient and effective acquisition process. If key recommendations for improvement are not adequately addressed, VA has no assurance that the billions of contract dollars managed by its Office of Acquisitions will be spent in an efficient and effective manner, or that it can protect the government's interest in providing veterans with high-quality products, services, and expertise in a timely fashion at a reasonable price. While personnel-related factors have contributed to VA's lack of progress, the absence of schedules for completion of actions and of metrics that could be used to determine agency progress is also an important factor. Current VA officials, even those in an acting capacity, can identify timetables for completing action on key NAVSUP recommendations and establish a means to determine progress. Without these elements of an action plan, the benefits envisioned by the study may not be fully realized. We recommend that the Secretary of Veterans Affairs direct the Deputy Assistant Secretary for Acquisition and Materiel Management to identify specific time frames and milestones for completing actions on the key NAVSUP recommendations, and establish a method to measure progress in implementing the recommendations. In commenting on a draft of this report, the Deputy Secretary of Veterans Affairs agreed with our conclusions and concurred with our recommendations. VA's written comments are included in appendix III. We will send copies of this report to the Honorable R. James Nicholson, Secretary of Veterans Affairs; appropriate congressional committees; and other interested parties. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Blake Ainsworth, Penny Berrier, William Bricking, Myra Watts Butler, Christina Cromley, Lisa Simon, Shannon Simpson, and Bob Swierczek. In September 2004, the Naval Supply Systems Command (NAVSUP) issued a report, Procurement Performance Management Assessment Program, on its review of the Department of Veterans Affairs, Office of Acquisition and Materiel Management, Acquisition Operations Service.
The 24 recommendations contained in the NAVSUP report are listed in table 2 below. The first seven recommendations listed are the key recommendations we identified. To select the key recommendations from those identified in the NAVSUP September 2004 report, we focused on recommendations that, if successfully implemented, are likely to have the broadest and most significant impact on the Department of Veterans Affairs' (VA) operations. We chose recommendations that are crosscutting in nature. Accordingly, many recommendations we did not identify as key are nevertheless, we believe, covered to some extent by one or more of the key recommendations. In making our selections, we relied primarily on our professional judgment and the experience gained over many years in reviews of acquisition management issues governmentwide. In particular, we relied on the observations and guidance captured in a draft of a GAO report entitled Framework for Assessing the Acquisition Function at Federal Agencies. With this insight, we determined that 7 of the 24 NAVSUP recommendations were key. To identify the progress VA has made in implementing these seven key NAVSUP recommendations, we met with acquisition officials at VA's Office of Acquisition and Materiel Management (OA&MM). We also reviewed documents intended to demonstrate the status of VA's actions. In order to attain a broader view of VA acquisition issues, we identified and reviewed other VA and independent reports issued prior to the NAVSUP report. These included VA's Procurement Reform Task Force report (May 2002), which recommended ways to improve procurement practices across VA, and reports by the VA Inspector General (August 2004) and Carnegie Mellon (June 2004) that noted contract management problems on a VA contract for the Core Financial and Logistics System (CoreFLS). We reviewed past and current policies, procedures, and internal controls associated with VA acquisition processes. We obtained statistics from OA&MM on the authorized size of the VA Acquisition Operations Service (AOS) contracting workforce and positions that still need to be filled. We obtained data from the Federal Procurement Data System on what VA spent during fiscal year 2004 for products and services. Further, we obtained data from VA on the amount of contract dollars being managed by VA's Office of Acquisitions as of June 2005. We did not conduct an independent assessment of the state of the acquisition function at VA. We conducted our work from March to August 2005 in accordance with generally accepted government auditing standards.
The Department of Veterans Affairs (VA) is among the largest federal acquisition agencies, spending $7.3 billion on product and service acquisitions in 2004 alone. Recent reports by VA and other organizations identified weaknesses in the agency's acquisition function that could result in excess costs to the taxpayer. One report by the Naval Supply Systems Command (NAVSUP) made 24 recommendations to improve VA's acquisition function. VA has accepted these recommendations. GAO was asked to review the progress VA has made in implementing the key NAVSUP recommendations. GAO identified 7 of the 24 recommendations as key, based primarily on its professional judgment and prior experience. Progress made by the Department of Veterans Affairs in implementing the key recommendations from the NAVSUP report has been limited. In fact, a year after the report was issued, VA has not completed actions on any of the seven key recommendations GAO identified. While VA agrees implementation of the key recommendations is necessary, the steps it has taken range from no action to partial action. No action has been taken on three key recommendations: to develop a long-term improvement plan, adequate management metrics, and a supplement to the agency's strategic plan. No more than partial action has been taken on four others: establishment of a contract review board for reviewing files at key milestones along with improvement of postaward contract management, customer relationships, and employee morale. A lack of permanent leadership in key positions has contributed to the lack of further progress in revising acquisition policies, procedures, and management and oversight practices, according to VA officials. For example, two key VA acquisitions management positions were unfilled--one for 15 months and the other for 25 months. In addition, VA has neither set time frames for completing actions on the NAVSUP recommendations nor established a method to measure progress. Until VA establishes a process for completing action on the NAVSUP recommendations, the benefits of the study may not be fully realized.
As figure 1 shows, services spending for the federal government has accounted for over half the annual procurement spending since fiscal year 2008. In fiscal year 2012, the federal government obligated about $307 billion to acquire services. Table 1 lists the top services purchased by federal agencies in fiscal year 2012, which range in complexity from defense research and development to housekeeping. We previously reported that agencies have had difficulty managing services acquisitions and have purchased services inefficiently, which places them at risk of paying more than necessary. These inefficiencies can be attributed to several factors. First, agencies have had difficulty defining requirements for services, such as developing clear statements of work, which can reduce the government's risk of paying for more services than needed. Second, agencies have not always leveraged knowledge of contractor costs when selecting contract types, including time-and-materials contracts, performance-based contracts, and undefinitized contracts. Third, agencies have missed opportunities to increase competition for services due to overly restrictive and complex requirements; a lack of access to proprietary, technical data; and supplier preferences. Agencies purchase services under the Federal Acquisition Regulation, which places some constraints on how contracts are competed and awarded. Generally, agencies are statutorily required to award contracts using full and open competition, unless an exception applies. Additionally, agencies are subject to certain requirements when awarding contracts, such as meeting the Small Business Administration's annual statutory goals to make awards to various kinds of small businesses. GAO has been assessing strategic sourcing and the potential value of applying these techniques to federal acquisitions for more than a decade. In 2002, GAO reported that leading companies of that time committed to a strategic approach to acquiring services—a process that moves a company away from numerous individual procurements to a broader aggregate approach—including developing knowledge of how much they were spending on services and taking an enterprise-wide approach to services acquisition. As a result, companies made structural changes with top leadership support, such as establishing commodity managers—responsible for purchasing services within a category—and were better able to leverage their buying power to achieve substantial savings. We have emphasized the importance of comprehensive spend analysis for efficient procurement since 2002. Spend analysis provides knowledge about how much is being spent for goods and services, who the buyers are, who the suppliers are, and where the opportunities are to save money and improve performance. In 2005, the Office of Management and Budget (OMB) defined strategic sourcing as a structured process based on spend analysis to make business decisions about acquiring commodities and services more efficiently and effectively. In 2007, GAO reviewed the Department of Defense's (DOD) processes for acquiring services and found that DOD could take further action to improve its strategic sourcing. GAO reported that the department's approach had tended to be reactive and did not fully address key factors for success at either the strategic (organization-wide) or transactional (individual services transaction) level.
GAO recommended that DOD take a proactive approach to managing service acquisitions at both the strategic and transactional levels, including communicating how individual transactions can be made to support strategic goals. Key factors at the transactional level included clearly defined requirements, appropriate contracting vehicles, and effective contractor oversight. The leading companies we studied used a strategic sourcing approach to achieve sustained annual savings, over prior year spending, of 4-15 percent in services over the last 5-7 years. This strategic approach drives companies to continually strive for savings, partly spurred by annual goals. To enable this process, companies rely on five foundational principles to build spending and market knowledge and gain situational awareness of their procurement environment. In the short term, companies use this knowledge to adjust their procurement tactics for different types of services depending on service complexity and the number of available suppliers in order to best achieve savings and efficiencies. This enables companies to target the full range of services they buy. In the long term, companies try to address their procurement constraints by reducing requirements complexity to commoditize services and developing new suppliers to increase competition. This allows companies to more aggressively leverage their buying power for all types of services. Each of the leading companies we reviewed used annual savings expectations to drive a corporate culture of savings. For example, Walmart's executive leadership sets annual savings goals for its services procurement division; in 2012, the goal was to save around $100 million, or about 8 percent, of the division's $1.2 billion budget. A Walmart official emphasized the importance of savings expectations and accountability, noting that the very act of establishing goals and metrics helps enable a culture of savings: "If you measure it, it will happen." Companies further translate savings expectations into individual performance goals to which executives and employees are held accountable. Savings expectations are allocated to each procurement team member, and the head of procurement regularly reviews progress. This metric-based accountability spurs companies' culture of savings but is not necessarily dictated by leadership. Dell establishes savings expectations and metrics in cooperation with procurement staff. These metrics are tied to individual performance contract goals to which staff are held accountable with quarterly reviews. Top performers are tapped to lead teams and manage critical procurement projects. At Delphi, savings targets are initiated by both procurement staff and leadership. The process continues iteratively until all parties agree to a final savings target, including consideration of areas where costs are expected to rise due to economic conditions. Additional incentives can drive this culture of savings; for example, one company allows business units to reinvest savings into their operations, primarily to fund innovation and strategic initiatives. Having such a culture is critical to companies' continual pursuit of savings and efficiencies. A Pfizer official described this culture as being part of the company's DNA: "Savings is in our DNA." Leading companies reported achieving a sustained savings rate of 4-15 percent annually on services procurement by strategically sourcing the full range of services they buy.
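Because companies measure each year's savings against the prior year's spending, sustained percentage targets compound over time. The short sketch below illustrates that arithmetic; the $1,000 million baseline is hypothetical, and the rates mirror the Dell figures reported in the next paragraph (23 percent in the first year, 10 percent sustained thereafter).

```python
# Minimal sketch (hypothetical baseline) of how sustained annual savings,
# measured against prior-year spending, compound over time. The 23/10
# percent rates mirror the Dell example discussed in this section.

def project_spending(baseline: float, annual_savings_rates: list) -> list:
    """Apply each year's savings rate to the prior year's spending."""
    spending = [baseline]
    for rate in annual_savings_rates:
        spending.append(spending[-1] * (1 - rate))
    return spending

if __name__ == "__main__":
    rates = [0.23, 0.10, 0.10, 0.10]  # year 1, then sustained savings
    for year, spend in enumerate(project_spending(1_000.0, rates)):
        print(f"Year {year}: ${spend:,.1f}M")  # e.g., Year 1: $770.0M
```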
Companies achieved these savings after increasing their focus on services acquisitions over the last 5-7 years. Companies reported using the same general procurement strategies and tactics for both goods and services. However, the impetus for an increased focus on services was leadership recognition that spending had significantly increased, resulting in additional dedicated resources to manage this area. Companies chose different paths to begin the process of improving the efficiency of services acquisition. For example, Pfizer conducted a spend analysis in 2007, which revealed increased spending on legal, consulting, and financial services, as well as opportunities for improving the efficiency of its processes. Boeing's Chief Executive Officer made tackling services spending a priority after noticing how much was being spent in this area compared to the company's spending on items directly affecting customers. This led Boeing to adopt a company-wide procurement model in 2006. Delphi centralized indirect procurement—products and services used in support of Delphi's operations—between 2005 and 2007 in order to improve efficiency and drive savings. Some companies reported achieving the greatest amount of savings in the initial years after prioritizing their procurement of services. For example, Dell reported achieving 23 percent savings in the first year after increasing its focus on improving services acquisition and has been able to sustain savings of 10 percent thereafter. Another company noted that it is typical to achieve larger savings at the beginning and that the savings percentage declines over time, to around 4-7 percent annually. Table 2 below highlights examples of the annual savings companies reported in 2012 and the main services they buy. Leading companies generally agreed that foundational principles—maintaining spend visibility, centralizing procurement, developing category strategies, focusing on total cost of ownership, and regularly reviewing strategies and tactics—are all important to achieving successful services acquisition outcomes. Taken together, these principles enable companies to better identify and share information on spending and increase market knowledge about suppliers to gain situational awareness of their procurement environment. This awareness positions companies to make more informed contracting decisions. Each company we spoke with had a history of struggling with fragmented information on spending, which made it difficult to spot inefficiencies or opportunities for consolidating purchases. For example, Humana's Chief Procurement Officer (CPO) conducted a year-long spend analysis effort, which revealed, among other things, cases where a supplier charged different rates to different departments for the same service. Dell emphasized the importance of actively monitoring spending trends in order to identify opportunities for savings. As one official put it, unmanaged spend is by definition inefficient: "Unmanaged spend equals inefficiency." To address these issues, companies maintain visibility into spending by integrating procurement and financial systems across the organization. For example, in 1999, Boeing upgraded its paper-based manual processes for procurement to a new, automated system, which enabled more efficient spend analysis. The new system also increased operational efficiency because it provided a common language and data set for the procurement staff.
To aid efficient spend analysis, Boeing defined a services taxonomy to allow analysis at the invoice line item level. Invoice line items are defined beforehand—called billing units—and are built into contract statements of work. Dell, Delphi, and Humana also reported using centralized databases that provide transparency into their global spend. Similarly, in 2012, Walmart started implementing a centralized database to increase spend visibility. In addition to leveraging knowledge about spending, leading companies centralize procurement decisions by aligning, prioritizing, and integrating procurement functions within the organization. The companies we spoke with overcame the challenge of having a decentralized approach to purchasing services, which had made it difficult to share knowledge internally or use consistent procurement tactics. Without a centralized procurement process, officials told us, companies ran the risk that different parts of the organization could be unwittingly buying the same item or service, thereby missing an opportunity to share knowledge of procurement tactics proven to reduce costs. Company officials noted that centralizing procurement does not necessarily refer to centralizing procurement activity, but to centralizing procurement knowledge. For example, Dell's procurement organization is centralized and utilizes a common tool, enabling cost data to be shared globally. Global Category Managers are expected to have a good understanding of all aspects of the services within their category. Clearly defined and communicated policies ensure users cannot engage with suppliers without procurement organization involvement. Similarly, Pfizer has "category teams" with a global reach for broad groupings of services, with category managers assigned to each category. The global category zone leads are managed by Vice-Presidents who report directly to the head of the procurement organization and work with internal business partners, who execute contracts on a company-wide basis or locally as needed. Boeing has a team of financial analysts that support the procurement function by conducting "should cost" analyses and providing supplier or service cost breakdowns to procurement agents. "Centralize the knowledge, not the activity." Company officials told us that the key to an effective centralized process is ensuring that services spending goes through approved contracts. A Walmart official referred to non-approved spending as "rogue buying." Companies focus on compliance in order to eliminate unapproved purchases. For example, Delphi aims for 95 percent of its sourcing to adhere to pre-approved category strategies. The company uses an internally developed database to manage all sourcing initiatives from concept to business case to approval by the procurement and financial organizations. Pfizer has a policy that procurement transactions over $100,000 must be competitively bid with limited, documented exceptions. "You must eliminate rogue buying." Companies develop category-specific procurement strategies with stakeholder buy-in in order to use the most effective sourcing strategies for each category.
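The billing-unit approach described above lends itself to simple automated checks. The sketch below is a minimal illustration, not Boeing's actual taxonomy: the billing units, rates, and field names are hypothetical, and the idea is that an invoice line item is accepted only if it references a billing unit predefined in the contract's statement of work.

```python
# Minimal sketch, assuming a simple billing-unit taxonomy (hypothetical
# names and rates): line items are validated against units and rates that
# were predefined in the statement of work, enabling line-item analysis.

from dataclasses import dataclass

# Billing units predefined in the statement of work: unit -> agreed rate
CONTRACT_BILLING_UNITS = {
    "JANITORIAL_HOUR": 28.50,
    "HVAC_INSPECTION": 410.00,
}

@dataclass
class InvoiceLineItem:
    billing_unit: str
    quantity: float
    billed_rate: float

def validate_line_item(item: InvoiceLineItem) -> list:
    """Return a list of problems found with one invoice line item."""
    problems = []
    agreed = CONTRACT_BILLING_UNITS.get(item.billing_unit)
    if agreed is None:
        problems.append(f"unknown billing unit: {item.billing_unit}")
    elif item.billed_rate > agreed:
        problems.append(
            f"billed rate {item.billed_rate} exceeds agreed rate {agreed}")
    return problems

print(validate_line_item(InvoiceLineItem("JANITORIAL_HOUR", 40, 31.00)))
# -> ['billed rate 31.0 exceeds agreed rate 28.5']
```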
Category-specific procurement strategies describe the most cost-effective sourcing vehicles and supplier selection criteria to be used for each category of service, depending on factors such as current and projected requirements, volume, cyclicality of demand, risk, the services that the market is able to provide, supplier base competition trends, the company's relative buying power, and market price trends. For example, Dell's Global Category Managers oversee teams that develop detailed sourcing strategies for each commodity. The major components of the sourcing strategies are (1) internal analyses, which include spend analysis, stakeholder analysis, business requirements, and cost modeling; and (2) external analyses, which include market research and supply market analysis. Company officials told us that category strategies help them conduct their sourcing according to a proactive strategic plan and not just on a reactive, contract-by-contract basis. One company's CPO referred to the latter as a "three bids and a buy" mentality that can be very narrowly focused and result in missed opportunities such as not leveraging purchases across the enterprise or making decisions based only on short-term requirements. For this reason, Boeing sometimes chooses to execute a short-term contract to buy time if market research shows a more competitive deal can be obtained later. "You cannot just go with a 'three bids and a buy' contracting approach." Category strategies also help companies choose sourcing tactics appropriate to their circumstances. For example, as one company noted, in one category it may be very beneficial to conduct competitive bidding via online reverse auctions, while in another category it may be wise to forego any competitive bidding and extend and lock in pricing based on market dynamics. In another instance, Delphi has global, company-wide strategies for travel and information technology services in order to leverage purchases globally, but region-based strategies for services such as facilities management, which are used by individual Delphi facilities. Companies develop strategies that identify the choice of sourcing vehicles and supplier selection criteria only after extensive consultation with internal users. This consultation helps procurement staff better understand user requirements as well as obtain their buy-in. According to one company CPO, user buy-in is critical; otherwise users may think that a desire to reduce cost is the only factor driving the choice of sourcing tactics. Risk is an important consideration when developing category strategies and setting priorities. Dell considers factors such as data privacy and security, financial stability, continuity of supply, and geographic economic conditions to ensure the proper considerations and protections are in place prior to finalizing supplier selection decisions. Routine services such as store and parking lot maintenance are critical to Walmart's retail mission, demanding a high level of attention. Delphi and Boeing have a policy of minimizing the risk of transitioning to new suppliers. For this reason, Boeing retains some staff with subject matter expertise to oversee contracts in each category and know how the supplier is meeting those requirements. This ensures flexibility for Boeing in case the company chooses to change suppliers in the future. Companies focus on total cost of ownership—making a holistic purchase decision by considering factors other than price.
At the strategic—or higher—level, managing internal demand is an important element of reducing total cost of ownership. For example, Humana closely examines services requirements in order to prevent unnecessary spending on services the company does not absolutely need. Boeing considers internal costs, such as the administrative cost per transaction or purchase order, to determine price and efficiency trade-offs. Dell considers factors such as risk to the company's mission, innovation, operational performance, and demand management. Dell and Delphi examine suppliers' management models for maturity, including how well they manage and train staff and use appropriate cost management tools. In fact, a Dell official said that the quality of a service is largely determined by the quality of the supplier's management structure: "When purchasing a service, you are essentially paying for the quality of suppliers' management processes." At the transactional—or lower—level, non-price factors can be important inputs into decision making. For example, while Walmart may often award a contract to the lowest bidder, it takes other considerations into account—such as average invoice price, time spent on location, average time to complete a task, supplier diversity, and sustainability—when awarding contracts. Humana is developing internal rate cards for consulting services that would help the company evaluate contractors' labor rates based on their skill level. Pfizer's procurement organization monitors compliance with company processes and billing guidelines. The company considers its procurement professionals essentially risk managers rather than contract managers because they need to consider what is best for the company and how to minimize total cost of ownership while maintaining flexibility. Companies regularly review strategies and tactics to adapt to market trends. This provides room for flexibility in managing suppliers—something companies we spoke with valued. Walmart officials emphasized the importance of frequently reviewing tactics in order to identify new opportunities for savings. For this reason, Walmart constantly evaluates new ways to invite bids and new types of pricing tiers by which to lower prices, such as by state, region, or volume. Delphi's strategies are formally reviewed and documented annually by Delphi's strategic council, composed of senior company executives. These reviews may result in changing tactics or suppliers according to predetermined goals. For example, for a particular category, Delphi may not want to represent more than a certain percentage of any supplier's revenue in order to minimize the risk that the supplier may be overly dependent on Delphi for long-term viability. If the reviews highlight cases where the limit is exceeded, Delphi examines ways to bring in an additional supplier. "You must continually stay ahead of suppliers or they will figure you out." Similarly, Dell regularly assesses whether to "make or buy" services, conducting objective evaluations of internal capability versus that of external providers. In some instances, although Dell may have the capability in-house, it may not have resources available at that point in time and may therefore elect to purchase that particular service. In order to retain flexibility to adapt to market trends, companies view long-term contracts (generally over 3 years) as risky.
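One check a periodic strategy review like Delphi's could automate is the supplier-dependence limit described above. The sketch below is a hypothetical illustration: the 25 percent ceiling and the revenue and spend figures are invented, and the logic simply flags suppliers for whom the buyer represents too large a share of revenue.

```python
# Minimal sketch of a supplier-dependence check along the lines of the
# Delphi review described above. The 25 percent ceiling and all figures
# are hypothetical, not actual company data.

supplier_revenue = {"acme_logistics": 40_000_000, "globex_facilities": 12_000_000}
our_spend = {"acme_logistics": 6_000_000, "globex_facilities": 4_500_000}

MAX_REVENUE_SHARE = 0.25  # hypothetical ceiling on supplier dependence

for supplier, revenue in supplier_revenue.items():
    share = our_spend[supplier] / revenue
    if share > MAX_REVENUE_SHARE:
        print(f"{supplier}: {share:.0%} of revenue -- consider a second source")
```

Contract length and structure are further levers companies use to preserve this kind of flexibility.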
For example, Pfizer will examine market conditions and unbundle contracts—use separate contracts for multiple services—for greater transparency and to bring more suppliers into the mix; later on, Pfizer might bundle contracts to gain leverage as part of the strategy. Similarly, Delphi prefers contract lengths of under 3 to 4 years because of the difficulty of predicting the future price trends of key cost components—for example, fuel, which is a significant cost component of services involving travel. By following the foundational principles to improve knowledge about their procurement environment, companies are well positioned to choose procurement tactics tailored to each service. While companies emphasize the importance of observing the principles, including category strategies, they do not take a one-size-fits-all approach to individual service purchase decisions. Two factors—the degree of complexity of the service and the number of available suppliers—determine the choice of one of four general categories of procurement tactics appropriate for that service: leveraging scale, standardizing requirements, prequalifying suppliers, and understanding cost drivers. Figure 2 below shows how the two factors help companies categorize different services and select appropriate tactics. Complexity is defined as the relative difficulty of defining performance requirements and varies for different types of services. Less complex services—referred to as commodity services—are those where requirements are relatively easy to define and performance more clearly measured; for example, housekeeping, telecommunications, and maintenance services. More complex services—referred to as knowledge-based services—are those where requirements are more complex, performance is more difficult to measure, and service provider staff skill levels are paramount; for example, research and development, engineering and management support, and legal services. The number of suppliers that can fulfill a service varies depending on market conditions and whether specialized skills and knowledge are required. Based on our discussions with companies, table 3 shows how different services may be categorized according to these two factors. For illustration purposes, the table shows the two factors at the extremes of their range of possibilities. Companies we reviewed are not content to remain limited by their environment; over the long term, they generally seek to reduce the complexity of requirements and bring additional suppliers into the mix in order to commoditize services and leverage competition. This dynamic, strategic approach has helped companies demonstrate annual, sustained savings. Companies generally aim to commoditize services over the long term as much as possible because, according to them, the level of complexity directly correlates with cost. Companies also aim to increase competition, whether by developing new suppliers or by reducing requirements complexity, which could allow more suppliers to compete. In doing so, companies can leverage scale and competition to lower costs. "Complexity drives cost." Figure 3 below depicts most companies' overall goal of commoditizing services over the long term, represented by moving services to the lower-left quadrant of the transactional framework shown earlier. The two factors—complexity and supplier availability—influence what tactics are best suited to each quadrant of services, as shown in table 4 below.
For commodity services with many suppliers, such as administrative support, facilities maintenance, and housekeeping, companies generally focus on leveraging scale and competition to lower cost. The figure on the left shows the companies' transactional framework discussed earlier and highlights the quadrant represented by commodity services that are served by many suppliers. Typical tactics applicable to this quadrant of services include consolidating purchases across the organization; using fixed price contracts; developing procurement catalogs with pre-negotiated prices for some services; and varying bidding parameters such as volume and scale in order to find new ways to reduce costs. For example, Walmart continually lowers costs on store maintenance services such as parking lot maintenance by inviting bids on a regional or national basis. Bidders are required to submit quotes based on a variety of options that are thoroughly discussed ahead of time, such as the number of stores or regions and contract length. This helps Walmart identify new contract parameters by which to reduce costs. Boeing has begun developing procurement catalogs for commonly acquired routine and low-dollar services. The catalogs list approved suppliers and negotiated prices to allow users to directly execute contracts up to a certain amount. For commodity services with few suppliers, such as specialized logistics and utilities, companies focus on standardizing requirements. Typical tactics applicable to this quadrant of services include paring back requirements in order to bring them more in line with standard industry offerings, and developing new suppliers to maintain a competitive industrial base. For example, Walmart holds pre-bid conferences with suppliers such as those supplying store security for "Black Friday"—the major shopping event on the day after Thanksgiving—to discuss requirements and what suppliers can provide. Delphi makes an effort to maintain a competitive industrial base by dual-sourcing certain services in order to minimize future risk—a cost trade-off. For knowledge-based services with many suppliers, such as information technology, legal, and financial services, companies prequalify and prioritize suppliers to highlight the most competent and reasonable suppliers. Typical tactics applicable to this quadrant of services include prequalifying suppliers by skill level and labor hour rates, and tracking supplier performance over time in order to inform companies' prioritization of suppliers based on efficiency. For example, the Pfizer Legal Alliance was created to channel the majority of legal services to pre-selected firms. Delphi only awards contracts to companies on its Category Approved Supplier List. The list is approved by Delphi leadership and is reviewed annually. For knowledge-based services with few suppliers, such as engineering and management support and research and development services, companies aim to maximize value by better understanding and negotiating individual components that drive cost. Typical tactics applicable to this quadrant of services include negotiating better rates on the cost drivers for a given service; closely monitoring supplier performance against pre-defined standards; benchmarking supplier rates against industry averages in order to identify excess costs; and improving collaboration with suppliers.
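The four quadrants just described amount to a simple decision rule. The sketch below encodes that mapping; the boolean complexity flag and the five-supplier threshold are hypothetical simplifications of the framework in figure 2, not values from any company.

```python
# Minimal sketch of the two-factor transactional framework described above.
# The complexity flag and supplier-count threshold are hypothetical inputs;
# the quadrant-to-tactics mapping follows the four categories in the text.

def select_tactics(knowledge_based: bool, num_suppliers: int,
                   many_threshold: int = 5) -> str:
    """Map a service's complexity and supplier availability to the
    general category of procurement tactics described in the report."""
    many = num_suppliers >= many_threshold
    if not knowledge_based and many:
        return "leverage scale and competition"
    if not knowledge_based:
        return "standardize requirements"
    if many:
        return "prequalify and prioritize suppliers"
    return "understand and negotiate cost drivers"

print(select_tactics(knowledge_based=False, num_suppliers=12))  # housekeeping
print(select_tactics(knowledge_based=True, num_suppliers=2))    # R&D support
```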
Some companies leverage their knowledge of cost drivers in order to use time-and-materials contracts—a contract type that we have reported as high risk, mainly because of inadequate oversight—since that knowledge allows them to negotiate individual rates. For example, Dell's forensic costing process breaks down service costs to the smallest component—for example, labor rates and even raw materials such as fuel. Cost knowledge is shared throughout Dell's procurement organization, providing an advantage in negotiating contracts. Boeing uses benchmark clauses in some contracts, requiring that supplier rates be within a specified percentage of the benchmarked average as determined by third-party research firms such as Gartner Group. To improve collaboration with suppliers, Pfizer aims to build a single global account management team in order to have one point of contact globally that can solve issues and manage the Pfizer relationship holistically. Federal agencies have opportunities to leverage leading companies' practices for purchasing services in order to lower costs and maximize the value of the services they buy. In our September 2012 report on strategic sourcing, we found that most of the agencies we reviewed leveraged only a fraction of their buying power. Specifically, we found that four agencies—DOD, the Department of Homeland Security (DHS), the Department of Energy (Energy), and the Department of Veterans Affairs (VA)—accounted for 80 percent of federal procurement spending in fiscal year 2011 but managed only 5 percent, or $25.8 billion, of the $537 billion spent on federal procurement through strategic sourcing contracts. Their strategic sourcing efforts resulted in $1.8 billion in savings. When strategic sourcing contracts were used, selected federal agencies generally reported achieving savings between 5 and 20 percent. However, we reported that many agencies did not address the categories that represented their highest spending, the majority of which exceeded $1 billion and most of which were services. Agencies also continued to face challenges in obtaining and analyzing reliable and detailed data on spending, securing leadership support, and acquiring services through strategic sourcing. Adoption of leading companies' practices could help agencies increase the portion and types of services they strategically source. For example, leading company practices show how agencies could adopt tailored tactics to better target services that have been considered too difficult to strategically source, such as professional services. Moreover, leading companies have saved between 4 and 15 percent annually—over prior year spending—on services using these practices. A savings rate of 4 percent applied to the $307 billion spent by federal agencies on services in fiscal year 2012 would equate to about $12 billion in savings. In December 2012, OMB directed agencies to take actions to better coordinate and gain more visibility into spending to overcome these challenges. In September 2012, GAO reported that many large procurement agencies were in the early stages of implementing strategic sourcing and had achieved limited results. For example, in fiscal year 2011, DOD, DHS, Energy, and VA accounted for 80 percent of the $537 billion in federal procurement spending but reported managing only about 5 percent of that spending, or $25.8 billion, through strategic sourcing efforts. These agencies reported savings of $1.8 billion—less than one-half of 1 percent of federal procurement spending.
Further, most of these agencies' strategic sourcing efforts did not address their highest spending areas—including services—which may have provided opportunities for additional savings. For example, we reported that VA had efforts underway to address only 3 of its top 10 spending categories as of September 2012. As discussed later in this report, we recommended that selected agencies identify strategic sourcing opportunities for their highest spending categories, and agencies concurred. By contrast, DHS reported that nearly 20 percent of its fiscal year 2011 procurement spending was directed through strategically sourced contracts, which included the majority of its top ten products and services. While strategic sourcing may not be suitable for all procurements, industry groups have reported that leading companies they surveyed strategically manage about 90 percent of their procurements. Moreover, officials from leading companies we spoke with reported that their annual savings for services are between 4 and 15 percent (GAO-12-919). The Federal Strategic Sourcing Initiative (FSSI) mission is to encourage agencies to aggregate requirements, streamline processes, and coordinate purchases of like products and services in order to leverage spending to the maximum extent possible. Additionally, the Navy reported spending $145 million and achieving savings of $30 million through its strategic sourcing efforts in fiscal year 2011; the reported savings were almost 21 percent of the spending that went through strategic sourcing vehicles. Agencies also continued to face challenges in obtaining and analyzing reliable and detailed data on spending, securing leadership support for strategic sourcing, and applying this approach to acquiring services. In 2012, we reviewed the use of strategic sourcing across agencies with the largest procurement budgets in fiscal year 2011 and found that they were reluctant to apply strategic sourcing techniques to services, especially more complex ones. Additionally, these agencies did not sufficiently support strategic sourcing efforts with staff and other resources. These challenges make it difficult for agencies to identify opportunities for strategic sourcing or measure the success of ongoing initiatives. In our strategic sourcing report, we found that agencies and federal strategic sourcing programs generally continued to rely on the government's current system for tracking contracting information and noted numerous deficiencies with this data for the purposes of conducting strategic sourcing research. Conducting spend analysis to obtain knowledge of procurement spending is a foundational component of an effective strategic approach. The analysis reveals how much is spent each year, what was bought, from whom it was bought, and who was purchasing it. The analysis also identifies where numerous suppliers are providing similar goods and services—often at varying prices—and where purchasing costs can be reduced and performance improved by better leveraging buying power and streamlining the number of suppliers to meet needs. For example, in a report on the use of strategic sourcing for office supplies, we reported that the General Services Administration (GSA) estimated federal agencies spent about $1.6 billion during fiscal year 2009 purchasing office supplies from more than 239,000 vendors. GSA used available data on spending to support development of the Office Supplies Second Generation FSSI, which focuses office supply spending on 15 strategically sourced contracts.
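A minimal sketch of the kind of spend analysis described above, run over hypothetical transaction records (the suppliers, categories, and amounts are invented): it totals spend by service category and flags cases where one supplier charges different rates for the same service, as in the Humana example earlier in this report.

```python
# Minimal spend-analysis sketch over hypothetical transaction records:
# totals spend per service category and flags rate variance where one
# supplier charges different buying units different rates.

from collections import defaultdict

transactions = [
    # (buying_unit, supplier, service_category, rate, amount)
    ("claims_dept", "acme_services", "janitorial", 28.0, 14_000),
    ("it_dept",     "acme_services", "janitorial", 34.0, 11_900),
    ("it_dept",     "globex_legal",  "legal",      310.0, 93_000),
]

spend_by_category = defaultdict(float)
rates_by_supplier_service = defaultdict(set)

for unit, supplier, category, rate, amount in transactions:
    spend_by_category[category] += amount
    rates_by_supplier_service[(supplier, category)].add(rate)

print(dict(spend_by_category))
for (supplier, category), rates in rates_by_supplier_service.items():
    if len(rates) > 1:  # same supplier, same service, different rates
        print(f"rate variance for {supplier}/{category}: {sorted(rates)}")
```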
Agencies and the federal strategic sourcing program generally continued to rely on the government's current system for tracking contracting information, the Federal Procurement Data System–Next Generation (FPDS-NG), and noted numerous deficiencies with this data for the purposes of conducting strategic sourcing research. Although we noted that some agencies had been able to identify some strategic sourcing opportunities despite flaws in the available data, the difficulty of obtaining reliable and detailed data on spending hindered their ability to assess which strategic sourcing opportunities offered the most potential benefits. Additionally, we have made recommendations in the past to improve government-wide contracting data systems, such as electronic submission of data and greater controls to help improve the accuracy and completeness of FPDS-NG. Agencies generally concurred with these recommendations and have taken actions to improve the system. We reported in our strategic sourcing report that most of the agencies we reviewed were challenged by a lack of leadership commitment to strategic sourcing, though improvements were under way. Leading companies we previously spoke with stated that the support and commitment of senior management are essential to facilitating companies' efforts to re-engineer their approaches to acquisitions as well as to ensuring follow-through with the strategic sourcing approach. However, we have found that leaders at some agencies were not dedicating the resources and providing the incentives necessary to build a strong foundation for strategic sourcing. In addition, a lack of clear guidance on metrics for measuring success had also impacted the management of ongoing FSSI efforts as well as most selected agencies' efforts. For example, we found that agencies were challenged to produce utilization rates and other metrics—such as spending through strategic sourcing contracts and savings achieved—that could be used to monitor progress. Several agencies also mentioned a need for sustained leadership support and additional resources in order to more effectively monitor their ongoing initiatives. We recommended that the Secretaries of Defense and VA evaluate whether there are sufficient resources to fulfill strategic sourcing missions and develop metrics; the agencies concurred. Additionally, as we previously reported, agency officials noted that they have been reluctant to strategically source services (as opposed to goods) for a variety of reasons, such as difficulty in standardizing requirements or a decision to focus on less complex commodities that can demonstrate success. Agency officials also cited several disincentives that can discourage strategic sourcing efforts, such as a perception that reporting savings due to strategic sourcing could lead to program budgets being cut in subsequent years. In contrast, leading companies stated they have focused their efforts on services, such as telecommunications and information technology services, over the past 5-7 years because of the growth in spending in that area, and have achieved significant savings. Leading companies employ more sophisticated strategic sourcing techniques, using spend analyses and in-depth market research to tailor their acquisition approaches to the complexity and availability of the particular good or service they are acquiring.
An industry group surveyed companies and reported that companies are able to strategically buy the majority of their procurements, including services, in part because they targeted services that have been off-limits or controversial for most organizations, such as professional services. Professional services represented the federal government's highest-spend service category and accounted for almost $50 billion of the federal procurement obligations in fiscal year 2012. For complex services, such as professional services, engineering, and research and development, agencies could apply company tactics to understand cost drivers and prequalify suppliers. Specifically, agencies could address knowledge-based services by using third parties to benchmark supplier rates against comparable suppliers to ensure the best price, develop new suppliers, and prioritize suppliers based on effectiveness and efficiency in order to ensure they are getting the best value. For less complex services, such as housekeeping and telecommunications, agencies could consolidate purchases to leverage buying power. Standardizing requirements could also help drive down costs. Leading companies reported that they applied this type of tactic for specialized maintenance and repair, specialized logistics, utilities, and certain types of security. Officials from leading companies also stated that there is not one right path for developing a strategic approach. However, as we previously discussed, leading companies' foundational principles show that leveraging knowledge, developing services category strategies, and measuring success based on reducing costs and maximizing value are necessary steps. For example, leading companies reported beginning with different principles as they adopted a more strategic approach for purchasing services. Some began by conducting a spend analysis, while others began by implementing an enterprise-wide centralized procurement approach and setting savings goals. While their first steps may vary, agencies could gather enough knowledge to allow them to tailor their tactics to different types of services in order to achieve savings and maximize value. We have recommended that selected agencies and OMB take actions to increase the use of strategic sourcing. For example, in our 2012 strategic sourcing report, we recommended that the Secretaries of Defense and VA, and the Director of OMB take a series of detailed steps to improve strategic sourcing efforts. More specifically, we recommended that the Secretary of Defense evaluate the need for additional guidance, resources, and strategies, and focus on DOD's highest spending categories; that the Secretary of VA evaluate strategic sourcing opportunities, including opportunities for VA's highest spending categories, set goals, and establish metrics; and that the Director of OMB issue updated government-wide guidance on calculating savings, establish metrics to measure progress towards goals, and identify spending categories most suitable for strategic sourcing. In commenting on the 2012 strategic sourcing report, DOD, VA, and OMB concurred with the recommendations and stated that they would take action to adopt them. In 2012, as part of establishing crosscutting goals to improve management across the federal government, OMB called for agencies to strategically source at least two new products or services in both 2013 and 2014 that yield at least 10 percent savings.
In December 2012, OMB further directed certain agencies to reinforce senior leadership commitment by designating an official responsible for coordinating the agency’s strategic sourcing activities. In addition, OMB identified agencies that should take a leadership role on strategic sourcing. OMB called upon these agencies to lead government-wide strategic sourcing efforts by taking steps such as recommending management strategies for specific goods and services to ensure that the federal government receives the most favorable offer possible. Additionally, OMB directed these agencies to promote strategic sourcing practices inside their agencies by taking actions including collecting data on procurement spending. The memo also asks GSA to increase the transparency of prices paid for services that other agencies buy in order to inform market research and contract negotiations. Taken together, these actions should improve the federal government’s access to detailed pricing information and visibility into spending. Improved visibility may also help the federal government better measure the success of its strategic sourcing initiatives. While it is too early to tell whether OMB’s actions will result in future savings, this initiative is a step in the right direction. (See OMB, Memorandum M-13-02, Improving Acquisition through Strategic Sourcing (Washington, D.C.: Dec. 5, 2012).) As discussed above, leading companies have devised strategies and tactics to manage sophisticated services. In addressing these categories, companies have shown that savings in service procurements accrue across a wide base of spending. Also, such results need not require the creation of monolithic procurement organizations; they can be achieved with leadership, shared data, and a focus on strategic categories that is dynamic rather than static. Clearly, the cost culture endemic to leading commercial practices is tied to the private sector’s focus on profits. In federal agencies, profit is not a motivator. And there are disincentives to identifying and pursuing new strategic sourcing opportunities, such as the perception that doing so could lead to unanticipated budget cuts. This could help explain why federal agency efforts to manage the purchase of services strategically are limited to small, commodity-like segments of spending. Similarly, agency tactics tend to be slow-moving and static once put in place. As budgets decline, however, it is important that the cost culture in federal agencies change. The simple dynamic is that adopting leading commercial practices can enable agencies to provide more service for the same budget or the same service with a smaller budget. Because this report focuses on leading company practices rather than agency operations, we provided relevant sections of a draft of this report to the leading companies we interviewed. They generally agreed with our findings and provided technical comments, which were incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees; the Secretary of Defense; the Administrator of GSA; the Administrator of the Office of Federal Procurement Policy; and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or chaplainc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III. We were asked to identify practices used by large commercial organizations for purchasing services. Accordingly, we (1) assessed key practices used by leading companies in purchasing services, and (2) examined potential opportunities for federal agencies to incorporate these practices. To determine leading companies’ practices for acquiring services, we selected a nongeneralizable sample of companies based on a literature search and recommendations from experts. We conducted a literature search of industry-recognized companies that have had success with services acquisition practices, including reviewing our prior leading practices reports on services acquisition. We also met with the Defense Business Board, Defense Science Board, and industry experts to discuss their recent studies on services acquisition and to obtain recommendations on which leading companies to contact. Based on this approach, we identified and interviewed the following organizations:
Seven companies: Boeing, Dell, Delphi, Humana, MasterCard, Pfizer, and Walmart.
An industry group: Institute for Supply Management.
A consulting organization: A.T. Kearney.
Based on interviews with these organizations, we identified key practices reported by each company, including procurement organization structures, services procurement history and strategies, initiatives and resultant savings, and contracting methods. To maximize the applicability of our findings to the federal government, we identified top categories of services that the government acquires from the Federal Procurement Data System–Next Generation (FPDS-NG), the government’s system for tracking contracting information, and interviewed companies about their practices in those categories. We compared companies’ procurement practices with those identified in our prior work. We identified common themes, including a transactional framework depicting our analysis of how companies tailor their procurement tactics, and confirmed these themes with the companies. To assess the reliability of companies’ data on acquisition savings, we requested information on data quality control procedures and system safeguards from company officials. In addition, we provided relevant sections of a draft of this report to companies for review and comment. We determined that the data were sufficiently reliable for the purposes of this report. To identify opportunities for federal agencies to adopt leading company practices, we determined that agencies purchase services similar to those that the selected leading companies purchase. Specifically, to compare purchased services, we identified the top services leading companies purchase through interviews and reviewed FPDS-NG data from fiscal years 2010 and 2012 to identify the top ten services purchased by the federal government. To assess the reliability of FPDS-NG, we reviewed existing documentation and electronically tested the data to identify obvious problems with completeness or accuracy. We determined that these data were sufficiently reliable for the purpose of reporting government-wide and agency spending on products and services. Additionally, to determine the federal government’s spending trend on services since fiscal year 2000, we relied on information we previously reported as well as FPDS-NG data between fiscal years 2008 and 2012.
We reported then-year dollars for this analysis. To determine the extent to which the government plans to target its highest-spend service categories, we reviewed Office of Management and Budget strategic sourcing initiatives, but did not assess the results of these initiatives. We also reviewed our previous reports related to federal strategic sourcing, acquisition, contract management, government streamlining, and duplication, overlap, and fragmentation to identify (1) agency efforts to establish a strategic approach that reflected leading companies’ foundational principles; (2) procurement tactics that agencies have used to purchase a variety of services; and (3) challenges that agencies face when establishing a strategic approach. We also reviewed the Defense Business Board 2011 Report to the Secretary of Defense on Strategic Sourcing, as well as literature from industry sources on successful strategic sourcing efforts. We conducted this performance audit from December 2011 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Procurement Organization (Shared Services Group)
Boeing’s centralized Shared Services Group is responsible for the global purchase of non-production goods and services. It is organized into Strategic Contracting and Category Management, Procurement Operations, and Procurement Support and Integration components.

Beginning of Focus on Improvement to Services Procurement
Boeing began implementing its current enterprise procurement model in 2006, with its Chief Executive Officer (CEO) subsequently making non-production procurement efficiency a priority.

Visibility into Services Spend
The company has one system that provides visibility into its services spend, called the Shared Service Procurement/Payables Network. The quality and quantity of data provided by this system allow for improved spend analysis.

Overall Procurement Strategy
Boeing’s enterprise procurement model aggregates demand by standardizing requirements in order to obtain pricing power. The company is currently implementing a Strategic Contracting and Category Management strategy with a focus on reducing total cost of ownership. This strategy involves category planning, strategic sourcing, contract management, and supplier relationship management.

Examples of Procurement Tactics
Boeing uses a four-step process to ensure it is achieving the best value for each service: (1) define requirements; (2) define performance parameters; (3) define minimum acceptable performance standards; and (4) define service level agreements to measure success. Other tactics include standardizing requirements to reduce complexity, including benchmarking clauses in contracts to ensure competitive rates, and conducting business and market intelligence to help formulate future procurement strategies.

Metrics Used to Manage Services
Boeing manages supplier performance by establishing rating criteria with both the business partner and the supplier. An example of an internal metric is percentage goals for unit price reduction.

Procurement Organization (Worldwide Procurement)
Dell’s procurement operation is structured as a centralized model.
Dell has a Chief Procurement Officer (CPO). Cost knowledge is shared throughout Dell, with everyone having access to the same data source. Dell uses Global Category Managers (GCMs), who are responsible for knowing all the requirements, delivery needs, and contracting requirements for each service (or good). Under the GCMs are Regional GCMs, responsible for knowing local markets.

Visibility into Services Spend
Dell has a centralized database and reporting process.

Overall Procurement Strategy
Each commodity (good or service) is managed by a unique commodity team that is tasked with developing detailed sourcing strategies. The commodity team conducts research in both the marketplace and the supplier base for a given commodity. Technological issues are considered in order to understand any risks and deficiencies with regard to how the commodity can best be utilized for programs and projects. Procurement scenarios are analyzed so that teams can evaluate how suppliers react to Dell’s purchasing needs.

Examples of Procurement Tactics
The ability and will to invest time and effort in capability management (highly trained employees, a value creation process, and supplier relationship management) are the primary keys to procurement success. Teams continually aim to reduce costs by reducing complexity through reusing commodities for new products, bringing in outside expertise, and trusting commodity teams.

Metrics Used to Manage Services
Savings goals and metrics for success are determined from the bottom up and evaluated on a quarterly basis. The financial, operational, and organizational aspects of each commodity have their own set of performance metrics and means to measure improvement and success. This allows procurement officials to understand what drives success.

Procurement Organization (Global Supply Management)
Delphi’s procurement organization is centralized, with a Senior Vice President of Global Supply Management as the equivalent of a CPO. For indirect spend, each of Delphi’s four global regions has a director with category managers for its major spend areas: Corporate Services, Facilities Management, Information Technology, Industrial Supplies, Materials Management, and Machinery and Equipment.

Beginning of Focus on Improvement to Services Procurement
Delphi centralized indirect procurement between 2005 and 2007 in order to improve efficiency and drive savings.

Visibility into Services Spend
The company tracks spending through a central database integrated with its financial systems, and monitors its savings through a separate, self-developed information technology tool, the Indirect Material Cost Improvement Process.

Overall Procurement Strategy
Service categories are overseen by category team managers who utilize a three-level strategy for procurement: industry knowledge strategy, internal corporate strategy, and supply base strategy. These strategies are formally reviewed and documented annually. Decision criteria for determining service providers include price, total cost of ownership, company viability, company maturity, and management structure.

Examples of Procurement Tactics
Delphi maintains a Category Approved Supplier List for primary service suppliers, which is reviewed annually. Companies on this list provide goods and services in accordance with Delphi’s respective category strategies. The companies are approved by the Category Managers and Strategy Council of Delphi’s Supply Management Leadership.
Delphi will also dual-source certain services in order to minimize future risk.

Metrics Used to Manage Services
Cost, delivery, technology, quality, optimization of the supply base, and localization to the region are the primary metrics used to measure services. Other performance indicators can include on-time delivery, interruptions, and safety. Suppliers are measured weekly or monthly, depending on the service, and review meetings are held at least quarterly. Savings are centrally tracked and only counted after validation by the financial department.

Procurement Organization (Corporate Procurement)
Humana’s Corporate Procurement department is centralized with a CPO. Within Corporate Procurement, category managers coordinate with internal customers to clarify business requirements, engage with suppliers to satisfy business needs, and develop strategic plans across its six procurement categories. While all purchases associated with government-related contracts are managed through Corporate Procurement, there are two additional procurement teams managing certain non-government-related supplier purchases within Humana: Global Sourcing, focused on business process outsourcing; and Information Technology Strategic Vendor Management, dedicated to software, non-commodity hardware, and technical consulting.

Beginning of Focus on Improvement to Services Procurement
In 2004, Humana hired a new CPO tasked with centralizing the overall Corporate Procurement process. The CPO has since developed spend analytics and results and process measures, and worked toward value-driven annual goals and objectives. Category teams were introduced in 2004 but did not mature into the full team structure that exists today until 2007.

Visibility into Services Spend
Humana has an established centralized enterprise business suite, a fully integrated, comprehensive suite of business applications for the enterprise. These tools cover the procure-to-pay activity for supplier-related spend. Additionally, the team has an established centralized data source to capture key results and process measures.

Overall Procurement Strategy
Humana’s six-step procurement process is applied to the purchase of both goods and services. The procurement process is focused on obtaining best value for the enterprise through coordinated category planning and supplier relationship management. The six category teams formally review and refresh their category strategic plans at least annually, with updates through the course of the year in response to material business change.

Examples of Procurement Tactics
Humana conducts make-versus-buy analysis when determining the ideal source for services. If requirements cannot be met in-house, the company leverages purchases across the organization to create a fully informed view of requirements and a competitive, best value award. Humana relies on benchmarking studies, past performance data, and prior customer references. Based on this information and approach, Humana is progressing toward establishing internal rate cards for consulting services to assess the value of proposed hourly rates based on contractor skill level. Humana adjusts the number of suppliers as needed and steadily reduced its supplier base between 2004 and 2012. Currently, 4 percent of the supply base accounts for 80 percent of Humana’s spend. Humana is now engaging small, diverse, and emerging suppliers in order to achieve the right balance of suppliers to spend.
Metrics Used to Manage Services
Humana is in the process of scoping out requirements to build a scorecard process to standardize the scoring of vendors’ performance, including keeping track of past performance. Inflation and deflation are measured based on year-over-year change in price on a per-unit basis. Humana currently tracks cost avoidance and productivity (price and usage). Productivity is a key measure for executives and is incorporated into established performance measures.

Procurement Organization (Global Supply Chain)
The CPO has worked since 2009 to reduce silos within MasterCard’s business units, and established a centralized model with a category focus.

Beginning of Focus on Improvement to Services Procurement
Prior to the CPO’s arrival in 2009, MasterCard’s sourcing organization focused primarily on traditional procurement and tactical sourcing. The company recognized the value of advancing from a tactical procurement approach to a strategic one.

Visibility into Services Spend
MasterCard uses a centralized database for procurement operations, which provides transparency into its global spend. It may use other tools to track supplier performance, manage risk, and conduct e-sourcing.

Overall Procurement Strategy
MasterCard employs a category management strategy that comprises a six-step model: (1) analyze internal and external spend; (2) define requirements and develop strategy; (3) execute strategy; (4) negotiate and award contracts; (5) implement and manage contracts; and (6) manage supplier performance.

Examples of Procurement Tactics
MasterCard emphasizes the importance of understanding the nature of requirements. Some categories are more complex than others, and the approach for each category differs. Procuring services in the contingent/temporary labor space is driven by competition and achieving process efficiencies; procuring services in the legal services space is driven by custom requirements and increased complexity.

Metrics Used to Manage Services
Cost savings are always important, and MasterCard uses standard definitions to measure those savings, such as year-over-year change. Other metrics, such as productivity savings and budget savings, allow for innovation and investment.

Procurement Organization (Global Procurement)
Pfizer has global “category teams” for broad groupings of services, with category managers assigned to each category. The global category zone leads are managed by Vice Presidents who report directly to the head of the procurement organization and work with internal business partners, who execute contracts on a company-wide basis or locally as needed.

Beginning of Focus on Improvement to Services Procurement
Pfizer undertook a spend analysis effort in 2007, which revealed increased spending on legal, consulting, and financial services, as well as opportunities for improving the efficiency of its processes.

Example of a Procurement Tactic
Pfizer will examine market conditions and unbundle contracts for greater transparency and to bring more suppliers into the mix; later on, Pfizer might bundle contracts to gain leverage as part of the strategy.

Compliance
Pfizer has a policy that procurement transactions over $100,000 must be competitively bid, with limited, documented exceptions. Pfizer Procurement monitors compliance with company processes and billing guidelines.
The company considers its procurement professionals essentially risk managers rather than contract managers because they need to consider what is best for the company and how to minimize total cost of ownership while maintaining flexibility.

Procurement Organization (Realty Procurement Services)
Walmart’s procurement function is decentralized. Realty Procurement Services, led by a Realty Vice President, provides sourcing support for facilities maintenance, which includes outside services such as snow removal, roofing, and parking lot maintenance. Realty Procurement Services provides complete procurement, project management, and sourcing support for Walmart capital projects.

Beginning of Focus on Improvement to Services Procurement
Focus on procurement improvement began in approximately 2008. However, the services spend is not fully leveraged, as different divisions within Walmart procure services such as human resources, information technology, legal, and marketing separately.

Visibility into Services Spend
Walmart does not have one system that provides visibility into the services procurement spend. It utilizes one system as a contract bidding tool and is currently implementing another to provide increased spend visibility into maintenance services.

Overall Procurement Strategy
Walmart’s procurement strategy is focused on the reduction of total cost of ownership. While the lowest bidder may often be awarded a contract, it is important to take into account other considerations such as diversity and sustainability.

Examples of Procurement Tactics
Walmart employs craft managers for the major categories of services it acquires. These managers provide expert advice to the procurement organization. Walmart uses a performance management system that includes “score-carding” to rank suppliers based on various criteria. The company also applies a tiered pricing strategy, where a supplier offers different rates depending on the size of the contract.

Metrics Used to Manage Services
Average invoice price, hourly rate, time spent on location, and average time to complete a task are examples of the metrics Walmart uses to evaluate performance.

In addition to the contact named above, W. William Russell, Assistant Director; Peter Anderson; Raj Chitikila; Laura Greifner; Julia Kennon; Amber N. Keyser; Stephen V. Marchesani; Jean McSween; Brian Mullins; Michael Palinkas; Sylvia Schatz; Roxanna Sun; Ann Marie Udale; Alyssa Weir; Sally Williamson; and Rebecca A. Wilson made key contributions to this report.

Federal Contracting: Slow Start to Implementation of Justifications for 8(a) Sole-Source Contracts, GAO-13-118. Washington, D.C.: Dec. 12, 2012.
Strategic Sourcing: Improved and Expanded Use Could Save Billions in Annual Procurement Costs, GAO-12-919. Washington, D.C.: Sept. 20, 2012.
Defense Contracting: Competition for Services and Recent Initiatives to Increase Competitive Procurements, GAO-12-384. Washington, D.C.: Mar. 15, 2012.
Strategic Sourcing: Office Supplies Pricing Study Had Limitations, but New Initiative Shows Potential for Savings, GAO-12-178. Washington, D.C.: Dec. 20, 2011.
Federal Contracting: Observations on the Government’s Contracting Data Systems, GAO-09-1032T. Washington, D.C.: Sept. 29, 2009.
Defense Acquisition: Actions Needed to Ensure Value for Service Contracts, GAO-09-643T. Washington, D.C.: Apr. 23, 2009.
Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes, GAO-07-20. Washington, D.C.: Nov. 9, 2006.
Best Practices: Improved Knowledge of DOD Service Contracts Could Reveal Significant Savings, GAO-03-661. Washington, D.C.: June 9, 2003.
Best Practices: Taking a Strategic Approach Could Improve DOD’s Acquisition of Services, GAO-02-230. Washington, D.C.: Jan. 18, 2002.
In fiscal year 2012, the federal government spent $307 billion to acquire services. The private sector is also reliant on services. Over the last 5-7 years, leading companies have been examining ways to manage their services in order to maximize returns and minimize inefficiencies. Given the amount of federal spending on services, GAO was asked to identify leading practices used by large commercial organizations for purchasing services. GAO identified (1) leading company practices for purchasing services, and (2) potential opportunities for federal agencies to incorporate these practices based on prior work. To determine leading companies' practices in this area, GAO selected a nongeneralizable sample of companies based upon a literature search and recommendations from Defense and industry organizations that have studied services acquisition. GAO identified and interviewed officials from seven companies, an industry group, and a consulting organization. To identify opportunities for agencies to adopt leading practices, GAO compared the types of services purchased by agencies in fiscal year 2012 with those purchased by companies. GAO also relied on prior, relevant work related to federal procurement of services and OMB initiatives for expanding agencies' use of strategic sourcing. Officials from leading companies GAO spoke with reported saving 4-15 percent over prior-year spending through strategically sourcing the full range of services they buy--a process that moves away from numerous individual purchases to an aggregate approach. The federal government and leading companies buy many of the same services, such as facilities management, engineering, and information technology. Companies' keen analysis of spending, coupled with central management and knowledge sharing about the services they buy, is key to their savings. Their analysis of spending patterns can be described as comprising two essential variables: the complexity of the service and the number of suppliers for that service. Knowing these variables for any given service, companies tailor their tactics to fit the situation; they do not treat all services the same. Company tactics fall into four basic categories: (1) Standardize requirements, (2) Understand cost drivers, (3) Leverage scale, and (4) Prequalify suppliers. To illustrate how buying tactics are tailored, Walmart leverages its scale to compete basic or commodity services that have many suppliers, such as maintenance. When buying sophisticated services with few suppliers, such as consulting, Dell negotiates cost drivers such as labor rates. The framework is dynamic: over the long term, companies seek to reduce complexity and bring in additional suppliers to take advantage of market forces like competition. Federal agencies have sizable opportunities to leverage leading commercial practices to lower costs and maximize the value of the services they buy. In September 2012, GAO reported that large procurement agencies such as the Departments of Defense and Veterans Affairs leveraged only a fraction of their buying power through strategic sourcing and faced challenges analyzing reliable data on spending, securing leadership support, and applying this approach to acquiring services. GAO recommended that these agencies and the Office of Management and Budget (OMB) issue guidance, develop metrics, and take other actions. The agencies and OMB concurred. OMB directed agencies to take actions to overcome these challenges.
Potential savings are significant: a savings rate of just 4 percent applied to the $307 billion that federal agencies spent on services in fiscal year 2012 would equate to more than $12 billion. GAO has made recommendations in previous reports to help agencies strengthen strategic sourcing practices; agencies concurred with these recommendations and have actions under way.
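The arithmetic behind the $12 billion figure, extended across the 4 to 15 percent savings range that company officials reported, is a one-line calculation:

    # Savings if reported commercial rates were applied to the $307 billion
    # that federal agencies obligated for services in fiscal year 2012.
    services_spend = 307e9
    for rate in (0.04, 0.15):
        print(f"{rate:.0%} -> ${services_spend * rate / 1e9:.1f} billion")
    # 4%  -> $12.3 billion
    # 15% -> $46.1 billion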
The Department of Defense’s 2001 Defense Planning Guidance tasked the Department of the Navy to conduct a comprehensive review to assess the feasibility of fully integrating Navy and Marine Corps aviation force structure to achieve both effectiveness and efficiency. The Department of the Navy narrowed the study to include only fixed-wing tactical aviation assets because of affordability concerns. Specifically, Navy officials were concerned that the projected procurement budget would not be sufficient to buy as many F/A-18E/Fs and Joint Strike Fighter aircraft as originally planned. The difference between the funding needed to support the Navy’s original plan for procuring tactical aircraft and the Navy’s projected procurement budget is shown in figure 1. Figure 1 shows that, starting in fiscal year 2005, the Navy’s typical aviation allocation of $3.2 billion per year would not be sufficient to support the previous procurement plan for F/A-18E/F and Joint Strike Fighter aircraft. In December 2001, the Chief of Naval Operations and the Commandant of the Marine Corps jointly commissioned a contractor to study the feasibility of integrating naval tactical aviation. The study prompted a memorandum of agreement between the Navy and Marine Corps in August 2002 to integrate their tactical aviation assets and buy fewer aircraft than originally planned. The Plan proposes that the Navy and Marine Corps (1) merge operational concepts; (2) reduce the number of squadrons, aircraft per squadron, and backup aircraft; and (3) reduce the total number of aircraft to be procured in the future. The Department of the Navy anticipates that these changes will save approximately $28 billion in procurement costs over the next 18 years through fiscal year 2021. Operationally, the Navy and Marine Corps would increase the extent to which their tactical aviation units are used as a combined force for both services. Under the Plan, the Navy and Marine Corps would increase cross-deployment of squadrons between the services and would further consolidate missions and operations through changes in aircrew training and the initiation of command-level officer exchanges. Under the Plan, the Marine Corps would increase the number of squadrons dedicated to carrier air wings, and the Navy would begin to dedicate squadrons to Marine Aircraft Wings. In 2003, the Marine Corps began to provide the Navy with the first of six additional dedicated squadrons to augment four squadrons already integrated into carrier air wings during the 1990s. As a result, each of the Navy’s 10 active carrier air wings would ultimately include one Marine Corps squadron by 2012. Concurrently, the Navy would integrate three dedicated squadrons into Marine Aircraft Wings by 2008, primarily to support the Marine Corps Unit Deployment Program rotations to Japan. The first Navy squadron deployment in support of Marine Corps operations would occur in late fiscal year 2004, with other squadrons to follow in fiscal years 2007 and 2008. As part of the new operating concept, the Department of the Navy would satisfy both Navy and Marine Corps missions using either Navy or Marine Corps squadrons. Traditionally, the primary mission of Navy tactical aviation has been to provide long-range striking power from a carrier, while Marine Corps tactical aviation provided air support for ground forces.
Navy and Marine Corps tactical aviation squadrons would retain their primary mission responsibilities, but units that integrate would additionally be responsible for training for, and performing, the required missions of the other service. For example, if a Navy squadron were assigned to the Marine Corps Unit Deployment Program, its pilots’ training would place more emphasis on close air support missions, and, similarly, Marine Corps pilots would place more emphasis on long-range strike missions before deploying with a carrier air wing. Moreover, Navy and Marine Corps officers would exchange command positions to further develop a more unified culture. For instance, a Marine Corps colonel would command a carrier air wing, while a Navy captain would command a Marine Aircraft Group. As indicated in table 1, the Department of the Navy would create a smaller tactical aviation force structure consisting of fewer squadrons, reduced numbers of aircraft per squadron, and fewer backup aircraft. The number of tactical aviation squadrons would decrease from 68 under the previous plan to 59 by 2012. To achieve this reduction of nine squadrons, the department would cancel plans to reestablish four active Navy squadrons as anticipated under its prior procurement plan, decommission one Marine Corps Reserve squadron as well as one Navy Reserve squadron in 2004, and decommission three active Navy squadrons. The first active squadron is scheduled to be decommissioned in fiscal year 2006; two other squadrons are to be decommissioned from fiscal year 2010 through fiscal year 2012. Under the Plan, the number of aircraft assigned to some tactical aviation squadrons would be reduced. All Navy and Marine Corps F/A-18C squadrons that transition to the future Joint Strike Fighter aircraft would be reduced from 12 to 10 aircraft. In addition, Navy F/A-18F squadrons would be reduced from 14 to 12 aircraft. Furthermore, by 2006, aircraft assigned to the remaining two Navy and three Marine Corps Reserve squadrons would be reduced from 12 to 10. By reducing the aircraft assigned to squadrons, the size of Navy air wings would transition from 46 to 44 aircraft beginning in 2004, as the Navy procures new aircraft. A notional air wing in the Navy’s current force is made up of 46 aircraft comprising a combination of F/A-18C and F-14 squadrons. However, by 2016, carrier air wings would contain 44 aircraft made up of two squadrons of 10 Joint Strike Fighters, one squadron of 12 F/A-18E fighters, and one squadron of 12 F/A-18F fighters. The Department of the Navy’s Plan would also reduce the number of backup aircraft to be procured from 745 (under the previous program) to 508, for a total reduction of 237 aircraft. Backup aircraft consist of those aircraft that are not primarily assigned to active or reserve squadrons. Specifically, backup aircraft are necessary to meet a variety of needs such as training new pilots; replacing aircraft that are either awaiting or undergoing depot-level repair; meeting research, development, and test and evaluation needs; covering attrition during peacetime or wartime operations; and meeting miscellaneous requirements, such as adversary training and the Blue Angels demonstration team. In implementing the Plan, the Department of the Navy expects to reduce the number of tactical aviation aircraft it will purchase by 497—from 1,637 to 1,140. As indicated in table 2, it plans to procure 88 fewer F/A-18E/F aircraft and 409 fewer Joint Strike Fighter aircraft.
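The Plan’s force structure figures are internally consistent; the following sketch simply re-derives the totals cited above.

    # Squadron reduction under the Plan: 68 squadrons -> 59 by 2012.
    canceled_reestablishments = 4  # active Navy squadrons not reestablished
    reserve_decommissioned = 2     # one Marine Corps and one Navy Reserve squadron
    active_decommissioned = 3      # active Navy squadrons
    assert 68 - (canceled_reestablishments + reserve_decommissioned
                 + active_decommissioned) == 59

    # Notional 2016 carrier air wing of 44 aircraft: two 10-plane Joint Strike
    # Fighter squadrons, one 12-plane F/A-18E squadron, one 12-plane F/A-18F squadron.
    assert 2 * 10 + 12 + 12 == 44

    # Total procurement reduction: 1,637 -> 1,140 aircraft, split between
    # 88 fewer F/A-18E/Fs and 409 fewer Joint Strike Fighters.
    assert 1_637 - 1_140 == 497 == 88 + 409

    # Backup aircraft reduction: 745 -> 508.
    assert 745 - 508 == 237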
Almost half (237, or 48 percent) of the expected reduction in aircraft procurement is attributable to the plan to have fewer backup aircraft. By reducing the total number of new tactical aviation aircraft to be procured, the Department of the Navy now expects that its new procurement program will cost about $64 billion, as compared with nearly $92 billion for the previously planned force, resulting in a savings of approximately $28 billion. The Department of the Navy based its conclusion that it could meet its operational requirements with a smaller force primarily on the results of a contractor study. The contractor’s analysis generally appeared reasonable because it assessed the relative capability of different tactical aviation force structures and included important assumptions about force structure, budget resources, and management efficiencies. However, from our review of the contractor’s methodology and assumptions, we identified some limitations in its analysis that may understate the risk associated with implementing some aspects of the Plan. These limitations include (1) the contractor’s decision to model only the carrier version of the Joint Strike Fighter despite the Marine Corps’ plans to operate Short Takeoff and Vertical Landing aircraft on carriers, (2) the contractor’s limited studies supporting recommended reductions in backup aircraft, and (3) the contractor’s method for determining aircraft capabilities used in the force analyses. The contractor modeled the effectiveness of the current force, the larger force that the Navy had previously planned to buy, and the study’s recommended smaller force at three stages of a notional warfight. The warfight was based on a generic composite scenario that was developed with input from the Air Force and Army. It has been previously used by the Joint Strike Fighter Program Office to assess the effectiveness of a joint strike force in terms of phases of a warfight; geographical location of combat forces; the characteristics of targets, such as type and hardness; and whether targets are mobile. During the forward presence phase of the contractor’s modeling scenario, one carrier battle group and one amphibious readiness group were deployed, and aircraft operated at a maximum distance of 400 nautical miles from the carrier. In the buildup phase, three carrier battle groups and three amphibious groups were deployed in one theater, and aircraft operated at a maximum distance of 150 nautical miles. During the mature phase, eight carrier battle groups, eight amphibious readiness groups, and 75 percent of all other assets were deployed to land-based sites, and aircraft operated at a maximum distance of 150 nautical miles from the carrier. To measure combat effectiveness levels, the contractor methodically compared the estimated capabilities of the current force, the previously planned force, and the recommended force to hit targets and perform close air support. To determine the relative capabilities of each aircraft in these forces, the contractor convened a panel of experts who were familiar with each aircraft’s planned capabilities and who used official aircraft performance data to score the offensive and defensive capabilities of different aircraft across a range of missions performed during the three stages of the warfight. As indicated in figure 2, the experts determined that the Joint Strike Fighter, which is still in development, will be the most capable aircraft and assigned it a baseline score of 1 compared with the other aircraft.
Figure 2 also shows that based on the capability scores assigned to the other aircraft, the Joint Strike Fighter is expected to be approximately nine times more capable than the AV-8B Harrier aircraft, about five times more capable than the F-14D and F/A-18A+/C/D aircraft, three times more capable than the first version of the F/A-18E/F aircraft, and 50 percent more capable than the second version of the F/A-18E/F. In addition, the contractor measured the percentage of units deployed in order to ensure that Navy and Marine Corps personnel tempo and operational tempo guidelines for peacetime were not exceeded. The study concluded that, because of the expected increase in the capabilities of F/A-18E/F and the Joint Strike Fighter aircraft, both the previously planned force and the recommended new smaller force were more effective than today’s force. Furthermore, the new smaller force was just as effective in most instances as the previously planned force because the smaller force had enough aircraft to fully populate aircraft carrier flight decks and therefore did not cause a reduction in the number of targets that could be hit. However, the analysis showed that beginning in 2015, there would be a 10 percent reduction in effectiveness in close air support during the mature phase of a warfight because fewer squadrons and aircraft would be available to deploy to land bases. The analysis also showed that the smaller force stayed within personnel and operational tempo guidelines during peacetime. The contractor’s analysis was based on three key assumptions that generally appeared to be reasonable and consistent with DOD plans. First, it assumed that the future naval force structure would include 12 carrier battle groups, supported by 1 reserve and 10 active carrier air wings, and 12 amphibious readiness groups. The 2001 Quadrennial Defense Review validated this naval force structure and judged that this force structure presented moderate operational risk in implementing the defense strategy. Second, it assumed that the Navy and Marine Corps’ tactical aviation procurement budget would continue to be about $3.2 billion in fiscal year 2002 dollars annually through 2020. This was based on the Department of the Navy’s determination that the tactical aviation procurement budget would continue to represent about 50 percent of the services’ total aircraft procurement budget as it had in fiscal years 1995 to 2002. Third, it assumed that the Department of the Navy could reduce the number of backup aircraft it buys based on expected efficiencies in managing its backup aircraft inventory. Our analysis also showed, however, that certain limitations in the contractor’s study could add risk to the expected effectiveness of the future smaller force. These limitations are the study’s modeling assumption that the effectiveness of the Marine Corps’ Short Takeoff and Vertical Landing version of the Joint Strike Fighter would be the same as the Navy’s carrier version despite projected differences in their capability; the study’s assumption that certain efficiencies in the management of backup aircraft could be realized, without documenting and providing supporting analyses substantiating how they would be achieved; and the study’s process for assigning capability measures to aircraft, which, because of its subjectivity, could result in an overestimation of the smaller force’s effectiveness.
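Because the panel normalized the Joint Strike Fighter’s score to 1, the capability ratios reported above imply approximate scores for the other aircraft. The exact panel scores were not published, so the values below are back-calculated approximations, not figures from the study.

    # Implied capability scores, back-calculated from the ratios in figure 2
    # ("the Joint Strike Fighter is N times more capable than ...").
    ratios = {
        "AV-8B Harrier": 9.0,
        "F-14D and F/A-18A+/C/D": 5.0,
        "F/A-18E/F (first version)": 3.0,
        "F/A-18E/F (second version)": 1.5,  # "50 percent more capable"
    }
    for aircraft, ratio in ratios.items():
        print(f"{aircraft}: implied score of about {1 / ratio:.2f}")
    # roughly 0.11, 0.20, 0.33, and 0.67, against the baseline score of 1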
The contractor’s study assumed that all Joint Strike Fighters aboard Navy carriers, including those belonging to the Marine Corps, would have the performance characteristics of the carrier version of that aircraft. However, the Marine Corps plans to operate only the Short Takeoff and Vertical Landing version of the aircraft, which is projected to be significantly less capable than the carrier version in terms of range and payload (number of weapons it can carry). The Marine Corps believes this version is needed to satisfy its requirement to operate from austere land bases or amphibious ships in order to quickly support ground forces. But the carrier version’s unrefueled range and internal payload are expected to exceed those of the Short Takeoff and Vertical Landing version by approximately 50 and 100 percent, respectively. The contractor mitigated the differences in the two versions’ capabilities by modeling a scenario whereby the aircraft would operate from carriers located 150 miles from the targets during the mature phase of the warfight—well within the range of the Marine Corps’ version. By contrast, during Operation Iraqi Freedom, many of the targets struck from carriers would have been outside the range of the Short Takeoff and Vertical Landing version of the aircraft unless in-flight refueling was performed, thereby reducing its effectiveness. The study noted that because of the differences in performance, substitution of the Short Takeoff and Vertical Landing version for the carrier version would result in decreased effectiveness when the Short Takeoff and Vertical Landing version’s performance parameters are exceeded. However, the study did not conduct additional analyses to quantify the impact of using Short Takeoff and Vertical Landing aircraft aboard carriers. Therefore, if the Plan is implemented and the Marine Corps operates the Short Takeoff and Vertical Landing version exclusively as one of the four tactical aviation squadrons aboard each carrier, the overall effectiveness of the tactical fighter force under a scenario with greater ranges to targets could be less than the contractor’s study predicted. Navy officials acknowledged that operating the Short Takeoff and Vertical Landing Joint Strike Fighter aircraft from carriers presents a number of challenges that the Navy expects to address as the aircraft progresses through development. The contractor’s study recommended cutting 351 backup aircraft based on expected improvements and efficiencies in the Navy’s management of such aircraft. The study identified three main factors prompting its conclusion that fewer backup aircraft would be needed. Actual historical attrition rates for F/A-18 aircraft, according to the Navy, suggest that the attrition rate for the F/A-18E/F and Joint Strike Fighter could be lower than expected. The Navy determined that attrition might be only 1 percent of total aircraft inventory, rather than the 1.5 and 1.3 percent, respectively, assumed in the Navy’s original procurement plans for those aircraft; thus, fewer attrition aircraft would suffice. Business practices for managing aircraft in the maintenance pipeline could be improved. According to the contractor, if Navy depots performed as much maintenance per day as Air Force depots, it appears that the Navy could reduce the number of aircraft in the maintenance pipeline; thus, fewer aircraft could suffice. Testing, evaluating, and aircrew training could become more efficient.
According to the contractor’s study, fewer aircraft would be needed to test and evaluate future technology improvements because the Navy and Marine Corps’ two Joint Strike Fighter variants (the carrier and Short Takeoff and Vertical Landing versions) would have many common parts. In addition, advances in trainer technology and the greater sortie generation capability of the newer aircraft could enable them to achieve more training objectives in a single flight; thus, fewer aircraft could suffice. Although the contractor recognized the potential of these efficiencies when recommending the reduction to the number of backup aircraft, it did not fully analyze the likelihood of achieving them. According to the contractor, it recommended the reduction based on limited analysis of the potential to reduce the number of attrition and maintenance pipeline aircraft. As a result, the contractor also recommended that the Department of the Navy study whether it could achieve expected maintenance efficiencies by improving its depot operations. However, the department has not conducted such an assessment. The Department of the Navy considered the risk of cutting 351 aircraft too high and instead decided to cut only 237 backup aircraft—the number reflected in the Navy’s plan. Historically, the Navy’s backup inventory has equaled approximately 95 percent of the number of combat aircraft. The contractor recommended that the Navy reduce its backup aircraft requirement to 62 percent of its planned inventory of combat aircraft. Concerned that this might be too drastic a cut, the Navy decided to use 80 percent when determining the number of backup aircraft in its Plan. Although the Plan’s higher ratio of backup aircraft to combat aircraft will reduce operational risk by having more aircraft available for attrition and other purposes, the Navy’s 80 percent factor was not based on a documented analysis. Navy officials noted that because of budget limitations, it would be difficult to purchase additional aircraft to support the smaller tactical aviation force in case some of the projected efficiencies are not realized. The contractor relied on aircraft capability scores assigned by a panel of experts as a basis for comparing the relative effectiveness of the aircraft and alternative force structures examined. The results showed that by 2020, the previously planned and new smaller forces would be four times more effective at hitting targets than the current force. However, the panelists subjectively determined the capability scores from official aircraft performance parameters provided by the Navy. The contractor reportedly conducted a “sensitivity analysis” of the aircraft capability scores and found that changing the scores affected the forces’ relative effectiveness. Since the contractor did not retain documentation of the analysis, we could not verify the quality of the scoring, nor attest that the relative effectiveness of the new force will be four times greater than the current force as the study reported. Nevertheless, the contractor’s acknowledgement that score variations could affect relative force effectiveness raises the possibility that the estimated increases in effectiveness, both for the previously planned force and for the recommended smaller force, might not be as high as the study concluded. Navy and Marine Corps officials agreed that gaining a significant increase in total capability was key to accepting a smaller, more capable tactical aviation force.
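The backup aircraft percentages discussed above can be reconciled with the Plan’s inventory numbers. The sketch below assumes, for illustration, that combat aircraft equal the total planned buy minus backup aircraft; small rounding differences from the cited figures are expected.

    # Consistency check of backup-to-combat aircraft ratios under the Plan.
    total_buy, backup = 1_140, 508
    combat = total_buy - backup                  # 632 combat aircraft assumed
    print(f"Plan ratio: {backup / combat:.0%}")  # ~80%, the Navy's chosen factor

    contractor_backup = round(combat * 0.62)     # contractor's 62% factor -> ~392
    print(f"Implied contractor cut: {745 - contractor_backup}")  # ~353, near the 351 cited
    print(f"Navy's cut: {745 - backup}")         # 237, the number in the Plan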
However, if the capability of the recommended smaller force is significantly less than that indicated by the study, the smaller force’s ability to meet both the Navy’s and the Marine Corps’ mission requirements could be adversely affected. The Navy and Marine Corps took significantly different approaches to assessing and documenting their decisions on which reserve units to decommission. The Marine Corps used a well-documented process that clearly showed what criteria were applied to arrive at its decision, whereas the Navy’s approach lacked clarity and supporting documentation about how different options were evaluated. DOD has not developed criteria to guide such decommissioning decisions. In a previous report, we reviewed the Air Force’s decision to reduce and consolidate the B-1B bomber fleet and found that Air Force officials did not complete a formal comprehensive analysis of potential basing options in order to determine whether they were choosing the most cost-effective units to keep. We also stated that in the absence of standard guidance for analyzing basing alternatives, similar problems could occur in the future. In this instance, the absence of standard DOD guidance for analyzing and documenting decommissioning alternatives allowed the Navy to use a very informal and less transparent process to determine which reserve squadron to decommission in fiscal year 2004. The lack of a formal process could also hinder transparency in making such decisions in the future, which would adversely affect Congress’s ability to provide appropriate oversight. The Marine Corps established a team that conducted and documented a comprehensive review to support its decision about which Marine Corps Reserve squadron to decommission. In conducting its analysis, the Marine Corps assumed that (1) reserve assets that had not been decommissioned must be optimized for integration into future combat roles, (2) mission readiness and productivity are crucial, and (3) the political and legal ramifications of deactivating reserve units must be considered. The study team established a set of criteria consisting of personnel, operational, fiscal, logistical, and strategic factors and applied these criteria when evaluating each of the Marine Corps’ four reserve squadrons. Table 3 identifies the selection criteria applied to each squadron. The study results were presented to the Marine Requirements Oversight Council for review and recommendation and to the Commandant of the Marine Corps. The Commandant decided in May 2004 to decommission reserve squadron VMFA-321, located at Andrews Air Force Base, Maryland, by September 2004. In December 2003, the Navy decided to decommission one of three Navy Reserve tactical aviation squadrons, VFA-203, located in Atlanta, Georgia. The Chief of Naval Reserve stated that the Navy used a variety of criteria in deciding which unit to decommission. These criteria included the squadrons’ deployment history, the location of squadrons in relation to operating ranges, and the location of a reserve intermediate maintenance facility. Navy officials, however, could not provide documentation of the criteria or the analysis used to support this decision. Without such documentation to provide transparency to the Navy’s process, we could not determine whether these criteria were systematically applied to each reserve squadron.
Furthermore, we could not assess whether the Navy had systematically evaluated and compared other factors such as operational, personnel, and financial impacts for all Navy Reserve squadrons. Two other factors could adversely affect the successful implementation of the Plan and increase the risk level assumed at the time the contractor completed the study and the Navy and Marine Corps accepted the Plan. These factors are (1) uncertainty about requirements for readiness funding to support the tactical aviation force and (2) projected delays in fielding the Joint Strike Fighter aircraft that might cause the Department of the Navy not to implement the Plan as early as expected and might increase operations and maintenance costs. If these factors are not appropriately addressed, the Department of the Navy may not have sufficient funding to support the readiness levels required for the smaller force to meet the Navy and Marine Corps’ missions, and the transition to the Plan’s force might be more costly than anticipated. The contractor’s study stated that because the Navy and the Marine Corps would have a combined smaller tactical aviation force under the Plan, the services’ readiness accounts must be fully funded to ensure that the aircraft readiness levels are adequate to meet the mission needs of both services. Furthermore, the contractor recommended that the Navy conduct an analysis to determine the future readiness funding requirements and ensure that the Navy has a mechanism in place to fully fund the readiness accounts. So far, the Navy has not conducted this analysis, nor has it addressed how it will ensure that the readiness accounts will be fully funded, because Navy officials noted that they consider future budget estimates to be adequate. However, a recent Congressional Research Service evaluation of the Plan noted that operations and maintenance costs have been growing in recent years for old aircraft and that new aircraft have sometimes, if not often, proved more expensive to maintain than planned. Furthermore, our analysis of budget data for fiscal years 2001-2003 indicates that the Department of the Navy’s operations and maintenance costs averaged about $388 million more than was requested for tactical aviation and other flight operations. Without a review of future readiness funding requirements, the Navy cannot be certain that sufficient funding will be available to maintain the readiness levels that will enable the smaller tactical aviation force to meet the mission needs of both the Navy and the Marine Corps. Delays in fielding the Joint Strike Fighter aircraft, both known and potential, could also affect the successful implementation of the Plan. As a result of engineering and weight problems in the development of the Joint Strike Fighter, the Navy and Marine Corps will begin receiving the Joint Strike Fighter aircraft at least 1 year later than they had expected. As noted in the Department of the Navy’s most recent acquisition reports to Congress, the Navy has delayed the Short Takeoff and Vertical Landing version from 2010 to 2012 and the Navy’s carrier version from 2012 to 2013. Furthermore, in March 2004 we reported that numerous program risks and possible schedule variances could cause additional delays. Recent Joint Strike Fighter program cost increases could also delay the fielding of the aircraft. In DOD’s December 31, 2003, procurement plan, the average unit cost of the aircraft increased from $69 million to $82 million.
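The procurement cost effect of that unit cost growth, quantified in the next paragraph, follows from simple multiplication:

    # Effect of Joint Strike Fighter unit cost growth ($69 million to $82 million)
    # on the Plan's buy of 680 aircraft.
    increase = (82e6 - 69e6) * 680
    print(f"${increase / 1e9:.2f} billion")  # $8.84 billion, roughly $9 billion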
Assuming that the Department of the Navy procures the 680 Joint Strike Fighter aircraft as proposed under the Plan, the total procurement cost will be approximately $9 billion higher. This increase in cost, when considered within the limits of the expected $3.2 billion annual procurement budget, will likely prevent the Department of the Navy from fielding the smaller but more effective tactical aviation force as early as expected. Additionally, these delays will oblige the Department of the Navy to operate legacy aircraft longer than expected, which could result in increased operations and maintenance costs. A potential increase in operations and maintenance costs makes it even more important for the Department of the Navy to conduct an analysis to determine its future readiness funding requirements. The contractor’s study results provided the Department of the Navy with a reasonable basis for concluding that it could afford to buy a smaller but more capable force that would meet its future operating requirements by using fewer Navy and Marine Corps tactical aviation squadrons of more capable aircraft as a combined force and achieving efficiencies that allow it to reduce the number of backup aircraft needed. However, there are known management and funding risks to realizing the new smaller force’s affordability and effectiveness. Until Navy management assesses the likelihood of future lower attrition rates and aircraft maintenance, test and evaluation, and training requirements, the Navy runs the risk that the number of backup aircraft it plans to procure will not be adequate to support the smaller tactical aviation force, raising concerns about the Plan’s affordability. Furthermore, in the absence of clear DOD guidance citing consistent criteria and documentation requirements for supporting decisions that affect units, such as which Navy and Marine Corps Reserve squadrons to decommission, we remain concerned about the transparency of the process for reducing the force to those with oversight responsibility. The inconsistency in the Marine Corps’ and Navy’s approaches and supporting documentation confirms the value of such guidance to ensure clear consideration of the best alternative. Finally, until the Department of the Navy knows the readiness funding requirements for operating the new smaller force, it cannot be certain that it can maintain the readiness levels required to meet operational demands. Such an assessment of these requirements would provide a sound basis for seeking proper funding. To enhance the potential that the future Navy and Marine Corps integrated tactical aviation force will meet the mission needs of both services and ensure more transparency when making future decommissioning decisions, we recommend that the Secretary of Defense take the following three actions:
direct the Secretary of the Navy to thoroughly assess all of the factors that provide the basis for the number of backup aircraft needed to support a smaller tactical aviation force under the plan to integrate Navy and Marine Corps tactical aviation forces;
develop guidance that (1) identifies the criteria and methodology for analyzing future decisions about which units to decommission and (2) establishes requirements for documenting the process used and analysis conducted; and
direct the Secretary of the Navy to analyze future readiness funding requirements to support the tactical aviation integration plan and include required funding in future budget requests.
In written comments on a draft of this report, the Director, Defense Systems, Office of the Under Secretary of Defense, stated that the department generally agreed with our recommendations and cited actions that it is taking. The department’s comments are reprinted in their entirety in appendix I. In partially concurring with our first recommendation to thoroughly assess all of the factors that provide the basis for the number of backup aircraft, DOD stated that the Department of the Navy’s Naval Air Systems Command would complete an effort to review all aircraft inventories to determine the optimum quantity required by July 2004. However, we were not able to evaluate the Navy’s study because Navy officials have since told us that it will not be completed until late September or early October 2004. With regard to our second recommendation to develop guidance that would identify criteria and a methodology for analyzing future decommissioning decisions and require documenting the process, DOD stated that it would change Directive 5410.10, which covers the notification of inactivation or decommissioning of forces, and require it to contain the criteria and methodology used to make the force structure decision. While we agree that the new guidance, if followed, would disclose these aspects of the decision-making process, it does not appear sufficient to meet the need we identified for consistency and documentation to support force structure decisions. Therefore, we believe that DOD should take additional steps to meet the intent of our recommendation by developing consistent criteria and requiring documentation to ensure transparency for those providing oversight of such decisions in the future. In partially concurring with our third recommendation related to future readiness funding requirements, the Department of Defense stated that the Department of the Navy is currently developing analytical metrics that would provide a better understanding of how to fund readiness accounts to achieve a target readiness level. We support the development of validated metrics that would link the amount of funding to readiness levels because they would provide decision makers with assurance that sufficient funding would be provided. To determine how Navy and Marine Corps operational concepts, force structure, and procurement costs would change under the Plan, we obtained information about the Navy and Marine Corps’ current roles and mission, force structure, and projected tactical aviation procurement programs and conducted a comparative analysis. We also met with Navy and Marine Corps officials at the headquarters and major command levels as well as Congressional Research Service officials to further understand and document the operational, force structure, and procurement cost changes expected if the Plan is implemented. To determine what methodology and assumptions the Navy and Marine Corps used to analyze the potential for integrating tactical aviation assets and any limitations that could affect the services’ analysis, we analyzed numerous aspects of the contractor’s study that provided the impetus for the Plan. Specifically, we met with officials of the contractor, Whitney, Bradley & Brown, Inc., to gain firsthand knowledge of the model used to assess aircraft performance capability and the overall reasonableness of the study’s methodology. We also reviewed the scenario and assessed the key analytical assumptions used in order to evaluate their possible impact on the implementation of the Plan.
We examined operational and aircraft performance factors to determine the potential limitations that could affect the services’ analysis. Additionally, we held discussions with officials at Navy and Marine Corps headquarters, Joint Forces Command, Naval Air Forces Pacific and Atlantic Commands, Marine Forces Atlantic Command, and the Air Combat Command to validate and clarify how the Plan would or would not affect the ability of tactical aviation forces to meet mission needs. To determine the process the Navy and Marine Corps used to assess which reserve squadrons should be decommissioned in fiscal year 2004, we obtained information from the Marine Corps Reserve Headquarters and the 4th Marine Air Wing showing a comparative analysis of Marine Corps Reserve squadrons. In the absence of comparable information from the Navy, we held discussions with the Chief of Naval Reserve and the Director, Navy Air Warfare, and visited the Naval Air Force Reserve Command to obtain information about the decision-making process for selecting the Navy reserve unit to be decommissioned. We also visited the Commander of the Navy Reserve Carrier Air Wing-20, along with four reserve squadrons, two each from the Navy and Marine Corps Reserves, to clarify and better understand their roles, missions, and overall value to the total force concept. To determine what other factors might affect the implementation of the Plan, we analyzed the contractor’s study, Congressional Research Service reports, and prior GAO reports for potential effects that were not considered in the final results of the analysis. We discussed these factors with officials from Navy and Marine Corps headquarters as well as Naval Air Forces Pacific and Atlantic Commands and Marine Forces Atlantic Command to assess the impact of the Plan on day-to-day operations. We assessed the reliability of pertinent data about aircraft capability, force structure, and military operations contained in the contractor’s study that supports the Plan by (1) reviewing with contractor officials the methodology used for the analysis; (2) reviewing the 2001 Quadrennial Defense Review, prior GAO reports, and service procurement and aircraft performance documents; and (3) conducting discussions with Navy and Marine Corps officials. We concluded that the data were sufficiently reliable for the purpose of this report. We performed our review from July 2003 through May 2004 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Navy; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested congressional committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4402 if you or your staff have any questions concerning this report. Major contributors to this report are included in appendix I. In addition to those named above, Willie J. Cheely, Jr.; Kelly Baumgartner; W. William Russell, IV; Michael T. Dice; Cheryl A. Weissman; Katherine S. Lenane; and Monica L. Wolford also made significant contributions to this report.
The Fiscal Year 2004 Defense Appropriations Act and the Senate Report for the 2004 National Defense Authorization Act mandated that GAO examine the Navy and Marine Corps' Tactical Aviation Integration Plan. In response to these mandates, this report addresses (1) how Navy and Marine Corps operational concepts, force structure, and procurement costs change; (2) the methodology and assumptions the services used to analyze the potential for integrating the forces; (3) the analytical process the services used to decide which reserve squadrons to decommission; and (4) other factors that might affect implementation of the Plan. Concerns about the affordability of their prior tactical aviation procurement plan prompted the Navy and Marine Corps to agree to a new Tactical Aviation Integration Plan. Under this Plan, the two services will perform their missions using fewer units of more capable aircraft, reducing total program aircraft procurement costs by $28 billion over the next 18 years. Operationally, the Navy and Marine Corps will increase the extent to which their tactical aviation units are used as a combined force to accomplish both services' missions. The Plan also reduces the services' tactical aviation force structure by decommissioning five squadrons, thus decreasing the number of Navy and Marine Corps squadrons to 59, and reduces the total number of aircraft they plan to buy from 1,637 to 1,140. The Department of the Navy based its conclusion that it could meet the Navy and Marine Corps' operational requirements with a smaller force primarily on the findings of a contractor study that evaluated the relative capability of different tactical aviation force structures. GAO's review of the contractor's methodology and assumptions about force structure, budget resources, and management efficiencies suggests that much of the analysis is reasonable. However, GAO noted that some limitations--including the lack of analytical support for reducing the number of backup aircraft--increase the risk that the smaller force will be less effective than expected. The Navy and Marine Corps each followed a different process in selecting a reserve squadron to decommission. The Marine Corps conducted a clear and well-documented analysis of the operational, fiscal, logistical, and personnel impacts of different options that appears to provide decision makers with a reasonable basis for selecting the Reserve unit to decommission. By contrast, the Navy selected its reserve squadron without clear criteria or a documented, comprehensive analysis, and thus with less transparency in its process. Two other factors that might affect successful implementation of the Plan are the potential unavailability of readiness funding and delays in fielding the new force. Although the contractor recommended that the Navy identify future readiness funding requirements, to date, the Navy has not conducted this analysis. In addition, the Department of the Navy is experiencing engineering and weight problems in developing the Joint Strike Fighter that will cause it to be delayed until 2013, at least 1 year later than had been projected, and other high risks to the program remain. Because these delays will cause the Navy to operate legacy aircraft longer than expected, they might also increase operations and maintenance costs, making an analysis of future readiness funding requirements even more important.
Federal Medigap standards were first established by section 507 of the Social Security Disability Amendments of 1980 (P.L. 96-265), which added section 1882 to the Social Security Act (42 U.S.C. 1395ss). Section 1882 set forth federal requirements that insurers must meet for marketing policies as supplements to Medicare and established criminal penalties for marketing abuses. As originally enacted, one of the requirements was that policies had to be expected to return specified portions of premiums as benefits—60 percent for policies sold to individuals and 75 percent for those sold to groups. Insurers were considered to have met the loss ratio requirement if their actuarial estimates showed that their policies were expected to do so. Actual loss ratios did not have to be compared with the loss ratio standards. At that time, insurers generally reported loss ratio data to the states in aggregate—that is, a combined total for all policies sold in the state. If states had wanted to verify compliance for particular policies, this reporting method would not have allowed them to do so. In 1986, we reported that section 1882 had helped protect against substandard and overpriced policies. We also pointed out the problem of insurers reporting aggregate loss ratio data and that actual loss ratios were not compared with the standards to verify compliance. Section 221 of the Medicare Catastrophic Coverage Act of 1988 (P.L. 100-360) amended section 1882 to require that insurers report their actual loss ratios to the states. The Omnibus Budget Reconciliation Act (OBRA) of 1990 (P.L. 101-508) essentially required that Medigap policies be standardized and allowed a maximum of 10 different benefit packages. The act also increased the loss ratio standard for individual policies to 65 percent for policies sold or issued after November 5, 1991. The Social Security Amendments of 1994 (P.L. 103-432) extended the 65-percent standard, effective beginning in 1997, to policies issued before November 6, 1991. The 1990 amendments also required that insurers pay refunds or provide credits to policyholders when Medigap policies fail to meet loss ratio standards. As implemented in the NAIC model law and regulations, a cumulative 65-percent loss ratio for individual policies (75 percent for group policies) must be met over the life of a policy, which NAIC assumed to be 15 years. NAIC’s methodology compares a policy’s actual loss ratio for a given year with a benchmark (or target) ratio for that year, calculated using cumulative premium and claim experience. If a policy’s actual loss ratio does not meet the benchmark ratio, the insurer must complete further calculations to determine whether a refund or credit is necessary to bring the loss ratio up to standard. Loss ratios on a calendar-year basis for an individual policy are expected to be 40 percent the first year, 55 percent the second year, and 65 percent the third year. Annual loss ratios would continue to increase until they reach 77 percent by the 12th year and remain at that level for the remainder of the 15-year period. This approach anticipates that the higher loss ratios in the third and later years would offset the lower loss ratios in the first 2 years. The methodology is designed to ensure a cumulative 65-percent loss ratio for individual policies by the end of a 15-year period. This same approach is used to ensure a 75-percent loss ratio for group policies by the end of a 15-year period.
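To make the benchmark mechanics concrete, the sketch below computes a premium-weighted cumulative benchmark from the calendar-year targets described above. It is a simplified illustration only: the report cites targets of 40, 55, and 65 percent for years 1 through 3 and 77 percent from year 12 onward, so the year 4 through 11 values here are assumed interpolations rather than the published NAIC figures, and the actual NAIC worksheets incorporate assumptions about lapse rates and other factors not shown.

```python
# Simplified sketch of the NAIC benchmark-ratio calculation for an
# individual Medigap policy. Only the year 1-3 and year 12+ targets are
# taken from this report; the intermediate years are assumed values.
CITED_TARGETS = {1: 0.40, 2: 0.55, 3: 0.65}

def annual_target(year):
    if year in CITED_TARGETS:
        return CITED_TARGETS[year]
    if year >= 12:
        return 0.77
    # Assumed linear ramp between the year-3 and year-12 targets.
    return 0.65 + (0.77 - 0.65) * (year - 3) / 9

def cumulative_benchmark(premiums_by_year):
    """Premium-weighted target loss ratio over a policy's experience to date."""
    expected_claims = sum(premium * annual_target(year)
                          for year, premium in enumerate(premiums_by_year, start=1))
    return expected_claims / sum(premiums_by_year)

# Three years of experience: the benchmark is well under 65 percent because
# the early-year targets are low; later years pull the cumulative ratio up.
print(f"{cumulative_benchmark([1000, 950, 900]):.3f}")  # about 0.529
```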
NAIC’s methodology for determining whether a refund or credit is required also includes a tolerance adjustment based on the number of policyholders and the length of time they have held their policies. A policy loss ratio based on less than 500 life-years of exposure since inception is considered not credible, and no refund or credit is required. After 10,000 life-years have accumulated, a policy is considered fully credible. According to an NAIC actuarial advisory group and several insurance regulators, this tolerance adjustment helps ensure that refunds or credits will not occur so often in the early years of policy experience that large premium increases will result in later years. An important factor in evaluating loss ratios is a policy’s credibility—that is, whether enough people have been covered under the policy to make the loss ratio meaningful. We used two measures of credibility. First, to make the data in this report comparable with the data in our earlier reports, we used a threshold of $150,000 in premiums in a given year in a state. Information in this report on loss ratios that includes years before 1994 uses this measure. Second, we used a modification of NAIC’s refund methodology, which, as discussed above, measures credibility by the number of policyholders and the number of years they have held their policies. We used this method to assess whether policies met the applicable loss ratio standards in 1994 and 1995. Another factor considered when interpreting loss ratios is the length of time a policy has been in force. The refund methodology for 1994 and 1995 indicates that Medigap loss ratios are expected to meet the federal standard after 3 years, which is the criterion we used. In the 1988-95 period, the Medigap insurance market grew from about $7 billion to over $12 billion (see fig. 1), but most of that growth had occurred by 1992. From 1988 through 1992, earned premiums increased by more than 50 percent; from 1992 through 1995, growth leveled off with premiums averaging around $12 billion. In 1995, 352 insurance companies sold Medigap policies and collected premiums totaling $12.5 billion, with 33 companies each reporting premiums of over $100 million and accounting for almost 75 percent of the total (see app. II). The Prudential Insurance Company of America, which underwrote the policies sold through the American Association of Retired Persons (AARP), was the largest supplier of Medigap insurance with 23 percent of the market. The average Medigap loss ratio for all policies was about the same in 1995 (86 percent) as it was in 1988 (84 percent), but average loss ratios exhibited considerable variation, increasing in some years and decreasing in others. For example, average loss ratios increased in 1990 and 1991, followed by 2 years of declining ratios and then 2 years of increases. For the 8-year period, the average loss ratio was 81 percent, with a low of 76 percent in 1993 and a high of 86 percent in 1995. The average loss ratios for group policies have varied substantially, ranging from 80 percent in 1989 to 95 percent in 1995, while those for individual policies during the period have been more stable (see fig. 2). In 1995, states differed considerably in average loss ratios. Insurers doing business in Michigan had the highest average loss ratio (107 percent), followed by the District of Columbia (102 percent), Massachusetts (99 percent), Pennsylvania (97 percent), and Maine (96 percent).
The five states with the lowest average loss ratios were Nebraska (73 percent), Minnesota (75 percent), Oregon (76 percent), Delaware (76 percent), and Montana (76 percent). Appendix III lists average loss ratios by state. Moreover, loss ratios varied among insurers within a state. In Michigan, for example, average loss ratios for insurers with premiums over $150,000 ranged from 59.3 percent to 132.7 percent and, in Montana, from 29.0 percent to 108.8 percent. In 1995, the average loss ratios for the 10 standardized Medigap plans—from the basic Plan A to the top-of-the-line Plan J—ranged from 73.8 percent for Plan G to 102.3 percent for Plan A. The most popular of the plans, Plan F, is more costly and returns less to policyholders in benefits than the nearly identical Plan C. Plan F differs from Plan C only in its coverage of excess physicians’ charges—the amounts doctors may bill patients above Medicare’s allowed amount, which the law limits to no more than 15 percent. In 1995, Plan F had a nationwide average loss ratio of 75.5 percent; Plan C had an average loss ratio of 89.3 percent. Medicare data show that for over 95 percent of claims, physicians agree to accept Medicare’s allowed amount, so insurers seldom have to pay for excess charges. Moreover, Plan F had an average loss ratio in 1995 lower than all other plans except Plan G. Appendix IV lists the average loss ratio experience for all 10 Medigap plans in 1995 by state. In 1994 and 1995, most Medigap policies that were at least 3 years old with premiums totaling $150,000 or more in the applicable state met the federal loss ratio standards. Premiums on credible policies issued 3 or more years earlier that failed to meet the minimum federal loss ratio standards increased from $320 million in 1991 to $1.2 billion in 1993. However, premiums for policies that failed to meet the standards decreased to $937 million in 1994 and to $522 million in 1995 (see fig. 3). Using information not previously available in the NAIC loss ratio data tape, we incorporated features of the refund methodology to evaluate the 1994 and 1995 loss ratio data for policies that were at least 3 years old. To estimate the number of policies and associated premiums with loss ratios below standards, we measured credibility using the number of covered lives by policy reported to NAIC. Under the refund methodology, experience of less than 500 life-years is not considered credible, but 10,000 life-years is considered fully credible. A tolerance adjustment is added to the actual loss ratio on a sliding scale for life-years falling between those two numbers. In 1994, using covered lives as the measure of credibility, the actual or adjusted loss ratios of 256 of 2,670 policies did not meet the minimum loss ratio standards, and insurers earned $448 million in premiums on those policies. In 1995, the number of policies not meeting the standards was 141, or 4 percent of the total, and the associated premiums were $203 million. Appendixes V and VI identify the policies with loss ratios below the applicable standard, along with their premiums, benefit payments, and loss ratios. Appendix V lists individual policies, and appendix VI lists group policies. In both 1994 and 1995, more than 10,000 different Medigap policies, virtually all of which were standardized policies, were subject to the OBRA 1990 refund provision and were required to send refund calculation forms to state insurance commissioners.
In those 2 years, a total of almost 14,000 policies had loss ratios below 65 percent for individual policies or below 75 percent for group policies. However, we identified only two policies that made refunds in 1995. One was a standardized policy sold in Iowa that refunded a total of about $19,000 to 148 policyholders. The other was a prestandardized plan sold in Virginia that refunded a total of about $2,000 to 76 policyholders. In follow-up contacts with 15 selected states, we identified only one policy, sold in Illinois, that refunded a total of about $123,000 to 3,075 policyholders for 1996. To determine why policies with loss ratios below the applicable standard in 1994 or 1995 did not have to make refunds, we selected a random sample of these policies with earned premiums under $1 million and asked the states, the District of Columbia, and Puerto Rico to send us copies of the refund calculation forms for the sample and for all policies with premiums over $1 million. All except Michigan responded. From the information on these forms, we determined the reasons refunds were not required and projected the results to the universe (see table 1). About 97 percent of the policies below the loss ratio standards had earned premiums of less than $1 million. Refunds were not required for most policies because their experience, based on less than 500 life-years since inception, was not considered credible. Most of the policies with earned premiums of $1 million or more did not have to pay refunds because, although their loss ratios in 1994 or 1995 were below standards, their cumulative loss ratio since inception was greater than the benchmark ratio for the year in question. The benchmark ratios were designed with certain assumptions about policy lapse rates and other factors to ensure that the cumulative loss ratio over 15 years was at least equal to the federal loss ratio standards. Because benefit payments are generally low in the first years, when policyholders are younger and healthier, and increase as they age, benchmark ratios are significantly below the loss ratio standards at first and gradually increase over the years. Because all of the policies subject to the refund provision in 1994 and 1995 had been issued within the previous 3 or 4 years, they had benchmark ratios below the loss ratio standards. In fact, in 1994 and 1995, the highest benchmark ratio for any policy was 58 percent; about 9 out of 10 policies had benchmark ratios under 50 percent. Millions of Medicare beneficiaries purchased Medigap policies, spending over $12 billion in 1995. Federal loss ratio standards and refund requirements are the main means of ensuring that Medigap policyholders receive value for their premium dollars. Medigap policies representing most of the premium dollars had loss ratios in 1994 and 1995 that were higher than federal law requires. Most policies with loss ratios below standards in 1994 and 1995 were not considered credible and, thus, were not subject to the refund provision. The amount of premiums paid for policies with loss ratios below standards has declined substantially since 1993, the last year before the refund provision became effective. The primary reason for requiring refunds and credits is to give insurers incentives to meet loss ratio standards and thereby avoid possibly unfavorable public relations consequences. The relatively low amount of premiums for policies with loss ratios below the standards indicates that the incentive is working.
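The two screens that excused most of these policies from refunds, the life-year credibility rule and the comparison of the cumulative loss ratio with the benchmark ratio, can be combined in a short sketch. It is illustrative only: the report states that a sliding-scale tolerance is added between 500 and 10,000 life-years but not the scale's shape, so the linear form and the 10-percentage-point starting tolerance below are assumptions.

```python
def tolerance_adjustment(life_years):
    """Sliding-scale credibility tolerance (assumed linear shape and size)."""
    if life_years >= 10_000:
        return 0.0  # fully credible: no tolerance added
    # Assumed: a 10-percentage-point tolerance at 500 life-years,
    # shrinking linearly to zero at 10,000 life-years.
    return 0.10 * (10_000 - life_years) / (10_000 - 500)

def refund_required(actual_ratio, benchmark_ratio, life_years):
    if life_years < 500:
        return False  # experience not credible; no refund or credit required
    adjusted_ratio = actual_ratio + tolerance_adjustment(life_years)
    return adjusted_ratio < benchmark_ratio

# A policy with a 52 percent actual loss ratio, a 58 percent benchmark, and
# 2,000 life-years: the adjusted ratio (about 60 percent) clears the
# benchmark, so no refund is required.
print(refund_required(0.52, 0.58, 2_000))  # False
```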
In commenting on a draft of this report, NAIC officials offered some technical suggestions, which we incorporated where appropriate. We are sending copies of this report to the governor of each state, NAIC, and interested congressional committees. We will make copies available to others on request. If you have any questions about this report, please call me at (202) 512-7114. Other major contributors to this report are listed in appendix VII. We obtained from the National Association of Insurance Commissioners (NAIC) its computerized database of insurance companies’ Medigap annual experience exhibits for 1994 and 1995, the latest available when we began our work. In 1994, earned premiums totaled $12.7 billion for all policies, and, in 1995, earned premiums totaled $12.5 billion. In the databases, we identified policies issued after 1991 and therefore subject to the Omnibus Budget Reconciliation Act of 1990 refund provision. We then identified those policies with loss ratios below the federal loss ratio standards. These policies had earned premiums of about $1.3 billion in 1994 and $0.7 billion in 1995. We did not test the accuracy of the 1994 database, but we did test the accuracy of the 1995 database and found it to be accurate. Moreover, our prior work has found these databases to be accurate. To determine why policies with loss ratios below standards were not required to pay refunds or provide credits, we randomly selected a sample of policies with earned premiums of less than $1 million and selected all those with premiums of $1 million or more from the NAIC 1994 and 1995 databases. We asked the insurance commissioners of the states, the District of Columbia, and Puerto Rico to provide us with copies of all refund calculation forms that insurance companies filed with them for the related policies. All except Michigan responded. However, for about one-third of the policies, we received no refund calculation forms because states could not locate or did not receive the forms or the forms had been purged from the files. The data in the columns of table 1 covering policies with earned premiums under $1 million represent projections of our sample to the universe of policies in NAIC’s databases for 1994 and 1995. Each estimate has a sampling error associated with it. The size of the sampling error reflects the precision of the estimate: the smaller the sampling error, the more precise the estimate. We computed sampling errors for table 1 at the 95-percent confidence level. This means that the chances are about 95 out of 100 that the actual number being estimated falls within the range defined by our estimate, plus or minus the sampling error. Table I.1 shows the sampling errors for table 1. [Appendix II table, flattened in extraction: insurers with 1995 Medigap premiums of over $100 million, led by the Prudential Insurance Company of America and including Bankers Life & Casualty Company, Empire Blue Cross & Blue Shield, Mutual of Omaha Insurance Company, and numerous Blue Cross & Blue Shield plans; the premium figures are not recoverable.]
[Appendixes III through VI tables, garbled in extraction: average loss ratios by state, average loss ratios for the 10 standardized plans, and the individual (appendix V) and group (appendix VI) policies with actual or adjusted loss ratios below the applicable standards, along with their premiums, benefit payments, and loss ratios; the policy form identifiers and figures are not recoverable. A note to the state tables explains that some states had alternative Medigap standardization programs in effect before the federal legislation standardizing Medigap was enacted and have waivers from the federal requirement.] Major contributors to this report: Thomas G. Dowdal, Assistant Director, (202) 512-6588; William A. Hamilton, Evaluator-in-Charge; Michael Piskai; and Wayne J. Turowski.
Pursuant to a legislative requirement, GAO reviewed insurers' compliance with Medigap loss ratio standards, focusing on: (1) the overall Medigap market; (2) which Medigap policies had loss ratios below the standards in 1994 and 1995; and (3) whether policies below the standards resulted in refunds or credits and, if not, why not. GAO noted that: (1) from 1988 through 1995, the Medigap insurance market grew from $7 billion to over $12 billion, with most of the growth occurring before 1993; (2) during this 8-year period, loss ratios averaged 81 percent; (3) in 1994 and 1995, over 90 percent of the policies in force for 3 years or more, representing most of the premium dollars, met loss ratio standards; (4) premiums for policies with loss ratios below standards totaled $448 million in 1994 and $203 million in 1995; (5) loss ratios varied substantially among states, among different benefit packages, and among insurers; (6) although thousands of individual policy forms had loss ratios below standards, no refunds were required in 1994 and only two were required in 1995; (7) the refund provision did not apply because most of these policies' loss experience was based on too few policyholders to be considered credible under the National Association of Insurance Commissioners' (NAIC) refund calculation methodology; (8) a number of policies had a cumulative loss ratio--the factor used to measure compliance--above that required under NAIC's refund calculation method; and (9) a primary reason for requiring refunds was to give insurers an incentive for meeting loss ratio standards, and the high proportion of premium dollars for policies doing so indicates the incentive is working.
The Clean Air Act Amendments of 1990 required EPA to issue a series of new or stricter regulations to address some of the more serious air pollution problems, including acid rain, toxic air pollutants, motor vehicle emissions, and stratospheric ozone depletion. In view of the estimated billions of dollars in annual costs to implement these and other requirements, the Congress required EPA to report on the benefits and costs of the agency’s regulatory actions under the 1990 amendments, as well as under previous amendments to the act. Specifically, section 812 of the 1990 amendments required EPA to (1) conduct an analysis of the overall impacts of the Clean Air Act on public health, the economy, and the environment; (2) report on the estimated benefits and costs of the regulations implemented under clean air legislation enacted prior to 1990; and (3) biennially update its estimates of the benefits and costs of the Clean Air Act, beginning in November 1992. In May 1996, EPA drafted a report that examined the benefits and costs of the 1970 and 1977 amendments to the act. EPA is currently in the process of compiling its first prospective study evaluating the benefits and costs of the 1990 amendments. Section 812 also required GAO to report on the benefits and costs of the regulations issued to meet the requirements of the 1990 amendments. In a February 1994 report, we described the methodologies that EPA had used and the progress that the agency was making. In addition, since 1971 a series of executive orders and directives by OMB has required EPA and other federal agencies to consider the benefits and costs associated with individual regulations. In February 1981, President Reagan issued Executive Order 12291, which required federal agencies, including EPA, to prepare RIAs that identify the benefits, costs, and alternatives for all proposed and final major rules that the agencies issued. Subsequently, in September 1993, President Clinton issued Executive Order 12866, replacing Executive Order 12291 and directing federal agencies, including EPA, to assess benefits, costs, and alternatives for all economically significant regulatory actions. OMB and EPA have developed guidelines for conducting the benefit-cost analyses required by these executive orders. While describing the components to be included in these analyses, the guidance affords EPA’s program offices considerable flexibility in preparing RIAs. Specifically, EPA’s guidance stipulates that the level and precision of analyses in RIAs depend on the quality of the data, scientific understanding of the problem to be addressed through regulation, resource constraints, and the specific requirements of the authorizing legislation. This guidance also states that the amount of information and sophistication required in benefit-cost analyses depend on the importance and complexity of the issue being considered. The recently enacted Small Business Regulatory Enforcement Fairness Act of 1996 provides that before a rule can take effect, the agency preparing it must submit to GAO and make available to the Congress, among other things, a complete copy of any cost-benefit analysis of the rule. This act also provides for congressional review of major rules issued by federal agencies, including EPA, and the potential disapproval of such rules by the enactment of a joint resolution.
Eight of the RIAs that we examined did not clearly identify the values assigned to key economic assumptions, such as the discount rate and value of human life, used to assess the economic viability of the regulations. Furthermore, we found that in the RIAs that identified key economic assumptions, the rationale for the values used was not always explained. While EPA’s guidance suggests that RIAs account for uncertainties in such values by conducting sensitivity analyses that show how benefit-cost estimates vary depending on what values are assumed, many RIAs used only a single value and did not always provide a clear explanation. Appendix I summarizes the results of our examination of the economic assumptions used in the 23 RIAs. Five of the 23 RIAs did not indicate whether the estimated future benefits and costs were discounted. The discount rate can have a significant effect on the estimated impact of an environmental regulation. For example, most environmental regulations impose immediate costs, while the benefits are realized in the future. In such a case, a lower discount rate gives greater weight to future benefits, thus enhancing the regulation’s perceived value. Conversely, using a higher discount rate makes benefits that occur in the future appear less valuable. Not clearly indicating the discount rate used in benefit-cost analyses makes it more difficult for decisionmakers to assess the desirability of a proposed regulation. EPA’s guidelines recognize that there may be uncertainties about which discount rates should be used. Moreover, EPA’s Director of the Office of Economy and Environment stated that there are uncertainties associated with choosing discount rates for conducting benefit-cost analyses. As a result, EPA’s guidance suggests the use of sensitivity analyses to show how benefit and cost estimates are affected by different discount rates. Of the 18 RIAs that clearly identified the discount rates used, 5 showed the sensitivity of their estimates to different rates ranging from 2 to 10 percent. Thirteen of the RIAs used a single rate. Although 14 RIAs indicated that the reduction in mortality was an expected benefit, five did not indicate the value placed on a human life. Of the nine RIAs that indicated the value placed on a human life, eight included sensitivity analyses to indicate how their benefit estimates were affected by different values assumed for a life. Assigning a relatively high value for human life can have a significant positive effect on estimated benefits. However, for the nine RIAs that assumed a value for a human life, the ranges used were not always explained. For example, one RIA assumed a value of human life that ranged from $1.6 million to $8.5 million, and another, prepared in the same year, assumed a value of human life that ranged from $3 million to $12 million. In both instances, the RIAs did not provide a clear explanation of the rationale for the values that were used. Because of the agency’s concern about the use of different values for key assumptions and the extent to which sensitivity analyses were used to account for uncertainties about the appropriate values for these assumptions, EPA recently formed an Economic Analysis Consistency Task Group under the direction of the Regulatory Policy Council to develop information on the causes of inconsistencies in the agency’s RIAs. The Council is chaired by EPA’s Deputy Administrator.
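The role a discount rate plays in these estimates, and why sensitivity analyses across the 2-to-10-percent range matter, can be shown with a simple present-value calculation. The benefit stream below is hypothetical, chosen only to illustrate the effect; it is not drawn from any of the RIAs.

```python
# Present value of a hypothetical stream of $100 million in annual benefits
# realized in years 1-20, discounted at rates spanning the 2- to 10-percent
# range cited in the RIAs. Figures are in millions of dollars.
def present_value(annual_benefit, rate, years):
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

for rate in (0.02, 0.07, 0.10):
    print(f"rate {rate:.0%}: PV = {present_value(100.0, rate, 20):,.0f}")
# rate 2%: PV = 1,635   -- low rates give future benefits more weight
# rate 7%: PV = 1,059
# rate 10%: PV = 851    -- high rates make the same benefits look smaller
```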
In addition, EPA officials explained that the authorizing legislation for some environmental regulations is often a key determinant in the thoroughness of the agency’s benefit-cost analyses. For example, they said that health-based national ambient air quality standards issued by the agency are not based on costs or other economic considerations. However, costs may be considered when developing and implementing control strategies for these standards. Although benefit-cost analyses are completed for these regulations, they do not directly affect the regulatory decision-making process. Therefore, the level of analysis and the number of alternatives analyzed could be more limited. Time constraints imposed by statutory and court-ordered deadlines and shortages of resources and staff also restrict EPA’s ability to conduct comprehensive benefit-cost analyses. Given the limited resources and staff available for completing economic analyses, EPA officials stated that they assign a higher priority to benefit-cost analyses supporting regulations facing imminent deadlines, regulations expected to have greater economic impacts on society, and those for which the economic analysis has the highest potential to affect the regulatory alternative selected. OMB’s and EPA’s guidelines encourage EPA to quantify, to the extent feasible, all potential regulatory benefits and costs in monetary terms, but the guidance recognizes that assigning reliable monetary values to some benefits may be difficult, if not impossible. When benefits and costs cannot be described in monetary terms, the guidance recommends that RIAs include quantitative and qualitative information on the benefits and costs associated with the proposed regulations. The benefits mentioned in the guidance include reduced mortality, reduced morbidity, improved agricultural production, reduced damage to buildings and structures, improved recreational environments, improved aesthetics, and improvements in ecosystems. Our review of the 23 RIAs indicated that while all of them assigned dollar values to the costs of proposed regulations, only 11 assigned dollar values to estimated benefits. EPA acknowledges that assigning monetary values to projected benefits is more difficult than assigning values to the costs of regulatory actions. According to EPA officials, the uncertainty of the science and inadequacy of other data often prevent the agency from estimating dollar benefits. For example, EPA’s guidance recognizes that assigning a monetary value to reduced health risks, a potentially significant benefit, is difficult because of uncertainties about the precise relationship between different pollution levels and corresponding health effects and the appropriate monetary values to be assigned to reductions in mortality and reduced risks of individuals’ experiencing serious illnesses. Estimating the monetary value of improvements in ecosystems, another potentially significant benefit, is even more complex. Although some RIAs did not assign dollar values to benefits, all 23 of the RIAs we examined contained other quantitative or qualitative information on the benefits of the proposed regulations. When benefits cannot be assigned dollar values, quantifying the benefits, such as a reduced incidence of deaths and illnesses, helps clarify the impact of proposed regulations.
For example, an RIA for the National Recycling and Emissions Reduction Program’s regulation estimated that 76,500 fewer cases of skin cancer and 1,400 fewer deaths from skin cancer would occur because of the regulation. Qualitative information is also helpful to decisionmakers because it gives them a more complete understanding of the overall benefits of regulations. Nineteen of the RIAs discussed qualitative benefits, such as increased crop yields, improvements in ecosystems, and reduced damage to buildings and other structures. Recognizing the difficulties associated with assigning dollar values to benefits, EPA’s guidelines state that cost-effectiveness analyses can assist decisionmakers in comparing the desirability of various regulatory alternatives. We found that 20 of the RIAs we examined included the results of cost-effectiveness analyses, such as the cost per ton of reduced emissions. OMB’s and EPA’s guidelines require EPA to identify and discuss in RIAs the regulatory and nonregulatory alternatives for mitigating or eliminating the environmental problems being addressed and to provide the reasoning for selecting the proposed regulatory action over other alternatives. While EPA’s guidance recommends that RIAs consider four major types of alternatives—voluntary actions, market-oriented approaches, regulatory approaches within the scope of the authorizing legislation, and regulatory actions initiated through other legislative authority—it states that the number and choice of alternatives to be selected for detailed benefit-cost analyses are a matter of judgment. While it was not always clear how many alternatives or what types of alternatives were considered, our examination of the 23 RIAs indicated that 6 of them compared a single alternative, which was the regulatory action being proposed, to the baseline, which was the situation likely to occur in the absence of the regulation—the status quo. All other RIAs compared two or more alternatives to the baseline. Figure 1 shows the results of our examination of the number of alternatives that EPA considered in the 23 RIAs. A major goal of RIAs is to develop and organize information on benefits and costs to clarify trade-offs among alternatives. EPA’s guidance states that RIAs should provide decisionmakers with a comprehensive assessment of the implications of alternatives. EPA officials acknowledged that it is not always clear in the RIAs which alternatives were actually analyzed. They stated that some alternatives are excluded before the benefit-cost analyses are performed for noneconomic reasons, such as statutory language that precludes EPA from using certain approaches. In our 1984 report, we recommended that future RIAs prominently include executive summaries that (1) clearly recognize all benefits and costs, even those that cannot be quantified; (2) identify a range of values for benefits and costs subject to uncertainty, as well as the sources of uncertainty; and (3) compare all feasible alternatives. While 13 of the 23 RIAs that we examined included executive summaries, some of these RIAs only briefly discussed the types of information that we recommended they contain. For example, the executive summary for the RIA on the regulation for national emissions standards for coke ovens contained a limited discussion of the uncertainties underlying the analysis, and the executive summary for the RIA on the operating permits program’s regulation included only two sentences on the three alternatives that EPA considered.
In contrast, the executive summary for the RIA supporting the regulation on phasing out ozone-depleting chemicals presented a relatively thorough discussion of the results of the benefit-cost analysis. For example, it included a range of cost estimates, qualitative and quantitative benefit estimates, discussions of scientific and economic uncertainties, and estimated benefits and costs for baseline conditions and three alternatives. The prominent display of this type of information in the executive summary makes it easier for decisionmakers to locate the information they need without searching through hundreds of pages in the body of the RIAs. EPA officials acknowledged that some of the RIAs did not include executive summaries and agreed that executive summaries that include information such as descriptions of the difficulties in assigning dollar values to benefits, uncertainties of the data, and regulatory alternatives are useful. However, they stated that time constraints and limited resources and staff often determine whether they prepare executive summaries and the amount of detail that is included when summaries are done. We believe that improvements in the presentation and clarity of information contained in EPA’s RIAs would enhance their value to both agency decisionmakers and the Congress in assessing the benefits and costs of proposed regulations. EPA’s guidelines state that the goal of RIAs is to provide decisionmakers with well-organized, easily understood information on the benefits and costs of major regulations and to provide decisionmakers with a comprehensive assessment of the implications of alternative regulatory actions. However, many of the RIAs we reviewed did not clearly identify key economic assumptions, the rationale for using these assumptions, the degree of uncertainty associated with both the data and the assumptions used, or the alternatives considered. Not clearly displaying this information makes it difficult for decisionmakers and the Congress to appreciate the range and significance of the benefit and cost estimates presented in these documents. To help EPA decisionmakers and the Congress better understand the implications of proposed regulatory actions, we recommend that the EPA Administrator ensure that RIAs identify the (1) value, or range of values, assigned to key assumptions, along with the rationale for the values selected; (2) sensitivity of benefit and cost estimates when there are major sources of uncertainty; and (3) alternatives considered, including those not subjected to benefit-cost analyses. We provided a draft of this report to EPA and OMB for review and comment. We obtained comments from EPA officials, including the Director, Office of Economy and Environment, and representatives of the Office of Air and Radiation. EPA officials stated that the information in the report was accurate and agreed with its recommendations. They provided specific comments on a number of issues, which we have incorporated into the report, including a clarification of the objectives of the Economic Analysis Consistency Task Group. According to EPA officials, this group is in the process of identifying key issues associated with benefit-cost analyses that offer the potential for greater consistency in the agency’s RIAs. Among the issues being considered are the valuation of reductions in the risk of mortality, discount rates and baselines, intergenerational issues, and distribution effects.
Additionally, they emphasized that greater consistency in addressing key issues in the RIAs would enhance their usefulness for EPA’s decisionmakers. EPA views this as an ongoing process and anticipates that it will result in revisions to the agency’s guidelines for preparing economic analyses. OMB did not provide comments on the draft report. We conducted our work from February 1996 through February 1997 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is contained in appendix II. We are sending copies of this report to the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. Copies are also available to others on request. Please call me at (202) 512-4907 if you or your staff have any questions. Major contributors to this report are listed in appendix III. [Appendix I table, garbled in extraction: the discount rates (in percent) and values of life (in millions of dollars) used in the 23 RIAs, including two RIAs for the National Recycling and Emission Reduction Program and the RIA of Nitrogen Oxides Regulations—1993; the individual values are not recoverable. Table notes: the discount rates are real rates, which exclude the effects of inflation, and nine of the RIAs did not identify reduced mortality as a benefit associated with a proposed regulation, so assigning a monetary value for a human life was not applicable.] We examined 23 RIAs issued by the Office of Air and Radiation between November 1990, the effective date of the Clean Air Act Amendments of 1990, and December 1995. Eighteen of these RIAs supported regulations that were estimated to cost $100 million or more annually and therefore were considered economically significant. Five RIAs supported regulations that were considered major or significant by the Environmental Protection Agency (EPA) because of their potential impact on costs and prices for consumers, the international competitive position of U.S. firms, or the national energy strategy or because they were statutorily required by the 1990 amendments. To determine the number of the RIAs, we interviewed officials from EPA’s Office of Policy, Planning, and Evaluation and Office of Air and Radiation, which has four program offices—the offices of Air Quality Planning and Standards, Mobile Sources, Atmospheric Programs, and Radiation and Indoor Air—and examined EPA’s database of completed RIAs. Although EPA’s other program offices are also responsible for preparing RIAs, we limited our review to the RIAs prepared by the Office of Air and Radiation because this office is primarily responsible for implementing the requirements of the 1990 amendments. We reviewed Executive Orders 12866 and 12291 and EPA’s and the Office of Management and Budget’s guidance on the preparation of RIAs under these executive orders. From those documents, we identified the key components of RIAs and reviewed the 23 selected RIAs for their handling of these components. We also discussed issues affecting the clarity of RIAs with officials of the Office of Air and Radiation and the Office of Policy, Planning, and Evaluation. Major contributors to this report: William F. McGee, Assistant Director; Charles W. Bausell, Jr., Adviser; Harry C. Everett, Evaluator-in-Charge; Kellie O. Schachle, Evaluator; Kathryn D. Snavely, Evaluator; and Joseph L. Turlington, Evaluator.
GAO reviewed the Environmental Protection Agency's (EPA) 23 regulatory impact analyses (RIAs) supporting air quality regulations, focusing on whether the RIAs clearly describe: (1) key economic assumptions subject to uncertainty and the sensitivity of the results to these assumptions; (2) the extent to which benefits and costs were quantified for the proposed regulatory action; and (3) the extent to which alternative approaches were considered. GAO noted that: (1) while certain key economic assumptions, such as the discount rate and the value of human life, can have a significant impact on the results of benefit-cost analyses and are important to the regulations being proposed, eight of the RIAs did not identify one or more of these assumptions; (2) furthermore, in the RIAs that identified key economic assumptions, the rationale for the values used was not always explained; (3) for example, one RIA assumed a value of life that ranged from $1.6 million to $8.5 million and another, prepared in the same year, assumed a value of life that ranged from $3 million to $12 million; (4) in neither instance did the RIAs provide a clear explanation of the rationale for the values that were selected; (5) even though EPA's guidance suggests that RIAs account for any uncertainties in the values of key assumptions by conducting sensitivity analyses, which show how benefit and cost estimates vary depending on what values are assumed, 13 RIAs used only a single discount rate; (6) all 23 RIAs assigned dollar values to the estimated costs of proposed regulations; however, only 11 of the RIAs assigned dollar values to the estimated benefits; (7) according to EPA officials, assigning dollar values to potential benefits is difficult because of the uncertainty of scientific data and the lack of market data on some of these effects; (8) all of the RIAs contained some quantitative or qualitative information on the expected benefits, such as a reduced incidence of mortality and illness; (9) while the number and the types of alternatives considered in the 23 RIAs were not always clear, GAO's examination indicated that six of the RIAs compared a single alternative, which was the regulatory action being proposed, to the baseline, which was the situation likely to occur in the absence of regulation, the status quo; (10) the remainder compared two or more alternatives to the baseline; (11) resource constraints and the specific requirements of authorizing legislation, which sometimes limits EPA's options, were factors influencing the extent to which alternatives were considered; (12) ten of the RIAs GAO examined did not include executive summaries, even though these summaries can be a significant benefit to decisionmakers; and (13) EPA officials acknowledged that some of the RIAs did not include executive summaries and agreed that executive summaries, by providing easily accessible information, can be useful to decisionmakers.
OMB and Treasury have established a new Data Standards Committee that will be responsible for maintaining established standards and developing new data elements or data definitions that could affect more than one functional community (e.g., financial management, financial assistance, and procurement). Although this represents progress in responding to GAO’s prior recommendation, more remains to be done to establish a data governance structure that is consistent with leading practices to ensure the integrity of data standards over time. Several data governance models exist that could inform OMB’s and Treasury’s efforts. Many of these models promote a common set of key practices that include establishing clear policies and procedures for developing, managing, and enforcing data standards. A common set of key practices endorsed by standard-setting organizations recommends that data governance structures include the key practices shown in the text box below. We have shared these key practices with OMB and Treasury.
1. Developing and approving data standards.
2. Managing, controlling, monitoring, and enforcing consistent application of data standards.
3. Making decisions about changes to existing data standards and resolving conflicts related to the application of data standards.
4. Obtaining input from stakeholders and involving them in key decisions, as appropriate.
5. Delineating roles and responsibilities for decision-making and accountability, including roles and responsibilities for stakeholder input on key decisions.
A robust, institutionalized data governance structure is important to provide consistent data management during times of change and transition. The transition to a new administration presents risks to implementing the DATA Act, including a potential shift in priorities or loss of momentum. The lack of a robust and institutionalized data governance structure for managing efforts going forward presents additional risks regarding the ability of agencies to meet their statutory deadlines in the event that priorities shift over time. In June 2016, OMB directed the 24 CFO Act agencies to update the initial DATA Act implementation plans that they submitted in response to OMB’s May 2015 request. In reviewing the 24 CFO Act agencies’ August 2016 implementation plan updates, we found that 19 of the 24 agencies continue to face challenges implementing the DATA Act. We identified four overarching categories of challenges reported by these agencies that may impede their ability to effectively and efficiently implement the DATA Act: systems integration issues, lack of resources, evolving and complex reporting requirements, and inadequate guidance. To address these challenges, most agencies reported taking mitigating actions, such as making changes to internal policies and procedures, leveraging existing resources, utilizing external resources, and employing manual and temporary workarounds. However, the information reported by the CFO Act agencies in their implementation plan updates indicates that some agencies are at increased risk of not meeting the May 2017 reporting deadline because of these challenges. In addition, inspectors general for some agencies, such as the Departments of Labor and Housing and Urban Development, have issued readiness review reports indicating that their respective agencies are at risk of not meeting the reporting deadline. 
As discussed further below, the technical software requirements for agency reporting are still evolving, so any changes to the technical requirements over the next few months could also affect agencies’ ability to meet the reporting deadline. In August 2016, in response to a prior GAO recommendation, OMB established procedures for reviewing and using agency implementation plan updates that include procedures for identifying ongoing challenges. According to the procedures document, OMB will also be monitoring progress toward the statutory deadline and setting up meetings with any of the 24 CFO Act agencies that OMB identifies as being at risk of not meeting the implementation deadline. In May 2016, in response to a prior GAO recommendation, OMB released additional guidance on reporting financial and award information required under the act to address potential clarity, consistency, and quality issues with the definitions of standardized data elements. While OMB’s additional guidance addresses some of the limitations we have previously identified, it does not address all of the clarity issues. For example, we found that this policy guidance does not address the underlying source that can be used to verify the accuracy of non-financial procurement data or any source for data on assistance awards. In addition, in their implementation plan updates, 11 of the 24 CFO Act agencies reported ongoing challenges related to the timely issuance of, and ongoing changes to, OMB policy and Treasury guidance. Eight agencies reported that if policy or technical guidance continues to evolve or be delayed, the agencies’ ability to comply with the May 2017 reporting deadline could be affected. In August 2016, OMB released additional draft guidance on how agencies should report financial information involving specific transactions, such as intragovernmental transfers, and how agency senior accountable officials should provide quality assurances for submitted data. OMB staff told us that this most recent policy guidance was drafted in response to questions and concerns reported by agencies in their implementation plan updates and in meetings with senior OMB and Treasury officials intended to assess agency implementation status. OMB staff told us that they received feedback from 30 different agencies and reviewed over 200 comments on the draft guidance. The final guidance was issued on November 4, 2016. Although OMB has made some progress with these efforts, other data definitions still lack clarity, which needs to be addressed to ensure that agencies report consistent and comparable data. These challenges, as well as the challenges identified by agencies, underscore the need for OMB and Treasury to fully address our prior recommendation to provide agencies with additional guidance to address potential clarity issues. We also noted in our report being released today that the late release of the schema version 1.0 may pose risks for implementation delays at some agencies. The schema version 1.0, released by Treasury on April 29, 2016, is intended to standardize the way financial assistance awards, contracts, and other financial data will be collected and reported under the DATA Act. A key component of the reporting framework laid out in the schema version 1.0 is the DATA Act Broker, a system to standardize data formatting and assist reporting agencies in validating their data prior to submitting them to Treasury. 
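This statement does not describe the broker's internal implementation, but the general pattern, checking each submitted record against required data elements before acceptance, can be illustrated with a short sketch. The field names and validation rules below are hypothetical and should not be read as Treasury's actual schema or code.

# A minimal sketch of broker-style validation. Field names and rules are
# hypothetical; the actual DATA Act Broker applies Treasury's published
# schema, not these checks.

REQUIRED_FIELDS = {"agency_code", "award_id", "obligation_amount"}

def validate_record(record):
    """Return a list of validation errors for one submission record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append("missing fields: " + ", ".join(sorted(missing)))
    amount = record.get("obligation_amount")
    if amount is not None and not isinstance(amount, (int, float)):
        errors.append("obligation_amount must be numeric")
    return errors

submission = [
    {"agency_code": "020", "award_id": "A-123", "obligation_amount": 50000.0},
    {"agency_code": "020", "award_id": "A-124"},  # missing the amount field
]
for index, record in enumerate(submission):
    for error in validate_record(record):
        print(f"record {index}: {error}")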
Treasury has been iteratively testing and developing the broker using what Treasury describes as an agile development process. On September 30, 2016, Treasury updated its version of the broker, which it stated was fully capable of performing the key functions of extracting and validating agency data. Treasury officials told us that although they plan to continue to refine the broker to improve its functionality and overall user experience, they have no plans to alter these key functions. Agencies have reported making progress creating their data submissions and testing them in the broker, but work remains to be done before actual reporting can begin. Some agencies reported in their implementation plan updates that they have developed interim solutions for constructing these files until vendor-supplied software patches can be developed, tested, and configured to extract data and help agencies produce files that comply with DATA Act requirements. However, some of these interim solutions rely on manual processing, which can be burdensome and increase the risk for errors. The Section 5 Pilot is designed to develop recommendations to reduce the reporting burden for federal funds recipients. It has two primary focus areas: federal grants and federal contracts (procurements). OMB partnered with the Department of Health and Human Services to design and implement the grants portion of the pilot and with the General Services Administration to implement the procurement portion. Our review of the revised design for both the grants and procurement portions of the pilot found that they partly met each of the leading practices for effective pilot design (shown in the text box below).
1. Establish well-defined, appropriate, clear, and measurable objectives.
2. Clearly articulate an assessment methodology and data gathering strategy that addresses all components of the pilot program and includes key features of a sound plan.
3. Identify criteria or standards for identifying lessons about the pilot to inform decisions about scalability and whether, how, and when to integrate pilot activities into overall efforts.
4. Develop a detailed data-analysis plan to track the pilot program’s implementation and performance and evaluate the final results of the project and draw conclusions on whether, how, and when to integrate pilot activities into overall efforts.
5. Ensure appropriate two-way stakeholder communication and input at all stages of the pilot project, including design, implementation, data gathering, and assessment.
We also determined that the updated design for both portions of the Section 5 Pilot meets the statutory requirements for the pilot established under the DATA Act. Specifically, the DATA Act requires that the pilot program include the following design features: (1) collection of data during a 12-month reporting cycle; (2) a diverse group of federal award recipients and, to the extent practicable, recipients that receive federal awards from multiple programs across multiple agencies; and (3) a combination of federal contracts, grants, and subawards with an aggregate value between $1 billion and $2 billion. Although this represented significant progress since April 2016, we identified an area where further improvement is still needed. 
Specifically, the plan for the procurement portion of the pilot does not clearly describe and document how findings related to centralized certified payroll reporting will be more broadly applicable to the many other types of required procurement reporting. This is of particular concern given the diversity of federal procurement reporting requirements. Implementation of the grants portion of the pilot is currently under way, but the procurement portion is not scheduled to begin until early 2017. Department of Health and Human Services officials and OMB staff told us that they are recruiting participants and have begun administering data collection instruments for all components of the grants portion of the pilot. However, in late November 2016, OMB staff and General Services Administration officials informed us that they decided to delay further implementation of the procurement portion of the pilot in order to ensure that security procedures designed to protect personally identifiable information were in place. As a result, General Services Administration officials expect to be able to begin collecting data through the centralized reporting portal sometime between late January 2017 and late February 2017. OMB staff stated that despite the delay, they still plan on collecting 12 months of data through the procurement pilot as required by the act. In our report being released today, we made a new recommendation to OMB that would help ensure that the procurement portion of the Section 5 Pilot better reflects leading practices for effective pilot design. In commenting on the report being released today, OMB neither agreed nor disagreed with the recommendation, but provided an overview of its implementation efforts since passage of the DATA Act. These efforts include issuing three memorandums providing implementation guidance to federal agencies, finalizing 57 data standards for use on USASpending.gov, establishing the Data Standards Committee to develop and maintain standards for federal spending, and developing and executing the Section 5 Pilot. OMB also noted that, along with Treasury, it met with each of the 24 CFO Act agencies to discuss each agency’s implementation timeline, unique risks, and risk mitigation strategy and took action to address issues that may affect successful DATA Act implementation. According to OMB, as a result of these one-on-one meetings with agencies, OMB and Treasury learned that in spite of the challenges faced by the agencies, 19 of the 24 CFO Act agencies expect that they will fully meet the May 2017 deadline for DATA Act implementation. Treasury also provided comments on our report being released today. In its comments, Treasury provided an overview of the steps it has taken to implement the DATA Act’s requirements and assist agencies in meeting the requirements under the act, including OMB’s and Treasury’s issuance of uniform data standards, technical requirements, and implementation guidance. Treasury’s response also noted that as a result of the aggressive implementation timelines specified in the act and the complexity associated with linking hundreds of disconnected data elements across the federal government, it made the decision to use an iterative approach to provide incremental technical guidance to agencies. Treasury noted, among other things, that this iterative approach enabled agencies and other key stakeholders to provide feedback and contribute to improving the technical guidance and the public website. 
Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Paula M. Rascona at (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Michael LaForge (Assistant Director), Peter Del Toro (Assistant Director), Maria Belaval, Aaron Colsher, Kathleen Drennan, Thomas Hackney, Diane Morris, Katherine Morris, and Laura Pacheco.
DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016.
DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016.
DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016.
DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: April 19, 2016.
DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016.
DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016.
DATA Act: Progress Made in Initial Implementation but Challenges Must Be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015.
Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014.
Data Transparency: Oversight Needed to Address Underreporting and Inconsistencies on Federal Award Website. GAO-14-476. Washington, D.C.: June 30, 2014.
Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases. GAO-13-758. Washington, D.C.: September 12, 2013.
Government Transparency: Efforts to Improve Information on Federal Spending. GAO-12-913T. Washington, D.C.: July 18, 2012.
The DATA Act requires OMB and Treasury to establish government-wide data standards and requires federal agencies to begin reporting financial and payment data in accordance with these standards by May 2017. The act also requires establishment of a pilot program to develop recommendations for simplifying federal award reporting for grants and contracts. Consistent with GAO’s mandate under the act, the report being released today is one in a series that GAO will provide to the Congress. This statement discusses steps taken by OMB, Treasury, and federal agencies to implement the act and highlights key findings and the recommendation from GAO’s report (GAO-17-156). As part of this work, GAO reviewed DATA Act implementation plan updates and interviewed staff at OMB, Treasury, and other selected agencies. The Office of Management and Budget (OMB), the Department of the Treasury (Treasury), and federal agencies have taken steps to implement the Digital Accountability and Transparency Act of 2014 (DATA Act); however, more work is needed for effective implementation. Data governance and the transition to a new administration. OMB and Treasury have established a new Data Standards Committee responsible for maintaining established standards and developing new data elements or data definitions. Although this represents progress, more remains to be done to establish a data governance structure that is consistent with leading practices to ensure the integrity of data standards over time. The transition to a new administration presents risks to implementing the DATA Act, including a potential shift in priorities. The lack of a robust and institutionalized data governance structure for managing efforts going forward also presents risks regarding the ability of agencies to meet the statutory deadlines in the event that priorities shift over time. Implementation plan updates. According to the 24 Chief Financial Officers (CFO) Act agencies’ implementation plan updates, most of them continue to face challenges implementing the DATA Act. GAO identified four overarching categories of challenges reported by agencies that may impede their ability to effectively and efficiently implement the DATA Act: systems integration issues, lack of resources, evolving and complex reporting requirements, and inadequate guidance. To address these challenges, most agencies reported taking mitigating actions, such as making changes to internal policies and procedures, leveraging existing resources, and employing manual and temporary workarounds. However, the information reported by the CFO Act agencies in their implementation plan updates indicates that some agencies are at increased risk of not meeting the May 2017 reporting deadline because of these challenges. In addition, inspectors general for some agencies have issued readiness review reports indicating that their respective agencies are at risk of not meeting the reporting deadline. Operationalizing data standards and technical specifications for data reporting. In November 2016, OMB issued additional guidance on how agencies should report financial information involving specific transactions, such as intragovernmental transfers, and how agency senior accountable officials should provide quality assurances for submitted data. Although OMB has made some progress with these efforts, other data definitions still lack clarity, which needs to be addressed to ensure that agencies report consistent and comparable data. 
In September 2016, Treasury updated its version of the DATA Act broker, which it stated was fully capable of performing the key functions of extracting and validating agency data. Treasury officials stated that although they plan to continue to refine the broker to improve its functionality and overall user experience, they have no plans to alter key functions. Agencies have reported making progress creating their data submissions and testing them in the broker, but work remains before actual reporting can begin. Some agencies reported in their implementation plan updates that they developed plans for interim solutions, but some of these interim solutions rely on manual processing, which can be burdensome and increase the risk for errors. Pilot to reduce recipient reporting burden. GAO's review of the revised design for both the grants and procurement portions of the pilot found that they partly met each of the leading practices for effective pilot design. Although this represented significant progress since April 2016, GAO identified an area where further improvement is still needed. Specifically, the plan for the procurement portion of the pilot does not clearly describe and document how findings related to centralized certified payroll reporting will be applicable to other types of required procurement reporting. Further, in November 2016, this portion of the pilot was delayed to ensure that security procedures were in place to protect personally identifiable information. In addition to prior recommendations that GAO has made, in its most recent report, GAO recommends that OMB take steps to help ensure that the procurement portion of the pilot better reflects leading practices for effective pilot design. OMB neither agreed nor disagreed with the recommendation.
Ships’ crews are often able to complete voyage repairs while the ship or battle group is underway. According to Navy officials, because ships often include redundant systems, repairs can usually be undertaken without interrupting the ship’s mission or be postponed until the ship reaches a repair facility or its home port. However, voyage repairs are occasionally beyond the capability of ships’ crews to complete, and must be performed by an intermediate or depot-level ship repair activity. Historically, Navy ships home-ported in Guam were permitted by U.S. law to be overhauled, repaired, or maintained in shipyards outside the United States or Guam. However, the John Warner National Defense Authorization Act for Fiscal Year 2007 amended section 7310 of Title 10 of the U.S. Code to prohibit U.S. naval ships home-ported in Guam from being repaired in shipyards outside the United States or Guam, other than in the case of voyage repairs. Since the closure of the Navy Ship Repair Facility, Guam, the Navy and MSC have relied on four different sources to provide voyage repairs in Guam. First, the Navy submarine tender USS Frank Cable, which is a ship home-ported in Guam, has provided voyage repair capabilities for submarines when needed. Second, the Navy has relied on its Emergent Repair Facility to repair submarines by using a repair crew left behind from the USS Frank Cable when that ship is deployed. Third, fly-away teams from U.S. Naval shipyards have been sent to Guam to conduct voyage repairs when needed. Finally, the Navy has used its contract with Guam Shipyard for voyage repairs of both submarines and surface ships. Guam Shipyard has repaired most MSC ships operating around Guam and has assisted the Navy in completing voyage repairs on other ships and submarines. For example, Guam Shipyard assisted U.S. Naval shipyards with extensive voyage repairs on the USS San Francisco, a submarine that struck an undersea mountain, by providing dry-dock services and selected support services. Voyage repairs have averaged about 17 percent of the total annual workload performed at Guam Shipyard. While Guam Shipyard officials told us that the voyage repair work would not be sufficient to support its current infrastructure and personnel, in 2007 it won a competition for the overhaul of the USNS Bridge, an MSC Pacific fleet support vessel. Competitions for overhaul of other MSC ships operating near Guam are scheduled beyond 2008. While Guam Shipyard has been the only commercial shipyard capable of supporting Navy ship repair and overhaul requirements on Guam since 1998, a private ship repair provider new to Guam, Gulf Copper, has initiated ship repair operations there. Although the Navy had indicated in its 2007 report to Congress that additional voyage repairs could be addressed by the submarine tender USS Frank Cable’s repair department, MSC has awarded contracts to both Guam Shipyard and Gulf Copper for voyage repairs that may be needed during fiscal year 2008. MSC awarded single-year contracts without renewal options, but MSC officials said that they plan similar contracts for 2009 that will include option years. Voyage repairs are unscheduled, and the capabilities required to address them cannot be precisely predicted. The Navy has not identified voyage ship repair requirements for 2012 and beyond for surface vessels operating at or near Guam, although some information is available on which to base estimated requirements to support planning efforts. 
Navy officials stated that requirements have not been developed for the following three reasons. First, the Navy has not fully identified its future Pacific force structure or finalized operational plans. Second, the Marine Corps’ plans for additional vessels, if any, and operations at Guam are still evolving. Third, MSC projects changes to its force structure for ships operating near Guam. However, some information is available that could enable the Navy to develop estimates of ship repair requirements. Estimation of requirements is a prerequisite for assessing each option’s ability to address those requirements in a cost-effective and timely fashion. Without developing estimated repair requirements, the Navy cannot determine the best alternative among various potential sources of repair or support planning to provide needed maintenance capabilities. Navy officials stated that voyage ship repair requirements at Guam cannot be identified until its future force structure plans are finalized. The 2006 Quadrennial Defense Review indicated that the Navy plans to operate six aircraft carrier strike groups and 60 percent of its submarine force in the Pacific. Moreover, the service has plans for a 313-ship Navy, but it has not yet identified the specific ships that will comprise the force structure in the Pacific beyond 2012. Officials stated that operational plans will dictate the number and type of vessels that will visit Guam, but those plans are periodically adjusted due to changes in the global security environment. As a result, Navy officials stated that they cannot yet develop requirements for voyage ship repairs at Guam for 2012 and beyond. Similarly, the Marine Corps’ plans for additional vessels in Guam have not been finalized, but conceptual plans for relocating Marines from Okinawa to Guam may include the home-porting of four new High-Speed Vessels and two new Littoral Combat Ships at Guam. In addition to the possibility of adding vessels, the Marine Corps’ force relocation from Okinawa to Guam is expected to result in visits by amphibious vessels home-ported in Japan. These vessels are to deploy to Guam to support training exercises for the Marines stationed on Guam, and they may generate demands for voyage repairs during these operations. MSC also expects changes to its force structure operating near Guam, but the timeline for these changes is uncertain. Current MSC vessels, such as ammunition ships and combat stores ships, are expected to be replaced by new dry cargo/ammunition ships on a one-for-one basis. MSC officials believe that these new vessels will require less maintenance than the vessels they replace, thus potentially reducing repair requirements. For example, these vessels use new technology, including propulsion and electrical systems that are thought to require less frequent maintenance and different repair capabilities. Guam’s first new dry cargo/ammunition ship is to arrive on station sometime in 2008, but acquisition schedules for additional such ships indicate deployment delays. Delaying the arrival of the new ships will delay decommissioning of the older ships, thus raising questions about the need to continue existing levels of repair capabilities in the near term, as MSC believes the older ships may require more intensive maintenance. While the precise force structure requirements associated with the military buildup around Guam remain uncertain, the Navy has some information that can be used to identify estimated ship repair requirements. 
Specifically, the Navy knows the history of voyage repairs conducted on Guam; it can identify vessels likely to operate near Guam, based on planned force structure realignments in the 2006 Quadrennial Defense Review; and it can identify ship repair capabilities available at other strategic locations in the area, including Pearl Harbor and Yokosuka, Japan. Historical data are available showing voyage repairs that have been performed on surface vessels and submarines in Guam for at least the past 6 years; these data could be used to estimate likely future repair requirements based on past experience. MSC recently used these data to formulate contracts awarded for providing voyage repairs on vessels operating at or near Guam for fiscal year 2008. Table 1 shows the average number of man-days and the cost to complete voyage repairs from private sources on Guam for fiscal years 2002-2007. The Navy has identified some vessel assignments associated with the force structure changes identified in the 2006 Quadrennial Defense Review. Specifically, the Navy plans to replace the USS Kitty Hawk at its home port in Japan with the USS George Washington—a new, nuclear-powered aircraft carrier. Navy officials stated that operational plans for that carrier’s strike group will include visits to Guam for periods of 2 to 3 weeks. Although the Navy has not identified the specific vessels that will make up the strike group, Navy officials know the types of vessels that are normally part of a strike group. Moreover, Navy vessels have operated in the Pacific for decades, and voyage repair experiences are readily available to the Navy through repair records, shipyard billing, or similar documents. Nonetheless, the Navy has not used these records to forecast estimated surface ship repair requirements for Guam beyond 2012. Further, extensive ship repair capabilities exist in other locations in the Pacific, such as Pearl Harbor. Given that future ship repair capabilities on Guam may need to support a larger number and different mix of ships, the Navy could use ship repair data from Pearl Harbor and other strategic forward-deployed locations—such as the Navy Ship Repair Facility, Yokosuka, Japan, and the facility that repairs the Navy amphibious ships that support the Marine Corps at Sasebo, Japan—to help it develop estimated voyage repair forecasts for Guam. DOD guidance requires that maintenance programs be clearly linked to strategic and contingency planning, and that a determination be made as to whether a specific industrial capability is required to meet DOD needs. This guidance calls for the Navy to follow industrial-based planning to ensure that required ship repair capabilities will be available when needed. Specifically, DOD Directive 5000.60, “Defense Industrial Capabilities Assessments,” requires that planning occur when a known or projected problem exists, or when there is a substantial risk that an essential capability may be lost. Such problems can consist of inadequate industrial capacity operated by a DOD entity or similar inadequate capabilities in the private sector. Estimation of requirements is a prerequisite for performing an assessment of the viability of each option available for addressing those requirements in a cost-effective and timely fashion. Although some information is available for developing estimated requirements, the Navy has not identified voyage surface ship repair requirements for 2012 and beyond for vessels operating near Guam. 
Without developing estimated repair requirements, the Navy cannot determine the best alternative among various potential sources of repair or support planning to provide needed maintenance capabilities. While the Navy has not planned for meeting voyage repair requirements on Guam for 2012 and beyond, it has identified options for providing repairs, although some require long lead times to implement. However, by not performing timely planning the Navy risks not having a repair capability in place when needed, and as time passes, limits the options that may be available to it. Navy officials have stated that they do not intend to develop plans for a voyage ship repair capability on Guam until preparations for the 2012 budget cycle begin. However, in response to our inquiries, the Navy identified four potential options for meeting future voyage ship repair requirements on Guam and acknowledged that it cannot avoid doing some voyage repairs there. First, the Navy could use existing Navy-owned voyage repair capabilities in Guam, though these face certain limitations in their ability to take on additional voyage repairs. Second, fly-away teams could be brought in from Navy-owned shipyards in the United States, and these teams would rely on facilities and infrastructure in place on Guam. Third, the Navy could develop a new repair facility, which would entail significant planning, repair of existing infrastructure, and possibly new military construction. Fourth, the Navy could contract out the work to either or both of the existing private ship repair providers or to any other contractor that might choose to locate at Guam. DOD guidance requires that a determination be made as to whether a specific industrial capability is required to meet DOD needs and that a selection be made for meeting those needs. Moreover, Navy officials acknowledge that if the option to expand existing Navy repair capabilities on Guam or establish new Navy repair capabilities were chosen, early identification of mission requirements would be needed to facilitate planning and budgeting of new or expanded Navy construction to ensure that a fully functioning Navy-owned ship repair facility would be operational in 2012. Existing Navy-owned capabilities in Guam are inadequate to address current voyage repair requirements for surface vessels and are unable to address additional voyage repair requirements without increased capabilities and capacity. First, the primary mission for the USS Frank Cable is to provide maintenance and support for the three fast attack submarines home-ported on Guam, and to address the needs of visiting submarines. At the time of our review, the submarine tender’s repair crew was operating at full capacity in meeting its primary mission. As a result, the Navy contracted with Guam Shipyard to complete $1.2 million in voyage repairs on submarines between fiscal years 2002 and 2007, mostly to provide additional manpower to augment the submarine tender’s repair crew. Although the Navy has not developed voyage repair plans for surface ships, it has developed some plans for the provision of voyage and other repairs for submarines. For example, current plans will require the USS Frank Cable to provide support for the new guided missile submarine that will visit Guam for rotational crewing. Additionally, the Navy plans to use part of the repair crew from the USS Frank Cable to perform repair services for the submarine tender USS Emory S. 
Land, which will be stationed at Diego Garcia in the British Indian Ocean Territories. The repair crew on the USS Frank Cable will be increased by about 170 personnel to enable about 160 to rotate for workload assignments on the USS Emory S. Land, leaving no more than 10 repair personnel to take on additional work. As a result, according to Navy officials, it is unlikely that the USS Frank Cable could provide voyage repairs for surface vessels in Guam in the future without adding capability and capacity beyond the 170 additional personnel already planned. Second, the Emergent Repair Facility on Guam that supports submarines when the USS Frank Cable is away from port lacks the capability to meet surface voyage repair requirements. This facility is used by a stay-behind repair crew from the USS Frank Cable when that ship is away from its home port. According to Navy officials, the Emergent Repair Facility is not adequate even for its current role. Officials estimated that the Navy would need about $21 million to expand and equip the facility just to meet its current submarine mission requirements, without taking on additional voyage repairs for surface ships. For example, the facility has no communications capabilities; repair personnel must use personal cellular telephones for any necessary communications. Navy officials acknowledge that it would have to be expanded to meet any future surface voyage repair requirements. Moreover, larger vessels may be unable to approach the Emergent Repair Facility without conducting dredging operations and completing pier improvements. As a result, the Emergent Repair Facility cannot be used to provide voyage repairs for surface vessels without considerable planning and capital investment. The effective use of fly-away teams from Navy-owned shipyards in the continental United States to perform voyage repairs at Guam depends on the ability of U.S. Naval shipyards to provide personnel to perform repairs without negatively impacting their own ongoing work, as well as on the adequacy of infrastructure and facilities available for their use in Guam. Further, U.S. Naval shipyards have not been provided with voyage repair estimates to conduct workload planning and determine their capacity to provide fly-away teams to Guam. The use of fly-away teams may not be practicable or cost-effective for performing large amounts of voyage repair work, because Navy-owned shipyards in the United States that provide fly-away teams are currently operating beyond their target capacities, although they anticipate having excess capacity in the coming years. However, deploying fly-away teams to Guam to meet large amounts of voyage repair requirements without advance planning could undermine scheduled maintenance at the U.S. Naval shipyards. Fly-away teams also need sufficient infrastructure and equipment at the location at which they will conduct voyage repairs. Because the USS Frank Cable and the Emergent Repair Facility both face limitations, fly-away teams that deploy to Guam cannot be assured that these facilities would be available to provide needed infrastructure or equipment. Without more clearly defined repair requirements and further examination of equipment and personnel necessary to meet those requirements, the viability of using fly-away teams to provide future voyage repairs is uncertain. Building a new Navy depot-level repair capability would require years of planning and additional infrastructure, equipment, personnel, and funding. 
If the lease on the property at the former Naval Ship Repair Facility, Guam, is allowed to expire, establishing a new Navy-owned ship repair capability at that location would require the Navy to address infrastructure, equipment, and personnel requirements to create the capability needed to meet surface voyage repair requirements on Guam. The Navy would have to determine what capability is needed and then take action to acquire the equipment to provide that capability. Furthermore, infrastructure repairs may be needed to support work on Navy vessels. For example, according to Navy officials, the typhoon moorings at Guam Shipyard may require repair. A new Navy depot-level ship repair capability in Guam would also require staffing by military and civilian personnel. Without a determination of equipment, infrastructure, personnel, and funding requirements for providing new surface ship repair capabilities, the Navy cannot know whether establishing a new ship repair capability in Guam is a viable option. Additionally, implementing this option would also require significant lead time. The Navy has not determined the extent to which it will rely on private-sector ship repair providers beyond 2012, when the lease on Navy property occupied by Guam Shipyard expires. While it is unclear what kind of private sector capability will be available beyond 2012, both private ship repair providers operating in Guam have been awarded 1-year contracts by MSC to provide selected voyage repairs to surface vessels operating at or near Guam for fiscal year 2008. According to MSC officials, new contracts are to be executed by the end of fiscal year 2008, and this contracting arrangement will include option years that address voyage repair requirements for MSC ships through 2012. Guam Shipyard operates on Navy property located within Naval Base, Guam. Gulf Copper operates from approximately 700 feet of pier space at the commercial port opposite Navy property on Apra Harbor. It is possible that additional private ship repair providers may express interest in performing voyage repairs at Guam in the future, and that Guam Shipyard may continue operations at another location in Guam beyond 2012 when its lease on U.S. Navy property expires. Figure 1 depicts the physical locations of Guam Shipyard and Gulf Copper. The Joint Depot Maintenance Program provides guidance on selecting sources of maintenance and repair, and a DOD Handbook entitled Assessing Defense Industrial Capabilities provides a framework for coordinating analysis and determining the most cost- and time-effective options for meeting DOD needs. If the option selected by the Navy for providing ship repairs in Guam requires military construction, as may be the case if the Navy chooses to expand existing Navy-owned capabilities or to establish new Navy-owned capabilities, the military construction requirements would have to be included in the budgeting process for fiscal year 2010 in order for new facilities to be ready by October 2012. However, Navy officials have stated that they do not intend to develop plans for a voyage ship repair capability on Guam until preparations for the 2012 budget cycle begin. 
Without performing an assessment of the viability of each of the options for voyage repairs in a timely manner to support planning and budgeting of critical tasks, the Navy risks not having adequate voyage repair capabilities in place when needed to support operations in the Pacific Ocean, and as time passes, limits the options that could be available to it by 2012. The Navy has not effectively identified voyage repair requirements that are a prerequisite for selecting among the options to provide such capabilities on Guam. While the Navy does not fully know its voyage surface ship repair requirements near Guam for 2012 and beyond, it does possess data that could be used to estimate requirements. Namely, it could use existing ship repair experiences, projected requirements identified in the 2006 Quadrennial Defense Review, and information about repair capabilities maintained at other strategic locations to identify its ship repair requirements for Guam in the near term and to aid in developing a baseline forecast of repair capabilities it will need for 2012 and beyond. Moreover, the requirements determination process is a precursor to planning for the provision of ship repair capabilities and selecting an option to provide those capabilities, since a certain amount of lead time would be required to implement some of the options. Additionally, a decision about future industrial repair requirements should be an integral part of ongoing Guam infrastructure planning to support the transfer of Marines to Guam from Japan. However, the Navy has not developed such plans, nor has it assessed the challenges associated with the options identified, or selected an option to provide ship repair capabilities on Guam. Without identifying requirements, performing a risk-based assessment of the viability and costs of each of the options, selecting the best option or combination of options available, and then developing and implementing an action plan to address any challenges associated with the option or options selected, the Navy lacks reasonable assurance that it will have sufficient time to prepare the best option or combination of options for meeting future surface ship repair requirements on Guam beyond 2012. To ensure that adequate voyage repair capabilities are available for ships operating near Guam, and recognizing the lead time required to implement options, we recommend that the Secretary of Defense direct the Secretary of the Navy to estimate requirements for repairs for surface vessels operating at or near Guam based on data determined to be most appropriate by the Secretary of the Navy; assess the benefits and limitations of each of the options for providing repairs to ships operating near Guam, and perform an assessment of anticipated costs and risks associated with each option; and select the best option or combination of options for providing repair capabilities to support surface ships operating near Guam, and develop a plan and schedule for implementing a course of action to ensure that the required ship repair capability will be available by October 2012. In a written response to a draft of this report, DOD concurred with all of our recommendations with comments. The department’s comments are reprinted in their entirety in appendix II. The department also provided several technical comments that have been incorporated as appropriate. 
With regard to our first recommendation for an assessment of requirements for repairs for surface vessels operating at or near Guam, the Navy responded that it has a methodology to determine annual emergent repair requirements by ship class and fleet—which includes voyage repair execution history as a subset—that this requirement will be included in the future years defense plan, and that no further direction is necessary. While we acknowledge that the Navy looks at overall maintenance requirements as a part of the annual budget process, this process does not provide a detailed listing of specific capabilities required for voyage repairs at strategic locations, such as Guam, beyond 2012. Given its unique location and the changing circumstances that will affect voyage repair requirements in and around that location, we continue to believe that a specific assessment of requirements for providing surface vessel voyage repairs in Guam represents a necessary baseline for planning for the provision of ship repair capabilities beyond 2012 and for the selection of an option or combination of options to provide those capabilities. In concurring with our second recommendation regarding the need for an assessment of the benefits and limitations of each of the options for providing repairs to ships operating near Guam, the department’s response was that the Navy has already identified a plan for providing repair capabilities for ships operating near Guam and that the Navy has determined that establishing a new repair facility on Guam is not viable since the expenditure of funds to do this is not necessary. The department’s response also noted that the Navy is already developing a military construction project to expand the existing repair capabilities on Guam in fiscal year 2010, that the Navy intends to continue the practice of utilizing repair teams from U.S. Naval shipyards and private shipyards as needed, and that the Navy intends to continue the practice of contracting voyage repair work to one or more private ship repair providers. The Navy may have determined that a new repair capability on Guam is not necessary, but much of the existing repair equipment currently used to support voyage repairs on surface vessels—including a floating dry dock, a floating crane, and industrial equipment—is owned by Guam Shipyard and could potentially be removed at the conclusion of the existing lease, if a new lease were not negotiated. We continue to believe that it is essential that the department determine whether it will have continued need for expensive capital equipment such as the floating dry dock and crane, and whether the capability provided by such equipment will be available from the private sector. Finally, it is commendable that the Navy has a plan for providing ship repair capabilities on Guam and is moving forward to implement it. However, at the time of our exit briefing with the Navy in January, the Navy did not inform us of this plan. Moreover, Navy officials have told us that this plan was developed in February, subsequent to our exit briefing and in response to our recommendations. 
In concurring with our third recommendation regarding selection of the best option or combination of options for providing repair capabilities to support surface ships operating near Guam, the department stated again that the Navy’s plan for providing repair capabilities to support surface ships operating near Guam has already been determined, and that direction from the Secretary of Defense to the Secretary of the Navy is not needed. The response also stated that committing the Navy to a lease agreement in 2008 for a capability in 2012 is premature. While we agree that committing the Navy to a lease in 2008 for a capability required in 2012 is premature, it is not premature to decide whether or not there will be an industrial activity—either owned and operated by the government or leased by a private contractor—within the Navy installation. The department stated in its response that the Navy intends to use private-sector capability, but it did not state whether that would be on the Navy installation on Guam. Given the detailed planning that is required to support the planned buildup of military personnel expected over the next few years in Guam, we believe it is essential that the Navy determine whether or not it expects to continue to have an industrial activity operating as a part of the Guam Master Plan, and that it determine what acreage this activity would occupy. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or at leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Additional contacts and staff acknowledgments are provided in appendix III. To determine the extent to which the Navy has identified future ship repair requirements for ships operating in the Guam area and assessed options to address those requirements, we reviewed documents related to ship maintenance. In addition, we interviewed officials responsible for force structure planning, contracting for repairs on vessels belonging to the U.S. Navy and Military Sealift Command, and performing repairs on vessels belonging to the Navy and Military Sealift Command on Guam, as well as officials at related organizations in Hawaii and on the west coast of the United States. Specifically, we interviewed officials and analyzed documents related to ship repair requirements and the options proposed to meet them at the offices of the Chief of Naval Operations; the Commander, Pacific Fleet; the Commander, Marine Forces Pacific; the Commander, Naval Sea Systems Command; the Commander, Naval Forces Marianas; the Chief of Naval Installations; the Commander, Military Sealift Command; the Commander, Naval Facilities Pacific; and the Guam Economic Development and Commerce Authority. We also performed work at the offices of several private ship repair providers to determine the extent to which private-sector repair capabilities may be available on Guam in the future. We also examined Department of Defense (DOD) policy and Joint Guidance for providing maintenance and repair of DOD assets afloat. We performed our review from July 2007 to January 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Julia Denman, Assistant Director; Jeffrey Kans; Julia C. Matta; John E. Trubey; and Cheryl Weissman made major contributions to this report.
Unscheduled ship maintenance, known as voyage repairs, is a high priority for the U.S. Navy. Such repairs are sometimes beyond the capability of the ship's crew to perform; cannot be deferred; and must be made at a remote location. After the 1995 Base Realignment and Closure Commission recommended closing the former Naval Ship Repair Facility, Guam, the Navy leased the property at that facility to the Guam Economic Development and Commerce Authority, which sub-leased the property to a private shipyard. DOD has since begun planning for a military buildup on Guam. In January 2007 the Navy recommended allowing the private shipyard's lease on Navy land to expire in 2012. Consequently, the House Armed Services Committee asked GAO to determine the extent to which the Navy has (1) identified future ship repair requirements at Guam, and (2) identified and assessed options to address those requirements. GAO reviewed documents related to ship maintenance and interviewed officials affiliated with private contractors, the Guam government, the Marine Corps, Military Sealift Command, and the Navy in conducting this review. The Navy has not identified voyage surface ship repair requirements for 2012 and beyond for vessels operating near Guam, although some information is available on which to base estimated requirements for planning. Navy officials stated that they cannot estimate such requirements because the Navy expects to change its force structure, the Marine Corps has not finalized its plans for any additional vessels associated with the buildup, and Military Sealift Command expects changes to its force structure at Guam. Although the Navy, Marine Corps, and Military Sealift Command have not made final force structure decisions or operational plans for vessels operating at or near Guam, information is available to support an estimation of ship repair requirements as part of the multiyear planning and budgeting process. Specifically, the Navy (1) knows the history of voyage repairs conducted on Guam; (2) can identify vessels likely to operate near Guam based on planned force structure realignments in the 2006 Quadrennial Defense Review; and (3) can identify ship repair capabilities available at other strategic locations in the Pacific area, including Yokosuka, Japan. Developing requirements is a prerequisite for planning, and without developing estimated repair requirements the Navy cannot adequately evaluate options for meeting them. Navy officials identified potential options for providing repairs in Guam, but have not fully assessed their viability or identified time-critical planning tasks. According to Navy officials, once the Navy identifies voyage ship repair requirements for the Guam area, they will choose from four options or a combination of options for providing voyage repairs. First, the Navy could try to expand existing organic repair capabilities to conduct voyage repairs. However, the existing ship maintenance capabilities and facilities have little excess capacity without augmentation, limiting their ability to perform additional work. Second, the Navy could rely on repair teams flown in from naval shipyards in the United States. Third, the Navy could build a new Navy ship repair facility, though that could require years of planning and new funding. Fourth, the Navy could contract out work to either or both of the private ship repair providers now operating in Guam, or to any other private ship repair facility that might choose to locate in Guam. 
Three of these options might require building new facilities or expanding existing facilities. Officials said they would not begin planning until preparations begin for submissions to the President's budget for fiscal year 2012. However, lead time is required to perform planning tasks necessary to provide repair capabilities from the Navy's suggested options. Without assessing the viability of each option for voyage repairs in a timely manner, the Navy increases the risk that voyage repair capabilities for ships operating in the Pacific may not be available when needed, potentially undermining ships' ability to accomplish their missions.
In fiscal year 2003, the Department of Housing and Urban Development (HUD) expended about $28 billion in rental assistance—about 75 percent of the department's total expenditures—to help almost 5 million low-income tenants afford decent housing. HUD provides rental assistance through three major programs: Housing Choice Vouchers (vouchers), public housing, and several project-based Section 8 programs. These programs reduce tenants' rental payments by providing subsidies to owners of private properties, the public housing agencies (PHAs) responsible for government-owned developments, or both. Because these subsidies involve complicated calculations and program rules, the process of determining them is prone to errors. In response to growing concerns about improper rental assistance payments, in fiscal year 2001 HUD established the Rental Housing Integrity Improvement Project (RHIIP), which is designed to address the causes of these errors and ensure that only eligible people receive subsidies. This report discusses (1) the sources and magnitude of improper payments that HUD has identified, (2) the actions HUD is taking under RHIIP to reduce improper payments in the voucher and public housing programs and the status of these initiatives, (3) the actions HUD is taking under RHIIP to reduce improper payments in its project-based programs and the status of these initiatives, and (4) the status and potential impact of HUD's efforts to reduce the risk of improper payments by simplifying the subsidy determination process.

HUD's voucher, public housing, and project-based assistance programs share the common mission of making housing affordable to low-income households. The subsidies these programs provide are not an entitlement. Typically, the number of low-income households eligible for assistance exceeds the number of subsidized units and vouchers that are available. Specifically, HUD estimated that in 1999 about a quarter of all households eligible for housing assistance received it. HUD's programs are administered differently and vary in the number of households they assist and the amount of funding they receive.

The voucher program, which local PHAs administer on HUD's behalf, is HUD's largest rental assistance program. The program, authorized under Section 8 of the United States Housing Act of 1937, as amended, provides housing vouchers that eligible individuals and families can use to rent houses or apartments in the private housing market from property owners participating in the program. Voucher holders are responsible for finding suitable housing, which must meet HUD's housing quality standards. In fiscal year 2003, the program assisted about 2 million households (42 percent of all households receiving HUD housing assistance) and had outlays of $13.4 billion (47 percent of HUD's total rental assistance outlays). In general, only households with very low incomes—those with incomes that are less than or equal to 50 percent of area median income (AMI)—are eligible for vouchers. In addition, the legislation requires that at least 75 percent of new participants in the voucher program have extremely low incomes—that is, their incomes must be at or below 30 percent of AMI. Voucher holders generally pay 30 percent of their adjusted monthly income toward rent, and the PHA receives HUD subsidies to pay the remainder of the rent to the property owners. The subsidies in the voucher program are tenant based—that is, they are tied to the household rather than to the rental unit.
The approximately 2,500 PHAs that administer the voucher program are responsible for ensuring that tenants meet program eligibility requirements and that tenant subsidies are calculated properly. PHAs are also required to develop written policies and procedures to administer the program according to HUD regulations.

Under the public housing program, authorized by the United States Housing Act of 1937, as amended, HUD has subsidized the development, operation, and modernization of government-owned properties, which are currently managed by some 3,300 PHAs. In fiscal year 2003, HUD's public housing program assisted 1.2 million households (25 percent of households receiving housing assistance) and had outlays of $7.1 billion (25 percent of HUD's total rental assistance outlays). To be eligible for public housing, a household must be low income—that is, have an income that is less than or equal to 80 percent of AMI—and the legislation stipulates that at least 40 percent of new residents have extremely low incomes—less than or equal to 30 percent of AMI. As in the voucher program, public housing tenants generally pay 30 percent of their adjusted monthly income toward rent. HUD pays operating subsidies to the PHAs to cover the difference between the PHAs' operating costs and rental receipts. In contrast to the voucher program, the subsidies in the public housing program are project based—that is, they are tied to the unit, and tenants receive assistance only when they live in units eligible for subsidies. PHAs are responsible for ensuring that tenants are eligible for public housing, that tenant subsidies are calculated properly, and that the PHAs' policies and procedures conform to HUD's regulations.

Under a variety of project-based Section 8 programs authorized by the Housing and Community Development Act of 1974, as amended, HUD has subsidized rents with multiyear rental assistance payments, which have often been combined with construction subsidies from other HUD programs. These programs included the New Construction, Substantial Rehabilitation, Loan Management Set-Aside, Property Disposition, and Moderate Rehabilitation programs. Before the project-based Section 8 programs, HUD provided rental assistance through the Rent Supplement and Section 236 Rental Assistance Payment programs. For ease of presentation, this report refers to all of these rental assistance programs as project-based Section 8. Property owners and managers for about 22,000 subsidized properties currently participate in these programs. In fiscal year 2003, HUD's project-based programs assisted 1.6 million households (33 percent of all households receiving assistance from HUD) and had outlays of $7.7 billion (27 percent of HUD's total rental assistance outlays). As in HUD's other rental assistance programs, households receiving project-based Section 8 assistance generally pay 30 percent of their adjusted income toward rent, and HUD pays a subsidy—in this case to property owners and managers—for the remainder of the rent. In general, only households with low incomes are eligible for HUD project-based Section 8 assistance, and at least 40 percent of new residents must have extremely low incomes. Private property owners and managers have requirements similar to those of PHAs for administering the project-based Section 8 program—they must ensure that tenants meet program eligibility requirements and that tenant subsidies are calculated correctly.
They also must develop administrative policies and procedures that are consistent with HUD's regulations.

HUD's oversight of program administrators varies depending on the program (see fig. 1). For vouchers and public housing, HUD field offices provide oversight of the PHAs that administer the programs. Field office staff conduct on-site reviews and analysis of PHAs' operations. Field offices are also responsible for confirming the accuracy of information PHAs submit to HUD's performance rating systems for vouchers and public housing: the Section 8 Management Assessment Program (SEMAP) and the Public Housing Assessment System (PHAS), respectively. Both SEMAP and PHAS provide HUD managers with performance measures in key program areas, such as program management and the physical condition of properties. For HUD's Section 8 project-based programs, contract administrators are responsible for overseeing individual Section 8 properties and ensuring that properties are in compliance with HUD's policies. The administrators conduct on-site reviews of property owners' tenant information files, process monthly payment vouchers, respond to health and safety issues, and renew rental assistance contracts. Currently, there are three different types of contract administrators: performance-based contract administrators, "traditional" contract administrators, and HUD field office staff (see chap. 4).

Subsidies under HUD's rental assistance programs are generally based on tenant households' adjusted annual income, or gross income less any exclusions and deductions. Laws and HUD regulations provide for 44 different types of income exclusions and deductions. Of these, HUD's regulations cite 20 income sources, such as income from minors, student financial aid, and qualifying employment training programs, that are excluded when determining households' eligibility to receive assistance and calculating tenants' rent. Nineteen other income sources qualify as exclusions under various statutes. For example, Earned Income Tax Credit refund payments received on or after January 1, 1991, are excluded, as is income from participating in AmeriCorps. A complete list and descriptions of these exclusions appear in appendix II. In addition to these 39 income exclusions, program administrators must also apply five income deductions, which reduce the amount of income that can be considered in calculating tenants' rent. Legislation specifies the following five deductions from annual income: a standard amount ($480) for each dependent; a standard amount ($400) for elderly or disabled family members; unreimbursed child care expenses that are necessary for a family member to remain employed; the sum of certain unreimbursed medical expenses for elderly or disabled family members and certain unreimbursed attendant care and auxiliary apparatus expenses necessary for a disabled family member to be employed, to the extent that this sum exceeds 3 percent of annual income; and other deductions from annual income as determined by the program administrator.

Once program administrators have collected information from tenants on income and applicable exclusions and deductions, HUD policy requires that program administrators independently verify this information (third-party verification). To obtain third-party verification, program administrators must directly contact employers, welfare offices, health care providers, and others to ensure that the information tenants have reported is accurate and complete.
However, third-party verification on its own may not identify all income not reported (intentionally or otherwise) by tenants. The program administrator must maintain all verified information in the tenant's file.

After verifying tenants' income information, program administrators must compute the amounts tenants pay in rent. HUD regulations define these payments as the highest of the following amounts: (1) 30 percent of a family's monthly adjusted income—that is, monthly income after exclusions and deductions; (2) 10 percent of the family's gross monthly income—that is, monthly income before exclusions or deductions; or (3) the applicable minimum monthly rent, which is typically between $0 and $50. (A simplified illustration of this computation appears below.) Generally, the amount paid by low- and very-low-income tenants is not enough to cover the entire rent for a unit or, for public housing, to cover operating costs. As a result, for vouchers and project-based Section 8, HUD generally covers the difference between the unit's rent and the tenant's rental payment in the form of a housing assistance payment. For public housing, HUD pays the PHA an operating subsidy to cover the difference between the PHA's operating costs and rental receipts. In this report, we refer to both types of payments as rent subsidies.

RHIIP was created as a Secretarial Initiative in the spring of 2001 to ensure that the right benefits go to the right people. RHIIP was set up as a direct result of HUD's analysis of data it collected on improper subsidy payments in fiscal year 2000. For the first time, HUD managers had access to statistically valid estimates of the extent, severity, costs, and sources of subsidy errors for vouchers, public housing, and project-based Section 8 nationwide. The results of the analysis were issued in a June 2001 report, Quality Control for Rental Assistance Subsidies Determinations. The report focused on subsidy errors made by program administrators but did not attempt to determine whether tenants supplied accurate and complete income information. In February 2002, HUD completed a separate evaluation to determine rental assistance errors caused by unreported tenant income. The study matched incomes tenants reported with income information from Internal Revenue Service and Social Security Administration databases. The results of these studies are examined further in chapter 2.

Evaluations by GAO and HUD's Office of Inspector General (OIG) have identified long-standing problems with HUD's monitoring of program administrators responsible for making rent subsidy determinations. In 2001, GAO designated HUD's rental housing programs as high risk for waste, fraud, and abuse because the department could not ensure that only eligible households received housing subsidies or that the households received the correct amounts. Also, HUD's OIG has reported on material weaknesses in HUD's monitoring of program administrators in its financial audits of the department since 1996. The OIG found that these weaknesses had adversely affected HUD's ability to ensure that program administrators were correctly calculating housing subsidies.

RHIIP's goal is to reduce the incidence and dollar amount of improper rent subsidies by 50 percent in fiscal year 2005 compared with fiscal year 2000, with interim goals of a 15 percent reduction by fiscal year 2003 and a 30 percent reduction by fiscal year 2004.
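The interaction of the exclusions, deductions, and highest-of-three payment rule described above can be seen in a short worked example. The following minimal sketch (in Python) applies the statutory deductions and payment rule to a hypothetical household; the dollar figures, the treatment of the $400 deduction as applying once per family, and the use of a single lump-sum exclusion amount are illustrative assumptions, not data from this report.

```python
# Minimal sketch of the subsidy determination rules described above.
# All household figures are hypothetical; actual determinations apply
# many more exclusion rules than are modeled here.

DEPENDENT_DEDUCTION = 480         # annual deduction per dependent
ELDERLY_DISABLED_DEDUCTION = 400  # annual deduction, applied once per family here

def adjusted_annual_income(gross, excluded, dependents, elderly_or_disabled,
                           child_care, medical_attendant):
    """Apply income exclusions and the five statutory deductions."""
    income = gross - excluded
    income -= DEPENDENT_DEDUCTION * dependents
    if elderly_or_disabled:
        income -= ELDERLY_DISABLED_DEDUCTION
    income -= child_care  # unreimbursed child care necessary for employment
    # Medical and attendant care expenses count only to the extent that
    # their sum exceeds 3 percent of annual income (simplified here).
    income -= max(0.0, medical_attendant - 0.03 * gross)
    return max(0.0, income)

def monthly_tenant_payment(adjusted_annual, gross_annual, minimum_rent=50.0):
    """Tenant pays the highest of the three amounts defined by HUD regulations."""
    return max(0.30 * adjusted_annual / 12,   # 30% of adjusted monthly income
               0.10 * gross_annual / 12,      # 10% of gross monthly income
               minimum_rent)                  # applicable minimum rent

# Hypothetical household: $15,000 gross annual income, two dependents,
# $600 in qualifying child care expenses, renting a $700-per-month unit.
adjusted = adjusted_annual_income(15_000, 0, 2, False, 600, 0)  # 13,440
tenant_share = monthly_tenant_payment(adjusted, 15_000)         # 336.00
subsidy = 700 - tenant_share                                    # 364.00
print(f"tenant pays ${tenant_share:.2f}; rent subsidy is ${subsidy:.2f}")
```

Under these assumptions, 30 percent of adjusted monthly income ($336) exceeds both 10 percent of gross monthly income ($125) and the minimum rent, so it determines the tenant's share; the rent subsidy covers the remaining $364.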
RHIIP’s performance goals are largely drawn from The President’s Management Agenda, Fiscal Year 2002, which established nine agency-specific goals to improve federal management and performance. To accomplish RHIIP’s goals, HUD has initiated the following three program-level efforts to reduce improper subsidy payments (see chapters 3 and 4): Increased monitoring of program administrators to evaluate whether subsidy calculations are correct, third-party verification of information provided by tenants is sufficient, quality control procedures are adequate, and tenant files are complete; Income verification to allow PHAs or property owners to compare tenant income information, as reported by federal and state agencies, with the information reported by the tenant; and Additional training and guidance to provide HUD staff and program administrators with the tools necessary to understand the complex requirements for determining subsidies determination. HUD also initiated the following two overarching efforts under RHIIP: Error measurement to develop estimates of the magnitude of improper rent subsidy payments for all three programs and to assess progress in meeting RHIIP’s goals (see chapter 2); and Simplification of rent subsidy policies to develop approaches to reduce complexity of program rules that have resulted in an error-prone process (see chapter 5). To further assist its efforts under RHIIP, HUD has set up a RHIIP advisory group responsible for advising HUD’s principal staff on improper rental assistance payments and to provide support for planning and implementing corrective actions that will reduce the risk of improper payments to an acceptable level. The advisory group is composed of representatives from, among others, HUD’s program management and research offices. Members of the advisory group meet on a weekly basis to discuss progress and coordinate efforts. Our objectives were to determine (1) the sources and magnitude of improper rental assistance payments that HUD has identified, (2) the actions HUD is taking under RHIIP to reduce improper rental assistance payments in the voucher and public housing programs and the status of these initiatives, (3) the actions HUD is taking under RHIIP to reduce improper payments in the project-based Section 8 program and the status of these initiatives, and (4) the status and potential impact of HUD’s efforts to reduce the risk of improper payments by simplifying the subsidy determination process. The scope of this work was limited to HUD’s rental assistance programs under Housing Choice Vouchers, public housing, and project-based Section 8. To determine the sources and magnitude of improper rental assistance payments identified by HUD, we obtained fiscal year 2000 data on program administrator errors that HUD collected for its 2001 Quality Control for Rental Assistance Subsidies Determination report and similar data for fiscal year 2003. We tested the reliability of both data files and found them reliable for the purposes of this report. We estimated the total amount of improper rent subsidies for all three housing programs. Our estimated totals generally agreed with those in HUD’s fiscal year 2003 and 2004 Performance and Accountability Report. We also estimated improper rent subsidies per household. 
To illustrate the impact of improper rent subsidies, we estimated the number of households that could have received assistance under the voucher program by dividing the estimated total net improper rent subsidy overpayments (i.e., total estimated subsidy overpayments minus total estimated subsidy underpayments) by the average cost of a voucher (including administrative costs) in fiscal year 2003. Appendix I contains detailed results of our analyses. We reviewed HUD notices, guidebooks, and reports, including HUD's 2001 Quality Control for Rental Assistance Subsidies Determinations and HUD's 2003 and 2004 Performance and Accountability Reports. We interviewed HUD headquarters officials from the Office of Public and Indian Housing (for the voucher and public housing programs), the Office of Housing (for project-based Section 8 programs), and the Office of Policy Development and Research. We also reviewed reports by and interviewed officials from HUD's OIG.

To describe the actions HUD is taking under RHIIP to reduce improper payments in the public housing and voucher programs and the status of these initiatives, we analyzed RHIIP status reports and schedules, obtained and reviewed relevant HUD policies and procedures, and interviewed officials at HUD headquarters and seven field offices responsible for the two rental assistance programs—Baltimore, Maryland; Boston, Massachusetts; Chicago, Illinois; Los Angeles, California; Miami, Florida; New York City, New York; and San Francisco, California. We selected these field offices based on the volume of rent subsidies they oversee and to achieve some geographic distribution. Together, these field offices oversaw about $7.8 billion in rent subsidy payments in fiscal year 2003, or 55 percent of the total. We also met with 14 of the largest PHAs responsible for administering the public housing and voucher programs in the HUD field office jurisdictions we visited and interviewed groups that represent state and local housing agencies and tenants.

To assess HUD's implementation of Rental Integrity Monitoring reviews and public housing authorities' progress in reducing improper rental assistance payments, we obtained and reviewed HUD policies, procedures, and training materials on conducting these reviews, analyzed all 31 rental integrity monitoring reviews from 13 of the largest public housing authorities in the country, and reviewed HUD's quality assurance reviews of HUD field office performance.

To describe the actions HUD is taking under RHIIP to reduce improper payments in its project-based Section 8 programs and the status of these initiatives, we interviewed officials from HUD headquarters and at six HUD field offices responsible for these programs—Boston, Massachusetts; Chicago, Illinois; Los Angeles, California; New York City, New York; Philadelphia, Pennsylvania; and San Francisco, California. We also selected these field offices based on the volume of rent subsidies they oversee and to achieve some geographic distribution. Together, these field offices oversaw about $8.5 billion in rent subsidy payments in fiscal year 2003, or 47 percent of the total. We met with the four performance-based contract administrators responsible for administering project-based Section 8 contracts in these HUD field office locations. We also obtained and reviewed HUD policies and procedures related to the implementation of RHIIP initiatives and RHIIP status reports.
To determine the status and impact of HUD's effort to simplify the subsidy determination process, we reviewed relevant laws and HUD regulations. We also estimated the potential impact on tenant rents under possible approaches using data HUD had collected for the update to its 2001 report, Quality Control for Rental Assistance Subsidies Determinations. Specifically, we calculated the difference between the amount of rent paid by tenants (as identified in HUD's data) and the amount tenants would pay under the two simplification approaches. We interviewed officials at HUD headquarters and field offices and at state and local agencies that administer HUD's rental assistance programs. We also met with industry groups representing state and local housing agencies and tenants. These groups include the National Association of Housing and Redevelopment Officials, National Leased Housing Association, Public Housing Authorities Directors Association, and Massachusetts Union of Public Housing Tenants. We conducted our work from February to December 2004 in accordance with generally accepted government auditing standards.

As part of the Rental Housing Integrity Improvement Project's (RHIIP) error measurement effort, the Department of Housing and Urban Development (HUD) identified three sources of errors that resulted in improper rent subsidy payments: (1) incorrect rent subsidy determinations made by program administrators (program administrator errors), (2) unreported tenant income, and (3) incorrect billing or distribution of subsidy payments (billing errors). HUD conducted separate studies to look at the amount of improper rent subsidies attributable to each source of error for vouchers, public housing, and project-based Section 8 but was able to develop reliable estimates of dollar errors for only one of the three sources—errors made by program administrators in determining rent subsidies—for fiscal years 2000 and 2003. HUD paid an estimated $1.4 billion in gross improper subsidies in fiscal year 2003 as a result of such errors. This amount represents a decrease of 39 percent since fiscal year 2000. HUD officials stated that this decline cannot be attributed entirely to RHIIP because many of the activities under the RHIIP initiative were in their early stages of implementation in 2003. However, HUD officials indicated that their communications with program administrators about the importance of addressing improper payments probably led to voluntary compliance with HUD's policies for determining rent subsidies and likely contributed to the reduction in improper payments. HUD reported that the department paid an estimated $191 million in fiscal year 2003 in gross improper rent subsidies due to unreported tenant income—an 80 percent reduction compared with fiscal year 2000. However, our analysis indicates that this figure is not reliable because of the small sample size it was based on and because meaningful comparisons between the 2000 and 2003 estimates cannot be made owing to differences in the methodologies used to calculate them. Finally, HUD does not have a complete and reliable estimate of billing errors for either fiscal year 2000 or 2003.

HUD has identified three basic sources of errors that have resulted in improper rent subsidy payments: (1) program administrator errors, (2) unreported tenant income, and (3) billing errors. HUD conducted separate studies of each type of error to assess the magnitude of the problem and the progress that has been made in reducing them.
HUD identified three basic sources of errors that resulted in improper rent subsidy payments. Program administrator errors are the broadest because, as figure 2 shows, this type of error can affect nearly all the critical dimensions of the process for determining rent subsidies. Program administrators are responsible for collecting information on household income, expenses, and composition to determine tenants' eligibility to receive housing assistance and the size of the subsidies. In performing their work, program administrators may incorrectly determine rent subsidies by, for example, making calculation and transcription errors or misapplying the income exclusions and deductions required by HUD policies.

Errors that result from unreported tenant income occur when tenants do not report an income source (either for themselves or another household member) to program administrators. According to HUD, these errors do not include cases in which the tenants reported all sources of income but not the correct amounts. HUD classifies these discrepancies as program administrator errors because program administrators are required to verify tenants' income amounts through third parties, such as employers and public assistance agencies. Unreported income errors generally occur early in the process for determining rent subsidies, when the tenant first submits income information to program administrators (fig. 2). Some tenants may deliberately withhold income sources in order to qualify for assistance or to increase the rent subsidies they receive, but tenants may also fail to report income sources unintentionally if program administrators provide unclear instructions.

Finally, billing errors occur at the very end of the process for determining rent subsidies (fig. 2). The procedures used by program administrators to bill HUD for subsidy payments vary for each of the three rental assistance programs, and as a result the specific types of mistakes that lead to billing errors can also vary. However, in general, billing errors arise when discrepancies exist between the amount of a rent subsidy determined by the program administrator and the amount that is actually billed to and paid by HUD. Billing errors can also include accounting discrepancies between amounts paid by HUD and a property's bank statements and accounting records.

As part of its error measurement effort under RHIIP, HUD planned to estimate improper rent subsidies attributable to each source of error. According to HUD, this effort was to allow the department to assess the magnitude of improper rent subsidies and the progress made in meeting RHIIP's goal of reducing improper subsidies. To develop these estimates, HUD conducted separate studies on improper rent subsidies attributable to each source of error for fiscal years 2000 and 2003. (Information on the methodology and reliability of these studies is discussed later in this chapter.) About two years after HUD began estimating improper rent subsidies, Congress passed the Improper Payments Information Act of 2002, which mandated that federal agencies submit annual estimates of improper payments for at-risk programs. According to HUD, the department plans to continue updating its estimates in subsequent years in order to comply with the requirements of the act. HUD has reported its estimates in its annual audited financial statements and performance and accountability reports.

There are a number of ways to describe the size and magnitude of improper rent subsidies.
One way is simply the dollar difference between the actual rent subsidy HUD paid and the "correct" rent subsidy—that is, the amount of subsidy that would have been paid on behalf of the tenant if no errors had occurred. The dollar amount erroneously paid can be either positive or negative because errors can reflect subsidy overpayments or underpayments. The gross dollar error, or gross improper payment, reflects the sum of the absolute values of the subsidy overpayments and underpayments—that is, the total of all erroneously paid funds. Office of Management and Budget guidance recommends using the gross improper payment measure to indicate the overall accuracy of the income and rent determination process. A second indicator, net dollar error or net improper payment, takes into account whether the difference between the actual and correct rent subsidy amounts is positive or negative. This measure is a useful way of expressing the impact of errors on actual program expenditures because it accounts for the offsetting effect of subsidy over- and underpayments.

To assess the accuracy of subsidy determinations made by program administrators, HUD collected data for fiscal years 2000 and 2003. HUD paid an estimated $1.4 billion in gross improper rent subsidies (consisting of an estimated $896 million in overpayments and $519 million in underpayments) as a result of such errors in fiscal year 2003. This amount represents a 39 percent reduction compared with fiscal year 2000. The voucher program accounted for about half of the fiscal year 2003 errors, and the public housing and project-based Section 8 programs each accounted for about a quarter. Between fiscal years 2000 and 2003, each of the rental assistance programs experienced substantial decreases in program administrator errors—50 percent for public housing and more than 30 percent for both vouchers and project-based Section 8. Despite these reductions, the data show an estimated $377 million net subsidy overpayment in fiscal year 2003 that reduced the amount of funds available to assist other families with housing needs. We estimate that HUD could have provided vouchers to 56,000 additional households in fiscal year 2003 with this amount.

As part of its Quality Control for Rental Assistance Subsidies Determinations study for fiscal year 2000, HUD collected data on the subsidy determinations made by program administrators. HUD subsequently repeated the study, using data for fiscal year 2003. Each study collected data on over 2,400 randomly selected households participating in the voucher, public housing, and project-based Section 8 programs. The methodology involved reviewing tenant files, interviewing a sample of tenants to gather income information, verifying all sources of reported income, and recalculating rents and subsidies. HUD estimated the subsidy errors by identifying the sum of the discrepancies between the actual rent subsidies calculated by program administrators and the amounts calculated by the quality control study staff. The results were projected to the entire population of assisted households to develop a national estimate of total improper rent subsidies. Our analysis of the documentation and the data collected indicates that these studies provide a reasonably accurate estimate of subsidy determination errors made by program administrators.
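The gross and net measures, and the voucher-equivalent impact estimate described earlier in this report, reduce to simple arithmetic. The following short Python check uses the fiscal year 2003 figures cited in this chapter (overpayments of $896 million, underpayments of $519 million, and an average annual voucher cost of about $6,720 including administrative costs); it is a worked illustration, not part of GAO's analysis.

```python
# Worked check of the gross and net improper payment measures defined above,
# using the fiscal year 2003 program administrator error estimates.
overpayments = 896    # estimated subsidy overpayments, in millions of dollars
underpayments = 519   # estimated subsidy underpayments, in millions of dollars

gross_error = abs(overpayments) + abs(underpayments)  # total erroneously paid funds
net_error = overpayments - underpayments              # net impact on expenditures

print(gross_error)   # 1415 -> the roughly $1.4 billion gross figure
print(net_error)     # 377  -> the $377 million net overpayment

# Dividing the net overpayment by the average annual cost of a voucher
# approximates the additional households that could have been served.
avg_voucher_cost = 6_720                          # dollars per household per year
print(net_error * 1_000_000 // avg_voucher_cost)  # 56101 -> about 56,000 households
```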
Our analysis of data that HUD gathered for its quality control study indicates that HUD paid an estimated $1.4 billion in gross improper rent subsidies in fiscal year 2003 as a result of errors made by program administrators—about 39 percent less than the estimated $2.3 billion in fiscal year 2000. The voucher program accounted for the largest share of this amount—about 52 percent, or $731 million. Public housing and project-based Section 8 accounted for 22 percent ($316 million) and 26 percent ($369 million), respectively. Appendix I contains more detailed information on the amounts of improper rent subsidies presented in this chapter.

Each of the rental assistance programs experienced substantial reductions in gross program administrator error—50 percent for public housing, 35 percent for vouchers, and 32 percent for project-based Section 8 (fig. 3). These reductions exceeded HUD's interim RHIIP goal of reducing improper rent subsidies resulting from these errors by 15 percent by fiscal year 2003. According to HUD, the reductions in gross improper subsidies cannot be attributed entirely to RHIIP. Many of the initiatives under RHIIP, such as the RIM reviews and the income verification system, were too early in their implementation to have had any direct impact on the reductions. However, HUD officials stated that the department's communications with program administrators about the importance of addressing improper rent subsidies and program administrators' anticipation of increased monitoring by HUD probably led to voluntary improvements in internal control activities (such as increased supervisory reviews, testing of files, and staff training) and likely contributed to these reductions. In addition, some PHAs we interviewed had already begun improving their controls before RHIIP was established. Estimates of improper subsidies in future years may show whether further reductions can be made and sustained as the RHIIP initiative matures.

Overall, we estimate that the median gross subsidy error per household was about $33 per month ($396 annually) for all the rental assistance programs (fig. 4). In addition to having the highest total gross rent subsidy error in fiscal year 2003, the voucher program had the highest median gross subsidy error per household, about $41 per month. The comparable figures for project-based Section 8 and public housing were $27 and $29 per month, respectively. The median dollar error per household for all the rental assistance programs decreased by about 18 percent, or $7, between fiscal years 2000 and 2003. The median dollar error per household for vouchers and public housing decreased by 27 percent and 24 percent, respectively, over that time period. Although the median for project-based Section 8 did not change, suggesting no improvement, the program experienced significant decreases in gross subsidy error for households that had the largest error in fiscal year 2000.

Because of program administrator errors, HUD paid an estimated $377 million in net subsidy overpayments in fiscal year 2003, reducing the amount of funds that were available to assist additional households with housing needs. This amount reflects the difference between $896 million in estimated subsidy overpayments and $519 million in estimated subsidy underpayments (fig. 5). Estimated net subsidy overpayments have decreased by 64 percent since fiscal year 2000.
As discussed earlier, calculating net improper rent subsidies permits estimates of the errors' impact on actual program expenditures because the calculation accounts for the offsetting effects of estimated subsidy over- and underpayments. Because the overpayments exceeded the underpayments in fiscal year 2003, HUD was not able to use an estimated $377 million of its funding to assist needy low-income households. We evaluated the impact of this amount by estimating the number of households that could have been served if it had been available to subsidize eligible households with new vouchers. Based on the average national cost of subsidizing a voucher—about $6,720 annually, including administrative costs—we determined that HUD could have provided an additional 56,000 households nationwide with vouchers in fiscal year 2003—nearly the same number of households as are currently assisted with vouchers in the Los Angeles, California, area.

HUD has developed a methodology to estimate the amount of rent subsidies the department has paid improperly because tenants did not report all sources of earned income to program administrators. Based on this methodology, HUD estimated that the department paid $191 million in fiscal year 2003 in gross improper rent subsidies due to unreported tenant income, but our analysis found that this figure was not reliable because of the small number of tenant files with unreported income that were used to make the estimate. In addition, significant differences in the methodologies used to calculate the fiscal year 2000 and 2003 estimates mean that any comparison between the estimates would be invalid. Finally, HUD's methodology does not capture other potential types of unreported income, a limitation that would be difficult to overcome.

HUD developed this methodology to estimate the amounts of rent subsidies the department paid improperly in fiscal years 2000 and 2003 because tenants did not report all sources of earned income to program administrators. HUD's methodology identified unreported income sources by comparing the information reported by tenants in the quality control study database with the information reported by employers in federal wage and income databases. HUD first identified households that appeared not to have reported an income source and then took various steps to screen out "false positives" resulting from definitional and timing differences. For example, HUD program staff eliminated those cases involving unreported income sources, such as income from minors or training programs, that should be excluded from family income under HUD's policies. HUD also eliminated cases if third-party verification showed that the income fell outside the period covered by the program administrator's most recent income examination.

However, the methodologies used for fiscal years 2000 and 2003 have two significant differences, and as a result any comparison between the two estimates would not be valid. First, according to HUD, individuals who conducted the study for fiscal year 2003 did substantially more follow-up work to reconcile discrepancies in income sources than those conducting the study for fiscal year 2000. As a result, the fiscal year 2000 estimate probably included more "false positives" and overstated the amount of improper rent subsidies HUD paid. Second, HUD officials stated that the staff used to conduct the study for fiscal year 2000 had less experience with housing programs than the staff used for the later study.
The officials said that, as a result, the staff from the earlier study may not have known enough about HUD's program policies to reliably determine whether tenants had reported all of their income sources.

While HUD's Performance and Accountability Report for Fiscal Year 2004 states that the department paid an estimated $191 million in fiscal year 2003 in gross improper rent subsidies due to unreported tenant income, this figure is not reliable because the number of tenant files with unreported income that were used to make the estimate was small. Specifically, HUD identified 30 tenant files, or 1.2 percent of the 2,401 tenant files in the sample, with at least one unreported income source. HUD officials agreed that because of the small number of files used for the estimate and the large variances in the amounts of income that tenants did not report, the margin of error was so large that the estimate was not meaningful—that is, the actual amount of improper rent subsidies for this source of error could have been as low as zero or many times higher than HUD's estimate. HUD officials stated that, even though the estimate may not be meaningful, the low incidence of tenants who did not report all sources of income could indicate that unreported income sources may not be a major problem. However, they also recognized that the low incidence is somewhat counterintuitive, given that tenants have an incentive to conceal income from program administrators, and it is possible that the methodology may not be adequately capturing the full extent of this problem. HUD indicated that obtaining a more precise estimate of the dollar error would require a considerably larger sample and that doing so would be difficult and costly.

HUD also stated in its Performance and Accountability Report for Fiscal Year 2004 that gross improper rent subsidies from unreported income decreased by 80 percent from fiscal year 2000 to 2003. HUD recognized in the report that the apparently significant reduction was partly due to improvements in its methodology. However, as discussed previously, any comparison between the two estimates is not valid because of the limitations of the fiscal year 2003 estimate and the significant differences in the methodologies used for the two years.

Neither of HUD's fiscal year 2000 and 2003 estimates of improper rent subsidies from unreported tenant income accounts for the different types of problems that may exist with unreported tenant income, but overcoming this limitation would be difficult. According to HUD, because the study's scope was limited to identifying sources of income that tenants did not report, the study did not evaluate differences between the amount of income reported by a tenant's employer (and entered in the quality control study database) and the amount reported in the new hires database. As a result, HUD could not account for those tenants who may have colluded with their employers to underreport their income to program administrators. Some program administrators we interviewed stated that they believe such collusion may be a problem, but no systematic data are available to confirm how widespread it might be. In addition, HUD's methodology does not account for cash income that tenants received but failed to report to program administrators. Some program administrators we met with said unreported cash income could be widespread but that data are not available to confirm the extent of the problem.
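The false-positive screening steps described earlier in this chapter lend themselves to a simple illustration. The following hypothetical Python sketch mimics the two screens HUD's study applied—eliminating income sources that are excluded by policy and income that falls outside the period covered by the most recent income examination. The record layout and the abbreviated exclusion list are assumptions for illustration; the report does not describe the study's actual data structures.

```python
# Hypothetical sketch of the false-positive screening applied when matching
# tenant-reported income sources against federal wage records. Source labels
# and fields are illustrative assumptions, not HUD's actual study design.
from datetime import date

# Abbreviated stand-in for income sources excluded from family income by policy.
EXCLUDED_SOURCES = {"income_from_minor", "training_program", "student_aid"}

def is_false_positive(source_type: str,
                      income_start: date, income_end: date,
                      exam_start: date, exam_end: date) -> bool:
    """Screen out apparently unreported income that is excluded by policy or
    that falls outside the period covered by the most recent income examination."""
    if source_type in EXCLUDED_SOURCES:
        return True   # definitional difference, not an unreported-income error
    if income_end < exam_start or income_start > exam_end:
        return True   # timing difference, not an unreported-income error
    return False

# A wage record from a qualifying training program is screened out even though
# the tenant never reported it to the program administrator.
print(is_false_positive("training_program",
                        date(2003, 1, 1), date(2003, 6, 30),
                        date(2003, 1, 1), date(2003, 12, 31)))  # True
```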
Although collusion and unreported cash income are potentially significant problems, it is not likely that there is any satisfactory way of quantifying their extent. Furthermore, HUD officials do not believe that there is an effective way of accounting for these problems in the department's methodology.

HUD did not produce complete and reliable estimates of the amount of billing errors in fiscal years 2000 and 2003 for the voucher, public housing, or project-based Section 8 programs. HUD attempted to estimate fiscal year 2000 billing errors for the voucher program and initially found about $1.5 billion in improper rent subsidies. However, after reviewing the results, HUD managers questioned both the study's validity and whether staff involved in the study had sufficient knowledge of program policies and accounting practices that pertain to the billing process. As a result, HUD sent program experts to conduct additional fieldwork to confirm the estimate. The experts reexamined approximately $1.2 billion of the total $1.5 billion in estimated billing errors, found that the estimate was unsupportable, and reduced it by over 80 percent. Given the questionable and incomplete nature of the original billing error study for vouchers, HUD determined that the results were inconclusive and unacceptable as a baseline error estimate. For the public housing program, HUD did not attempt to estimate billing errors. HUD has begun to develop and implement a methodology to establish a statistically valid baseline of billing errors for fiscal year 2003 for vouchers and public housing. According to HUD, this effort will be completed by September 2005.

For project-based Section 8, HUD estimated that approximately $100 million in gross improper rent subsidies was paid as a result of erroneous amounts billed to HUD and disbursed to private property owners in fiscal year 2003. This estimate was based on a small sample of 150 properties, and the concentration of errors in a small number of properties resulted in a large margin of error. However, according to HUD, the estimated amount of improper payments due to billing errors is relatively modest even at the high end of the error range. In its Performance and Accountability Report for Fiscal Year 2004, HUD acknowledged that it would need a sample six times larger to obtain normally accepted levels of estimation accuracy.

In addition to providing technical comments that we incorporated where appropriate, HUD stated that our draft report did not fully present the impact of HUD's efforts under RHIIP. For example, HUD stated that the draft report did not recognize the department's outreach, guidance, and training efforts as contributing factors to the reduction in estimated improper payments. The draft report discussed HUD's efforts under RHIIP, including guidance, training, and various outreach activities. The draft report also reflected the comments of HUD officials that program administrators' anticipation of increased oversight and monitoring by HUD probably led to voluntary improvements in their performance. We added language to the final report to incorporate HUD's view that these efforts contributed to the reduction. While we believe that HUD's view is reasonable, the specific extent to which these efforts contributed to the reduction in estimated improper payments is not known.

HUD disagreed with the draft report's finding that the department has complete and reliable estimates for only one source of error.
In particular, HUD described as "misleading" our statement that its fiscal year 2003 estimates of improper rent subsidies attributable to unreported tenant income and billing errors were unreliable because they were based on samples too small to produce accurate results, and it questioned the need to measure these errors more precisely. HUD also said that the estimated "incidence of cases" in which a tenant household did not report at least one source of income was 1.2 percent and that there was a 95 percent likelihood that the true incidence of such cases was between 0.1 and 2 percent. We do not believe that our draft report—which focused on the estimated dollar amount of improper payments due to unreported income rather than the estimated number of households with unreported income—was misleading. As the report stated, the margins of error for HUD's estimates of the dollar amount of improper payments were many times larger than the estimates themselves. Furthermore, HUD itself acknowledged in its comment letter that a much larger sample would be necessary to make a more precise dollar estimate. Accordingly, we made no changes to this finding. The draft report was not intended to criticize HUD's sampling methodology or to suggest that HUD attempt to make more precise estimates, which, as HUD indicated, could be difficult and costly. In addition, the report recognized that the problems with the reliability of the estimates were due partly to the small number of households with unreported income in HUD's samples. We revised the report language where appropriate to further clarify this point.

The Department of Housing and Urban Development (HUD) has made several program-level efforts under the Rental Housing Integrity Improvement Project (RHIIP) initiative to address improper rent subsidies for its public housing and voucher programs. However, several factors hampered HUD's implementation of these efforts. First, HUD instituted on-site Rental Integrity Monitoring (RIM) reviews to assess public housing agencies' (PHA) compliance with HUD's policies for determining rent subsidies, but these reviews, which are not a regular part of HUD's PHA oversight activities, were poorly implemented due to, among other things, a lack of clear policies and procedures. Second, HUD began implementing a new Web-based tenant income verification system, which is expected to significantly reduce tenant underreporting of income despite having some limitations. Finally, the training and guidance HUD provided to PHAs on its policies for determining rent subsidies were not consistently adequate or timely.

As shown in table 1, each of these efforts attempts to address the sources of errors discussed in chapter 2 (i.e., program administrator errors, unreported tenant income, and billing errors) that contribute to improper rent subsidies in the voucher and public housing programs. However, none of these efforts directly addresses billing errors. As noted previously, HUD does not have complete and reliable information on the extent to which billing errors are a problem for these two programs.

According to HUD officials, RIM reviews are the first comprehensive reviews of PHAs' tenant information files in more than 20 years. However, inadequate staff resources and competing work demands kept some HUD field offices from issuing reports in a timely manner or completing all of their other PHA oversight responsibilities. These and other factors have prevented HUD from determining the impact of its RIM review effort.
Recognizing the importance of regular monitoring of PHAs, HUD is considering implementing some type of on-site monitoring of PHAs' subsidy determinations on a permanent basis.

To address weaknesses in monitoring and help reduce PHA errors in rent subsidy calculations, in June 2002 HUD field office staff began conducting RIM reviews as part of the RHIIP initiative. RIM reviews are on-site evaluations of PHA procedures for collecting and verifying income information from tenants and for calculating subsidies. HUD's Rental Integrity Monitoring Guide (RIM Guide)—the department's manual for conducting RIM reviews—instructs field office staff to (1) review a sample of tenant files and recalculate each tenant's rent subsidy, based on information in the tenant file, to identify any subsidy miscalculations made by the PHA and (2) assess the PHA's written policies and procedures to determine the underlying causes of these miscalculations. According to the RIM Guide, the field offices are required to report their overall findings—for example, violations of HUD policies, such as misapplied deductions and lack of third-party verification of tenant income—in writing to PHAs, along with a list of specific subsidy calculation errors they identified. The field offices must also track PHAs' progress in addressing findings and correcting errors and provide technical assistance to PHAs, as needed. If a PHA fails to implement corrective actions or rectify errors found during a RIM review, HUD can sanction the PHA by withholding the voucher administrative fee or the public housing operating subsidy. HUD requires that the written report be sent to the PHA within 30 to 45 days of the end of the review. HUD field office staff completed 722 RIM reviews—the first of two rounds of reviews—between June 2002 and September 2003 (fig. 6).

In April 2003, HUD began conducting a second round of RIM reviews at selected PHAs to confirm whether (1) the calculation errors identified during the first round of RIM reviews had been corrected, (2) those PHAs that were required to implement corrective action plans to address findings from previous RIM reviews had done so, and (3) the implementation of corrective action plans led to a reduction in subsidy calculation errors. From April 2003 through October 2004, HUD field offices conducted second-round RIM reviews at 363 PHAs (fig. 6).

According to HUD and officials at several PHAs we met with, HUD did not routinely oversee subsidy determinations for the public housing and voucher programs at PHAs before the RIM reviews began in 2002. According to HUD, prior to 1980 the department reviewed, among other things, PHAs' management of their properties and their compliance with HUD policies and procedures. These reviews included an assessment of PHAs' subsidy determinations but not at the same level of detail as RIM reviews. Starting in the early 1980s and continuing through the 1990s, HUD did little to oversee the subsidy determination process at PHAs and instead focused its resources primarily on assessing the PHAs' physical and financial condition.

Starting in 1998, HUD increased its oversight of the voucher and public housing programs by creating two management and performance assessment systems. The Public Housing Assessment System (PHAS) evaluates four aspects of PHAs' operations—physical condition, financial condition, management operations, and resident satisfaction—but does not include an indicator for subsidy determinations.
In contrast, the Section 8 Management Assessment Program (SEMAP) includes an indicator that requires PHAs that administer voucher programs to self-certify to HUD annually that they have correctly determined each household's adjusted annual income—the basis for calculating rent subsidies. However, according to HUD, the limited scope of the reviews (SEMAP confirmatory reviews) field offices perform does not adequately ensure that PHAs' self-certifications are accurate. In most cases, the sample used to confirm a PHA's self-certification with SEMAP requirements is smaller than the sample reviewed as part of a RIM review. In addition, while PHAs selected for SEMAP confirmatory reviews are generally limited to those that are moving into or out of "troubled" status, RIM reviews cover a broader range of PHAs.

Inadequate resources and noncompliance with review policies and procedures affected field offices' efforts to implement RIM reviews. We examined 31 RIM review reports for 13 of the largest PHAs and HUD's quality assurance reviews—evaluations of the field offices' RIM reviews—of eight field offices. Our examination showed that limited resources and a lack of clear and timely guidance from HUD headquarters contributed to inconsistencies in the way field offices interpreted the department's policies and conducted RIM reviews.

Officials from most of the HUD field offices we met with said that they did not have enough staff to conduct all of their first-round RIM reviews within the 5- to 7-month period established by HUD and still fulfill their other oversight responsibilities. Also, several HUD quality assurance reports showed that field offices had limited staff to perform the reviews. As a result of these resource constraints, some field offices had to use staff with little or no experience in monitoring PHAs to perform RIM reviews, issued their RIM review reports late, and postponed other monitoring activities, such as inspections of troubled properties.

The number of staff assigned to RIM reviews and the number of reviews per staff member varied among the seven field offices we contacted. For example, we found that the number of first-round RIM reviews per staff member ranged from 0.8 in New York City to 3.5 in San Francisco (table 2). The average figure for all seven field offices was two RIM reviews per staff person. Notwithstanding other factors—such as the size of the PHA reviewed—that might have affected the ability of field offices to meet RIM review timing requirements, we found that those field offices with a low ratio of staff to reviews were likely to issue their reports after the 30- to 45-day deadline.

Recognizing that some field offices were having difficulty completing their RIM reviews within the 5- to 7-month time frame, HUD alleviated the burden at some of the field offices by assigning contractors or staff from other field offices to complete or assist with second-round reviews. For example, according to HUD, contractors completed 60 percent of the second-round RIM reviews assigned to the San Francisco field office. In addition, HUD relieved field offices of certain other oversight responsibilities to give them time to complete the RIM reviews within the required time frame. For example, HUD reduced the number of SEMAP confirmatory reviews field offices had to complete and allowed them to combine RIM and SEMAP reviews at larger PHAs.

HUD did not provide clear, timely policies for RIM reviews.
In some cases, the lack of clear and timely policies resulted in inconsistencies in the way field offices interpreted the department's policies and conducted RIM reviews. The following are some examples of these inconsistencies:

HUD did not clarify whether its policy on the use of outdated tenant income information applied to data obtained through HUD's income verification system. The RIM Guide states that PHAs should not use documentation that is more than 90 to 120 days old to verify tenant-reported incomes. HUD policy also requires that PHAs use data from HUD's income verification system if they have access to it. However, in conducting RIM reviews, some HUD field offices cited PHAs for not using data from this system, even though the PHAs had determined that the data were more than 120 days old.

HUD changed its definition of a "systemic finding" while the RIM reviews were under way. Although HUD had initially defined a systemic finding as an error (such as a misapplied deduction) that represented 30 percent or more of the total errors identified at one PHA, the department later redefined the term to mean violations of policy that were made "consistently," leaving the interpretation of "consistently" up to the field offices. Based on the RIM review reports we examined, we found that field offices had different interpretations. For example, one field office interpreted "consistent" as errors found in 15 percent or more of the files, while another field office interpreted it as errors found in 30 percent or more of the files.

As of December 2004, HUD had not developed a policy on the extent to which PHAs should correct the calculation errors found in their tenant files. As a result, the field offices we spoke with had varying requirements, with resulting variations in the amounts of time and resources PHAs expended to address the errors. For example, according to the PHAs we spoke with, some field offices required that PHAs review and correct all of their tenant files for errors—in one case 17,000 files—while others required PHAs to correct only the files that HUD examined during the RIM reviews.

HUD did not issue a policy on how to address PHAs' disagreements with RIM review findings until May 2004, over 8 months after completing the first round of reviews and 13 months after the field offices began conducting the second round of reviews. Prior to the release of this policy, the field offices had each handled PHAs' disagreements differently.

Our review of 31 RIM review reports completed by seven of HUD's field offices showed that the field offices did not consistently follow policies and procedures when conducting RIM reviews, analyzing the results of those reviews, and communicating the results of the reviews to PHAs. Specifically, we found that these field offices, contrary to HUD guidance, did not consistently provide appropriate support for each observation and finding—for example, by describing the problem, the reason for it, and its impact. Similarly, HUD's quality assurance reviews of field offices' RIM reviews revealed that several offices either had not supported their report findings or had failed to provide written reports to the PHAs. The RIM review reports we reviewed also did not demonstrate that the field offices we visited had a clear understanding of the difference between observations and findings.
HUD had defined observations as deficiencies in performance that were not based on a regulatory or statutory requirement but that should be brought to the attention of the PHA. HUD defined findings as conditions that were not in compliance with handbook, regulatory, or statutory requirements. Fifteen of the 31 RIM review reports we reviewed mischaracterized one or more "findings" as "observations" or vice versa. Properly classifying findings and observations is important because HUD policy requires PHAs to implement comprehensive corrective actions for findings but not for observations. Finally, HUD's RIM Guide stipulated that the field offices must provide a written report to the PHA no more than 30 days after the RIM review ended, but 18 of the 31 RIM review reports we reviewed were not released within the 30-day time frame. One PHA told us that it did not receive a report until 5 months after the completion of the RIM review and then only after PHA officials called HUD to request it.

Incomplete and inconsistent data kept HUD from analyzing the results of RIM reviews to assess improvements in PHAs' calculations of tenant subsidies and provide targeted oversight and technical assistance to PHAs to help them address specific errors. When the RIM reviews started in 2002, the department designed a database to collect information on the results of the RIM reviews, including the total amount of subsidy overpayments and underpayments, as well as the efforts PHAs had made to improve policies and procedures. According to HUD guidance, field offices must submit a report on subsidy calculation errors and systemic findings for each PHA to HUD headquarters within 30 days of receiving the PHA's response to the RIM review report. However, as of November 2004, HUD had not entered data in many of the fields in the database. HUD officials attributed this problem to field offices that did not submit the data in a timely manner and to a lack of personnel to manage data collection and entry tasks.

Even if the database were complete, HUD would not be able to perform a meaningful analysis of the RIM review data for most PHAs because of the changes it made to the criteria for selecting PHAs and tenant files. Because of these changes, HUD does not have comparable first- and second-round RIM review data for about 70 percent of the PHAs that it reviewed. Figure 7 shows the specific reasons why the data for PHAs were not comparable for the two rounds.

HUD is considering conducting additional rounds of RIM reviews sometime in 2005 but has not made any decisions on how it will determine which PHAs should be reviewed and how often these reviews should be conducted. Currently, RIM reviews are not a regular part of HUD's PHA oversight activities. HUD had initially intended to review each PHA one or two times to identify weaknesses in their policies and procedures for making subsidy determinations. According to HUD officials, they had not planned to implement routine monitoring of PHAs' subsidy determination processes. However, HUD officials said that, based on the results of the RIM reviews, they recognize that routine monitoring of PHAs may be necessary to mitigate the risk of improper rent subsidies in the future. As a result, the department is now considering making permanent some type of on-site monitoring of PHAs' subsidy determinations.
For example, HUD officials said that they are considering incorporating RIM reviews into the existing performance measurement systems or conducting reviews at high-risk PHAs every 2 or 3 years. However, according to these officials, budget and staff resources will ultimately determine the extent to which the department is able to monitor PHAs in the future.

To address tenant underreporting of income, HUD has implemented a new Internet-based income verification system that allows PHAs to compare income information they receive from tenants with income information employers report to government agencies. According to HUD officials, the system is intended not only to help PHAs detect unreported incomes but also to provide them with a more convenient and accurate way to verify tenant-reported information. HUD estimates that the system will yield savings of approximately $6 billion over a 10-year period for all of its rental assistance programs. Currently, the data in the system, which HUD obtained through agreements with state wage and income collection agencies, are available to 2,366 PHAs in 22 states. HUD continues to work to provide access for the PHAs in the remaining 28 states.

To increase the effectiveness and efficiency of its income verification effort, HUD intends to replace the data from the individual state agencies with similar data from a single source, the National Directory of New Hires—a database containing quarterly federal and state wage data, quarterly unemployment data, and monthly new hire data reported by employers to state agencies and compiled by the Department of Health and Human Services. Congress passed legislation in January 2004 that granted HUD the authority to request and obtain data from this directory. In addition, HUD officials told us that Social Security income information, which PHAs currently access through an existing system, will eventually be accessible through this new system.

According to HUD, regardless of the data source used, the income verification system does not capture unreported cash income and certain types of wages that may not be required to be reported to state agencies. In addition, income from unauthorized tenants (i.e., tenants who are not on the lease but who live in the apartment and help pay the rent) is not captured. However, some PHAs have developed ways to capture these types of income and recover improper subsidy payments. For example, several PHAs we spoke with have fraud detection units, and several have partnered with state and local agencies, including departments of labor and human services, to obtain welfare and other wage information.

Although officials of most of the 14 PHAs we contacted said that they welcomed new tools such as the income verification system that would help them verify tenant incomes and more accurately determine tenant subsidies, several also expressed concerns that the wage and income data were too old to verify tenant income. HUD policy states that data used to verify income must be no more than 120 days old (or about 4 months) on the date of the tenant's certification or recertification of eligibility. HUD estimates that the income verification data are approximately 3 months old. However, due to large caseloads—sometimes as many as 750 tenants per caseworker—the PHAs generally begin collecting tenant income information 3 to 4 months prior to conducting an annual meeting to recertify the tenant's eligibility for housing assistance and recalculate the rent subsidy amount.
As a result, verification data can be up to 6 months old on the date of recertification. HUD officials told us that they are aware of this problem and are working with the Department of Health and Human Services to improve the timeliness of the data in the National Directory of New Hires.

HUD provided training and guidance to PHAs on topics such as how to calculate subsidies, improve quality control procedures, and comply with third-party income verification requirements, but these efforts were not always adequate or timely. For example, although HUD sponsored training for PHAs in January and February of 2004 in order to prepare PHAs for RIM reviews, the training took place after all of the first-round RIM reviews and 54 (15 percent) of the second-round RIM reviews had been completed (fig. 8). This training addressed program basics, including how to interview prospective tenants, verify tenant income information, and calculate rents. It also provided guidance to PHAs on developing policies and procedures that would prevent future subsidy calculation errors. According to some PHAs, had the training been held prior to the RIM reviews, they would have been better able to understand the basis for the RIM review findings and the corrective actions needed to address them. In addition, all of the 14 PHAs we spoke with said that they had sent a limited number of staff to the training because, for example, HUD had held only two training sessions—one in California and one in Florida. Some PHAs said that they did not have sufficient travel funds to send their staff to these locations.

In addition to training, HUD provided technical assistance through a contractor to PHAs that were deemed high risk on the basis of their performance in the first round of RIM reviews. According to a HUD official, 10 PHAs received technical assistance from the contractor between October 2002 and April 2004. The technical assistance focused on areas such as organizing tenant files, verifying tenant incomes, and calculating rent subsidies.

Finally, HUD updated or developed guidance for PHAs on how to correctly calculate rent subsidies and reduce errors. However, some of this guidance was released late in the RIM review process, contradicted other guidance, or did not provide enough information. For example, HUD did not revise its public housing guidebook—PHAs' basic program reference—to reflect changes in program regulations until June 2003, a year after the RIM reviews began. In addition, HUD did not reconcile minor discrepancies between the voucher and public housing guidebooks on acceptable forms of third-party income verification until it issued detailed instructions on HUD's income verification policies in March 2004.

Until recently, HUD provided little oversight of PHAs' subsidy determinations for the voucher and public housing programs. Although introducing SEMAP and PHAS in the late 1990s allowed HUD to better oversee PHAs' performance, SEMAP provides only limited monitoring of PHAs' compliance with HUD's policies for determining rent subsidies, and PHAS provides none at all. HUD began implementing RIM reviews in 2002 but has not made the reviews a permanent part of its oversight activities. In the absence of regular monitoring, HUD cannot determine the extent to which individual PHAs comply with its policies for determining rent subsidies. Furthermore, although HUD conducted over 700 RIM reviews, it did not collect complete or consistent information from these reviews.
As a result, HUD cannot assess PHAs' performance over time or identify those that have made errors in determining subsidies and thus may require additional oversight and technical assistance. Further, the lack of complete and consistent information on the results of RIM reviews limits HUD's ability to identify the factors that contribute the most to improper subsidy determinations and target its corrective efforts.

To enhance HUD's ability to reduce improper subsidies in its public housing and voucher programs, we recommend that the HUD Secretary take the following two actions: (1) make regular monitoring of PHAs' compliance with HUD's policies for determining rent subsidies a permanent part of HUD's oversight activities and (2) collect complete and consistent information from these monitoring efforts and use it to help focus corrective actions where needed.

HUD agreed with our recommendation that the department regularly monitor PHAs' compliance with its policies for determining rent subsidies for the public housing and voucher programs and collect information from these monitoring efforts. HUD said that it recently updated its RHIIP plan to address this recommendation. However, in addition to providing technical comments that we incorporated where appropriate, HUD commented that the draft report did not adequately recognize the increase in HUD's monitoring resulting from the RIM reviews or acknowledge that the scale of its monitoring efforts depends on the level of budgetary resources it receives. Specifically, HUD commented that the steady downsizing of the department's staffing over the past decade had caused HUD to rely on remote monitoring systems, risk-based monitoring practices, and voluntary compliance by third-party program administrators. Our draft report stated that the RIM reviews represented a significant increase in HUD's monitoring of PHAs compared with its efforts over the previous 20 years. Further, the draft report recognized that budget resources will ultimately determine the extent to which the department is able to monitor PHAs.

The Department of Housing and Urban Development (HUD) has taken steps to implement Rental Housing Integrity Improvement Project (RHIIP) efforts for its project-based Section 8 programs but also faces several challenges. First, HUD has improved its policies and guidance for its project-based Section 8 programs and trained property owners, contract administrators, and HUD field office staff on their administrative and oversight responsibilities. However, a key part of the guidance, calling for contract administrators to collect information on improper rent subsidies at each property, was not widely followed, partly because the data collection effort was not mandatory and duplicated some contract administrators' existing procedures. Second, to improve verification of tenant income, HUD has gained access to a national database of employment and wage information. But HUD will not be able to use the database for its project-based Section 8 programs until at least fiscal year 2006 because of data security issues surrounding the disclosure of tenant income information to private property owners. Finally, to implement RHIIP's monitoring effort, HUD plans to rely on performance-based contract administrators (PBCA) to monitor property owners' compliance with HUD's subsidy determination policies.
HUD's requirements for PBCAs call for extensive monitoring of the process for determining subsidies, but HUD may face challenges in ensuring that PBCAs follow these requirements. As shown in table 3, these efforts collectively attempt to address the sources of errors discussed in chapter 2 (i.e., program administrator, unreported tenant income, and billing errors) that contribute to improper rent subsidies in the project-based Section 8 programs.

As part of RHIIP, HUD improved its project-based Section 8 guidance and training for property owners, contract administrators, and HUD field staff in order to improve their understanding of HUD's policies for determining rent subsidies. Although HUD's new monitoring guidance called for contract administrators to collect information on improper rent subsidies at each property, compliance with this guidance was limited.

HUD's handbook for project-based Section 8 sets forth the requirements and procedures that property owners must follow in administering these programs, including determining rent subsidies. In May 2003, HUD revised this handbook to reflect regulatory and policy changes that have occurred since the last significant revision in 1995. The 2003 revision included updated information on tenant screening, eviction, and citizenship requirements, as well as a new method of estimating future medical expenses. Officials at four PBCAs and five HUD field offices we contacted generally agreed that the revised handbook represented a significant improvement over the previous one. To supplement the handbook, HUD established various resources, such as field office RHIIP coordinators and a Web-based "help desk" that allows HUD to respond to questions about program policies submitted by HUD field office staff, contract administrators, and property owners.

HUD also provided additional information on proper rent subsidy determinations and the RHIIP initiative. For example, HUD issued "fact sheets" on the rent determination process for property owners and tenants, which described tenants' rights and responsibilities regarding income disclosure and third-party verification of income. HUD also issued periodic newsletters that included a description of the status of the initiative.

In August 2003, HUD issued a new monitoring guide to help contract administrators improve their oversight of property owners' subsidy determinations. HUD intended the guide to provide contract administrators with a consistent approach for identifying and recording errors in subsidy determinations during management and occupancy reviews. Management and occupancy reviews are detailed assessments of a property's management, physical and financial condition, and compliance with program policies and procedures, including policies concerning the eligibility of tenants and accuracy of subsidy determinations. However, the new guide was not mandatory, and the contract administrators we contacted—including PBCAs and HUD field offices—said that they used the guide to varying degrees. HUD is currently revising its management and occupancy review policies, which include detailed procedures for assessing rent subsidy determinations. According to HUD, the revised policies, unlike the monitoring guide, will be mandatory for contract administrators. The revised policies are currently under departmental review, and the date of their implementation is uncertain.
HUD accompanied these efforts with training for property owners, contract administrators, and HUD field offices on the updated handbook and new monitoring guide. HUD-sponsored training was primarily targeted to HUD field office staff and contract administrators and, according to HUD, nearly 2,000 individuals participated in 45 training sessions on HUD's revised program handbook from June through December 2003. In addition, nearly 700 HUD staff and contract administrator personnel attended a satellite broadcast session on the revised program handbook and the new monitoring guide. Reaction to the HUD-sponsored training from the four PBCAs and five HUD field offices we spoke with was generally positive. Most of the PBCAs and HUD field offices indicated that HUD had done a satisfactory job of using training to emphasize the importance of properly determining rent subsidies.

In addition to HUD-sponsored training, private training organizations, including professional training companies and housing industry groups, offered courses on project-based Section 8 program policies. For example, according to HUD, property owners, contract administrators, and HUD staff attended sessions on the revised program handbook, which covers HUD's policies for determining rent subsidies. HUD officials stated that sessions on HUD's program policies occur regularly. On the basis of a survey of major training organizations, the department estimated that nearly 10,000 property owners and contract administrators attended such sessions from June through December 2003.

To monitor property owners' compliance with HUD's policies, HUD planned to collect information from contract administrators on the types and frequency of errors property owners made in determining subsidies. In the monitoring guide issued in August 2003, HUD recommended that contract administrators record subsidy errors identified during management and occupancy reviews and monthly voucher payment reviews in a uniform "tracking log." However, for several reasons, the tracking log was not widely used. First, because the log was part of HUD's recommended guidance and, therefore, not mandatory, HUD could not require contract administrators to use it. Second, according to some PBCA and HUD officials, some contract administrators found the log duplicative because they were already collecting much of the information, although not in a uniform manner. Finally, some HUD and PBCA officials said that the tracking log was problematic because errors caught during the voucher review process were generally rectified before property owners were paid and should not have been recorded on the log as subsidy errors.

As noted previously, HUD is in the process of revising mandatory procedures for contract administrators to use in identifying and recording subsidy errors during management and occupancy reviews. According to HUD, the revised procedures will require contract administrators to collect uniform information on subsidy errors, as the tracking log was intended to do. Because these revised procedures apply only to management and occupancy reviews, they will not cover information on subsidy errors—including program administrator errors—found during monthly payment voucher reviews, which PBCAs already track separately.

HUD plans to implement a Web-based income verification system for project-based Section 8, a key effort under RHIIP, after it addresses data security concerns.
According to HUD, income verification systems are a critical component of the department's efforts to reduce improper subsidy payments because these systems provide property owners with information necessary to independently check the accuracy of the incomes tenants report and identify any income source not reported by the tenant. As discussed in chapter 3, Congress granted HUD access to the National Directory of New Hires (new hires) database to verify tenant incomes in its rental assistance programs, including its project-based Section 8 programs, and required that HUD demonstrate to the Department of Health and Human Services that all necessary steps had been taken to prevent the inappropriate disclosure of information from the database before program administrators are given access. To alleviate concerns about releasing sensitive information to private property owners, HUD will initially make the data available only to public housing agencies (PHA) and confirm that the system is secure. If the Department of Health and Human Services is satisfied with HUD's security precautions, HUD plans to make the data from the new hires database available to private owners of project-based Section 8 properties by fiscal year 2006. Once the system is implemented, property owners will be able to access earned income data from a secure Web site. In addition, HUD officials told us that Social Security income information, which property owners can currently access through an existing system, will eventually be accessible through the new system.

HUD plans to rely on PBCAs to monitor property owners' compliance with HUD's policies for determining rent subsidies. For the past several years, HUD has been transferring contract administration responsibilities for project-based Section 8 properties from HUD field offices to the PBCAs but, due to resource constraints, has had difficulty monitoring the nearly 6,300 properties that are still the responsibility of field office staff. Although HUD's requirements for PBCAs call for extensive monitoring of the subsidy determination process, HUD may face challenges in ensuring that PBCAs follow these requirements. Finally, HUD has continued to work with contract administrators and property owners to improve the completeness of tenant income information in a database used, among other things, to monitor property owners' subsidy calculations.

In 2000, prior to the start of RHIIP, HUD began transferring the administration of project-based Section 8 contracts from HUD field offices to PBCAs. As of October 2004, HUD's project-based Section 8 program consisted of about 21,900 properties, and HUD had transferred contracts for about 11,800 of these properties to PBCAs. As of the same date, according to HUD, field offices served as contract administrators for about 6,300 properties, including 2,200 properties to be transferred to PBCAs sometime in fiscal year 2005 and about 4,100 properties with contracts that HUD will competitively source to a new contract administrator by the end of fiscal year 2005. HUD also plans to transfer about 3,800 additional properties to PBCAs that are currently the responsibility of "traditional" (i.e., not performance-based) contractors as these properties' contracts come up for renewal. HUD has transferred contract administration responsibilities to PBCAs because its field offices lack the resources to adequately monitor properties.
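These counts fit together as a simple tally. The sketch below is plain arithmetic on the approximate October 2004 figures cited above, not data drawn from any HUD system, and it confirms that the three administrator groups account for the roughly 21,900 properties in the program:

```python
# Tally of project-based Section 8 properties by contract administrator type,
# using the approximate October 2004 figures cited above.
pbca_properties = 11_800         # contracts already transferred to PBCAs
field_office_properties = 6_300  # still administered by HUD field offices
traditional_properties = 3_800   # held by "traditional" contract administrators

print(pbca_properties + field_office_properties + traditional_properties)  # 21,900

# The field-office group breaks down into two planned dispositions:
to_pbca_in_fy2005 = 2_200        # to be transferred to PBCAs in fiscal year 2005
to_competitive_sourcing = 4_100  # to be competitively sourced by end of fiscal year 2005
assert to_pbca_in_fy2005 + to_competitive_sourcing == field_office_properties
```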
HUD requires PBCAs to perform annual management and occupancy reviews for all of their assigned properties and conduct monthly reviews of all payment vouchers submitted by property owners. In contrast, HUD field offices are not conducting the same level of monitoring for all of their 6,300 properties. For example, HUD conducted management and occupancy reviews for about 1,800, or approximately 30 percent, of these 6,300 properties in fiscal year 2004. According to HUD, the field offices did not perform annual management and occupancy reviews for all of these properties because of insufficient staff and funding. HUD policy also requires field offices to review the monthly payment vouchers for their properties. However, HUD's Office of Inspector General (OIG) reported in its audit of HUD's fiscal years 2002 and 2003 financial statements that the field offices were performing monthly voucher reviews for only about 2 percent of the vouchers for their assigned properties.

According to HUD, traditional contract administrators also have generally not conducted management and occupancy reviews each year for all of their properties or routinely reviewed monthly vouchers submitted by property owners. HUD officials we contacted also said that although the department required that the traditional contractors perform management and occupancy reviews and voucher reviews, their contracts (unlike those with PBCAs) did not specify how frequently. HUD officials stated that, similar to HUD field offices, traditional contract administrators had concentrated their monitoring efforts on troubled properties. In fiscal year 2004, traditional contract administrators conducted management and occupancy reviews for 900, or 24 percent, of their assigned properties. HUD does not have data on the number of payment vouchers reviewed for properties with traditional contract administrators.

By transferring more of its project-based Section 8 properties to PBCAs, HUD plans to increase oversight of these properties and meet RHIIP's goal of reducing improper rent subsidy payments. According to HUD, the ongoing PBCA initiative precluded the need for HUD to implement a monitoring process for its project-based Section 8 programs similar to the Rental Integrity Monitoring (RIM) reviews for the voucher and public housing programs. HUD officials also said that, because of limited resources and the large number of project-based Section 8 properties, the field offices would not have been able to carry out a monitoring effort as extensive as the RIM reviews. (About 22,000 property owners administer project-based Section 8 programs, compared with about 3,300 PHAs that administer vouchers and public housing.)

As noted previously, PBCAs are responsible for performing annual management and occupancy reviews for all of their assigned properties and monthly reviews of all payment vouchers. As part of these reviews, PBCAs are required to determine whether the owners have properly calculated subsidy determinations and independently verified tenant-reported information. As of October 2004, about 11,800 properties were assigned to PBCAs, and over 90 percent of these properties received a management and occupancy review. In reviewing payment vouchers, PBCAs must ensure that the tenant information in HUD's databases is consistent with the requested payment amount. When errors are found, the PBCA must correct the voucher by the amount of the error. To ensure that the PBCAs meet HUD's performance standards, HUD has developed a comprehensive oversight program.
Specifically, HUD field office staff are required to review status reports provided by the PBCAs, conduct annual compliance reviews, and use the results of these reviews to determine the compensation PBCAs should receive. Implementing these oversight measures could pose challenges for HUD. For example, the OIG reported in its fiscal year 2004 financial statement audit of HUD that two of the four PBCAs it reviewed were not consistently verifying whether the project owner had properly calculated subsidy amounts and independently verified tenant-reported information. In addition, prior GAO work has shown that HUD has often not provided adequate oversight of contractors, a factor that in 2003 led us to designate acquisitions management as one of HUD's major management challenges.

According to HUD, ensuring the completeness of tenant data by enforcing HUD's data reporting policy is a critical component of RHIIP that will enable the department to reduce the amount of improper rent subsidies. Contract administrators use HUD's Tenant Rental Assistance Certification System (TRACS) to monitor property owners, including identifying discrepancies between owners' payment voucher requests and the rent subsidy information recorded in the system. To perform their monitoring function effectively, contract administrators must ensure that property owners submit complete and accurate data in TRACS, as required by HUD policy.

Since RHIIP began, HUD has improved the completeness of tenant data in TRACS. Specifically, according to HUD, the percentage of units in TRACS for which owners reported tenant income information (i.e., the reporting rate) increased from 88 percent in December 2003 to about 95 percent in October 2004. Properties with contracts administered by PBCAs had a higher average reporting rate as of October 2004—over 95 percent—than properties administered by HUD field offices or traditional contract administrators. This is because PBCAs perform monthly voucher reviews for all payments and thus must ensure that the information in TRACS is complete. As of that same date, HUD field offices and traditional contract administrators, which conduct fewer payment voucher reviews, had average reporting rates of 85 and 75 percent, respectively.

HUD has continued to work with contract administrators and property owners to improve TRACS information by enforcing the data reporting policy. In October 2004, HUD began notifying property owners that the department would withhold subsidy payments if tenant information was not provided for at least 85 percent of tenants. According to HUD, the department suspended subsidy payments for 10 noncompliant property owners in November 2004 and expects to suspend payments for another 1,800 owners in December 2004.

HUD concurred with our finding that guidance for collecting data on the types and frequency of errors property owners made in determining subsidies was not widely followed and stated that it would revise its contracts with PBCAs to address this issue. HUD disagreed with a recommendation in our draft report that the department analyze data it has collected on program administrator errors by differentiating among types of contract administrators and use this information to determine whether additional efforts to reduce this source of error are needed in the project-based Section 8 programs.
HUD's letter characterized our recommendation as one to "expand the process" to provide for separate error rates, noting that sample sizes would need to be tripled to permit statistically valid comparisons and questioning whether such an effort would be cost-beneficial. Recognizing HUD's increasing use of PBCAs, our recommendation concerned only data that HUD had already collected and was not intended to expand the scope of future data collections. In light of HUD's comments on the insufficiency of its existing data, we did not include this recommendation in our final report.

Noting the relationship between its ability to monitor and the level of resources it is provided, HUD stated that it "remains to be seen" whether requested resources will be provided to achieve comparable monitoring levels of program administrators for all of its project-based assistance programs. We agree that budget resources will ultimately determine the extent of HUD's monitoring. Further, prior GAO work has shown that HUD has not always provided adequate oversight of program intermediaries, a contributing factor to our designation of the department's rental assistance programs as a high-risk area.

As part of the Rental Housing Integrity Improvement Project (RHIIP), the Department of Housing and Urban Development (HUD) is considering ways to simplify its policies for determining rent subsidies. HUD has met with program administrators and other interested groups to discuss simplification approaches. However, HUD has not conducted a formal study on the impact of these approaches on tenant rental payments and program costs. According to HUD, a major reason for subsidy calculation errors is the complexity of the existing policies. For example, program administrators must determine tenants' eligibility for 44 different income exclusions and deductions to determine their rent payments and subsidies. One key concern is the impact that simplification could have on how much tenants pay in rent. Specifically, some tenants could end up paying a larger share of their income toward rent if the income deductions and exclusions that currently provide additional rent relief to them are eliminated, although others could pay less under certain approaches. In addition, the transition to simplified policies could create confusion among program administrators and tenants in the short term.

As one of its efforts under RHIIP, and as mandated by The President's Management Agenda for Fiscal Year 2002, HUD is considering various approaches for statutory, regulatory, and administrative streamlining and simplification of its policies for determining rent subsidies. According to HUD, simplification is a key part of the department's long-term strategy for reducing the risk of improper rent subsidies that result from the complexity of HUD's current policies. As of December 2004, however, HUD had not officially proposed any approach to simplification for all of its rental assistance programs. HUD intends to formulate a proposal early in calendar year 2005 after it meets with industry stakeholders. Because most of HUD's policies for determining rent subsidies have a basis in statute, major changes to these policies would likely require congressional action. In order to reform program administration and control rising subsidy costs, HUD proposed legislative changes for the voucher program in its fiscal year 2004 and 2005 budget proposals through the Housing Assistance for Needy Families and the Flexible Voucher program, respectively.
These two initiatives called for simplification of the voucher program's policies, including those for determining rent subsidies. Specifically, the initiatives would have provided administering agencies with the flexibility to determine their own rent policies. However, Congress did not include either of these initiatives in HUD's appropriations acts.

In October 2004, HUD met with various program administrators and industry and tenant groups to discuss different approaches for simplifying HUD's policies for determining rent subsidies and to gauge the extent to which program stakeholders support simplification. According to HUD, most of the participants agreed on the need for simplification and discussed how best to meet this goal. HUD field office staff, program administrators, and industry groups that we spoke with also generally agreed on the need for simplification. Specifically, all of the HUD field office staff we interviewed supported some form of simplification, and nearly all of the 14 program administrators we interviewed also supported simplification, but many were concerned about the impact on existing tenants. The major industry groups we met with were also supportive of simplification.

The October 2004 meeting concluded with HUD considering performing more extensive analysis of the various approaches to simplifying its policies for determining rent subsidies. However, HUD has not determined when it will begin performing this analysis. Although prior to this meeting HUD staff had conducted preliminary internal analyses of the impact of certain simplification approaches on tenant rental payments and program costs, as of December 2004, HUD had not conducted a formal study on the possible impact of policy changes for consideration by policymakers.

A 2001 HUD study characterized HUD's policies for determining rent subsidies as "detailed, complex, sometimes ambiguous, and subject to relatively frequent legislative changes." HUD field offices, program administrators, and industry groups we interviewed frequently cited the complexity of these policies as a concern and identified it as a major obstacle in reducing improper rent subsidies. For example, HUD's current policies include 44 income exclusions and deductions that program administrators must consider when determining rent subsidies and tenants' rental payments. The purpose of some of these income exclusions and deductions is to provide additional relief to certain tenants, such as elderly and disabled households with large medical expenses, by reducing the amount they contribute toward rent. Other income exclusions exist to counteract potential work disincentives in housing assistance programs—for example, the higher tenant rental payments that would otherwise result from increased income.

As an example, some HUD field office staff and program administrators we spoke with cited the earned income disallowance as a complex income exclusion. The earned income disallowance was initially established in 1990 by the Cranston-Gonzalez National Affordable Housing Act (Pub. L. No. 101-625) and was revised in 1998 by the Quality Housing and Work Responsibility Act (Pub. L. No. 105-276). The disallowance policy provides special treatment to families whose earned income increases as a result of (1) employment of a family member who was previously unemployed for one or more years or (2) participation of a family member in a family self-sufficiency or other job training program.
Families that qualify under these provisions are not subject to the increases in their rental payments that usually occur when incomes grow, for a 12-month period known as the "full exclusion period." The rent may be increased during the following 12-month period, called the "phase-in period," but the increase may not be greater than 50 percent of the amount of the full rent increase that would occur otherwise. After completion of both the full exclusion and phase-in periods, the tenant's rent increases by the full amount. However, low-income tenants often have jobs with little security—that is, they move in and out of employment and training programs and their income may vary considerably from job to job. To account for this, HUD developed additional administrative guidelines. For instance, during the full exclusion and phase-in periods, the months for which a family can claim the disallowance do not need to be consecutive. Consequently, a household member can become unemployed and stop claiming the disallowance and then become reemployed in a later month and begin claiming the disallowance again. However, keeping track of when tenants are employed and the amount by which the income increased is difficult and places a significant burden on program administrators.

The process for determining rent subsidies is further complicated by the difficulty some program administrator staff may have in understanding and implementing HUD's program requirements. According to multiple field office staff, program administrators, and industry groups we met with, program administrator staff responsible for calculating rent subsidies are often poorly paid, have large caseloads, and have limited education. These factors can contribute to misapplication of program policies that results in errors in subsidy calculations. In addition, these same groups commented that these types of positions have high turnover, and as a result it is difficult for program administrators to retain knowledgeable and experienced staff.

As noted previously, HUD is considering various approaches for statutory, regulatory, and administrative streamlining and simplification of its subsidy determination policies. Regardless of the approach HUD ultimately adopts, a major concern is the effect that policy simplification will have on tenant rental payments. It is possible that tenants' rental payments could decrease under certain simplification approaches. However, tenants could also see rent increases if, all other things being equal, the income deductions and exclusions that currently provide additional rent relief to them are eliminated. In addition, simplification of HUD's policies for determining rent subsidies could be difficult to implement and could create confusion among program administrators and tenants in the short term.

HUD is currently considering three basic approaches to simplifying its subsidy determination policies: (1) income-based rents, (2) tiered flat rents, and (3) mixed approaches. Descriptions of these three approaches follow.

Under an income-based approach, the tenant rental payment is set at a certain percentage of the tenant's income. The rent subsidy covers the difference between the contract rent for the unit (or the operating cost for a public housing unit) and the amount that the tenant pays. A simplified income-based approach could involve a limited number of exclusions or deductions or none at all.
For example, one approach could involve tenants paying 30 percent of their gross income in rent, with qualifying tenants receiving standard deductions for special needs. A different approach HUD has considered would allow elderly, disabled, and working families to pay 27 percent of their gross income in rent while all others pay 30 percent. No other deductions or exclusions would be used in determining the subsidy amount under this approach.

Under a tiered flat rent system, tenant rents would be calculated for several income bands—for example, low, very low, and extremely low income—and tenants would not see their rents adjusted as their incomes changed provided that their incomes remain within the same tier. This option is somewhat similar to that used at properties developed with Low-Income Housing Tax Credit assistance. Under the tax credit program, property owners reserve some of their units for tenants at or below certain income limits—either 50 or 60 percent of the area's median gross income. The owners must restrict tenant rents in these units to 30 percent of the income limit, adjusted for the number of bedrooms.

Under a mixed approach, HUD would give program administrators various rent structures to choose from, including income-based rents and tiered flat rents. This approach would give program administrators the flexibility to choose the method that best fits their community demographics and other factors. Currently, HUD's Moving-to-Work demonstration program allows participating public housing agencies (PHA) to obtain exemptions from certain public housing and voucher program rules, including those related to the calculation of rent subsidies, and to design and test various approaches to providing and administering housing assistance. As long as the PHA serves substantially the same number of households that it served under the public housing and voucher programs, the PHA is free to design its own rent structure for its tenants. HUD plans to study PHAs' experiences under the Moving-to-Work demonstration as a possible model for simplifying its policies.

Regardless of which simplification approach is ultimately adopted, a major concern of program stakeholders is the effect that policy simplification will have on tenant rent burdens. Although changes to policies could result in some tenants paying less in rent, some tenants could end up paying more in rent if, all other things being equal, the current system of income exclusions and deductions that provides additional rent relief were eliminated. To illustrate, we analyzed the potential effects of using a simple income-based approach in which tenant rents are set at 30 percent of gross income. Based on our analysis of HUD's data for fiscal year 2003, we found that tenants would see their rent go up by an average of $30 per month ($360 annually), or 16 percent. About 10 percent of these households would see their rents go up by at least $72 per month (or $864 annually). Elderly and disabled households and large families who currently benefit the most from HUD's exclusions and deductions would be hit the hardest by the elimination of these income adjustments. To take these households into account, we also estimated the average change in tenant rents using an approach in which elderly, disabled, and working families would pay 27 percent of their gross income in rent, all others would pay 30 percent, and no other deductions or exclusions would apply. (A sketch of these rent formulas appears below.)
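To make the options just described concrete, here is a minimal sketch of each rent formula in Python. It illustrates only the percentages described above, not HUD's actual rent-determination logic, and the household figures are hypothetical:

```python
def rent_flat_30(gross_monthly_income: float) -> float:
    """First option: every tenant pays 30 percent of gross income in rent."""
    return 0.30 * gross_monthly_income

def rent_27_30(gross_monthly_income: float, elderly_disabled_or_working: bool) -> float:
    """Second option: elderly, disabled, and working families pay 27 percent of
    gross income; all other households pay 30 percent. No other deductions or
    exclusions apply under this approach."""
    rate = 0.27 if elderly_disabled_or_working else 0.30
    return rate * gross_monthly_income

def tiered_flat_rent(monthly_income_limit: float) -> float:
    """Tax-credit-style analogue of a tiered flat rent: rent is 30 percent of
    the applicable income limit (already adjusted for bedroom count), so it
    does not move with the individual tenant's income within the tier."""
    return 0.30 * monthly_income_limit

# Hypothetical household with $1,000 in gross monthly income:
print(rent_flat_30(1_000))       # 300.0
print(rent_27_30(1_000, True))   # 270.0 (elderly, disabled, or working)
print(rent_27_30(1_000, False))  # 300.0
```

Under either income-based option, the subsidy would then be the contract rent (or, for public housing, the operating cost) minus the tenant payment.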
Again using HUD's tenant data from fiscal year 2003, our analysis showed that this option would increase tenant rents, on average, by $16 per month ($192 annually), or 12 percent. About half of current tenants would see modest increases of less than $10 per month, and around one-quarter could see increases of at least $28 per month. In addition, the rents for about 25 percent of the tenants would remain unchanged or decrease under this approach. A more detailed study by HUD would be necessary to determine the impact of the other policy simplification approaches on tenants' rental payments as well as on program costs.

Simplification of HUD's policies for determining rent subsidies may be difficult to implement and will have a direct impact on how program administrators conduct their work. Depending on the magnitude of program changes, program administrators—the approximately 22,000 property owners and 3,000 PHAs—will have to retrain staff, update written procedures and administrative plans, and make potentially costly modifications to their software applications. Program administrators will also have to perform tenant outreach to explain changes to existing and new tenants. If HUD determines that existing tenants would be protected from any increases in rent that result from simplified policies, program administrators would have to deal with the difficulties of treating existing and new tenants under different sets of policies. Furthermore, gradually phasing in rent increases for existing tenants would add additional complexities to the administration of the programs and require extensive regulatory guidance from HUD. These changes would likely take time and involve some trial and error before they are fully implemented.

It is possible, at least in the short term, that transitioning to simplified policies for determining rent subsidies would result in confusion among program administrator staff and errors in calculating rent subsidies. This problem is more likely if the changes made to program policies are comprehensive, requiring extensive retraining of staff. Because HUD is in the early stages of developing a policy simplification strategy and has not conducted a formal study of these issues, it is not possible to describe how HUD intends to address these difficulties.

Although part of HUD's long-term strategy to reduce the risk of improper rent subsidy payments under RHIIP involves simplifying statutory and regulatory policies for determining rent subsidies, the department has not conducted a formal study of possible simplification approaches. According to HUD and program administrators, existing policies are difficult to implement and have made the process prone to error. Many of these policies are intended to provide additional relief to tenants by reducing their rents under certain circumstances. However, HUD must weigh the degree of relief these policies provide against the administrative burden they create and the increased risk of error they generate. Because most current policies stem from specific statutes, simplifying them would likely require congressional action. In order to inform potential debate on this issue, policymakers will need to fully understand how simplification could affect the amount of rent subsidy errors, program administrators' workload, tenants' rental payments, and program costs. Regardless of the simplification approach that is adopted, HUD will face many difficulties in implementing the necessary policy changes.
In particular, HUD will need to promote an efficient transition and assist program administrators in making the necessary adjustments to their procedures. To ensure that HUD's rental assistance programs are administered effectively and that policymakers have sufficient information with which to consider potential simplification approaches, we recommend that the HUD Secretary study the possible impact of alternative strategies for simplifying program policies on subsidy errors, tenant rental payments, program administrators' workload, and program costs. As part of the study, HUD should determine how it intends to implement proposed changes and indicate how the department would help tenants transition from the old to the new rent structures.

HUD stated that our draft report did not mention legislative initiatives in its fiscal year 2004 and 2005 budget justifications—the Housing Assistance for Needy Families and the Flexible Voucher programs—to simplify the voucher program's policies for determining rent subsidies. These two initiatives were primarily intended to reform the funding mechanism for and the administration of the voucher program but also would have allowed administering agencies the discretion to define their policies on tenant eligibility and for determining rent subsidies. We included a description of these two initiatives in our final report.

HUD did not respond directly to our recommendation that the department study the impact of simplifying policies for determining rent subsidies but said that the report incorrectly stated that HUD has not conducted formal studies on or otherwise considered the effects of its program simplification proposals. HUD also stated that all of its proposals for simplifying subsidy determination policies had undergone extensive analysis. Our draft report did not state that HUD had not considered the effects of program simplification and, in fact, cited HUD's efforts to analyze simplification approaches. Further, during the course of our review and in its technical comments on our draft report, the department provided us only an internal analysis of a single simplification approach, which, according to HUD, it is no longer considering. Moreover, HUD has not issued a study of any simplification proposal that analyzes the impact of simplification, explains how HUD intends to implement proposed changes and help tenants transition from the old to the new rent structures, and is available to policymakers. Because simplifying HUD's policies for determining rent subsidies will likely require legislative changes, we continue to believe that a formal study will be essential to informing congressional decision making.
In fiscal year 2003, the Department of Housing and Urban Development (HUD) paid about $28 billion to help some 5 million low-income tenants afford decent rental housing. HUD has three major programs: the Housing Choice Voucher (voucher) and public housing programs, administered by public housing agencies; and project-based Section 8, administered by private property owners. As in every year, some payments were too high or too low, for several reasons. To assess the magnitude of and reasons for these errors, HUD established the Rental Housing Integrity Improvement Project (RHIIP). In response to a congressional request, GAO examined the sources and magnitude of improper rent subsidy payments HUD has identified and the steps HUD is taking to address them, including efforts to simplify the process of determining rent subsidies.

HUD has identified three sources of errors contributing to improper rent subsidy payments: (1) incorrect subsidy determinations by program administrators, (2) unreported tenant income, and (3) incorrect billing. HUD has attempted to estimate the amounts of improper subsidies attributable to each source but has developed reliable estimates for only the first—and likely largest—source. HUD paid an estimated $1.4 billion in gross improper subsidies (consisting of $896 million in overpayments and $519 million in underpayments) in fiscal year 2003 as a result of program administrator errors—a 39 percent decline from HUD's fiscal year 2000 (baseline) estimate. GAO estimates that the amount of net overpayments could have subsidized another 56,000 households with vouchers in 2003.

HUD has made several efforts under RHIIP to address improper rent subsidies for its public housing and voucher programs. Rental Integrity Monitoring (RIM) reviews by HUD's field offices—on-site assessments of public housing agencies' compliance with policies for determining rent subsidies—are a key part of the initiative. However, GAO found that resource constraints and a lack of clear guidance from HUD headquarters hampered the reviews and that the field offices did not collect complete and consistent data, limiting HUD's ability to analyze and make use of the results. HUD has not incorporated RIM reviews into its routine oversight activities. HUD expects that a second effort, a Web-based tenant income verification system, will avoid an estimated $6 billion in improper subsidies over 10 years, but the system is not yet fully implemented.

HUD has undertaken RHIIP efforts for its project-based Section 8 programs but faces several challenges. HUD has improved its policies and guidance for property owners. The agency also plans to give owners access to the Web-based income verification system by the end of 2006. HUD plans to rely more extensively on contractors to monitor property owners' compliance with its policies for determining subsidies.

According to HUD, the complexity of the existing policies contributes to the difficulties program administrators have in determining rent subsidies correctly. For example, program administrators must assess tenants' eligibility for 44 different income exclusions and deductions. However, simplification will likely require statutory changes by Congress and affect the rental payments of many tenants. HUD is considering various approaches to simplifying policies for determining rent subsidies but has not conducted a formal study to inform policymakers on this issue.
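As a quick cross-check of the payment-error figures in this summary, the arithmetic below recomputes the gross figure and, under the assumption that net overpayments equal overpayments minus underpayments, the implied average annual cost of a voucher; the per-household figure is an inference from these numbers, not a figure stated in the report:

```python
# Fiscal year 2003 program administrator error estimates cited above.
overpayments = 896_000_000
underpayments = 519_000_000

gross_improper = overpayments + underpayments
print(f"{gross_improper:,}")  # 1,415,000,000 -- the roughly $1.4 billion gross figure

# Assumption: net overpayments = overpayments - underpayments.
net_overpayments = overpayments - underpayments  # 377,000,000
extra_households = 56_000                        # vouchers GAO says this could have funded
print(f"{net_overpayments / extra_households:,.0f}")  # ~6,732 -- implied annual cost per voucher
```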
The Employee Retirement Income Security Act of 1974 (ERISA) was enacted to better protect participants in private pension plans. Among other things, it established an insurance program, administered by PBGC, to protect the benefits of participants in most private defined benefit pension plans. PBGC was created as a government corporation under title IV of ERISA to encourage the continuation and maintenance of private pension plans, insure the pensions of participants in defined benefit plans, and maintain pension insurance premiums at the lowest level necessary to carry out PBGC's obligations. PBGC is financed through premiums paid annually by employers that sponsor plans, investment returns on PBGC assets, assets acquired from terminated plans, and recoveries from employers responsible for underfunded terminated plans.

Employers that sponsor plans control how much they contribute to their pension plans (subject to ERISA's funding standards). These sponsors estimate plan liabilities on the basis of the characteristics of plan participants and assumptions about the anticipated experience of the plan, such as the expected retirement age and anticipated investment return. Each plan is required to file with the Internal Revenue Service (IRS) an annual report (form 5500) that lists, among other items, the value of the assets in the plan's portfolio and an estimate of the plan's accrued liabilities (the present value of future pension benefits that have been earned to date). Subtracting the estimated liabilities from assets indicates whether the plan is fully funded or has unfunded liabilities under ERISA's funding standards.

PBGC may terminate a plan with unfunded liabilities if the plan has not met ERISA's minimum funding standards; if it will be unable to pay benefits when they are due; if it has made a lump sum distribution of $10,000 or more to a participant who is a substantial owner of the sponsoring firm, leaving the plan with unfunded nonforfeitable benefits; or if the possible long-run loss to PBGC is expected to increase unreasonably if the plan is not terminated. PBGC must terminate a plan when it determines a plan is unable to pay current benefits. Generally, a company in financial distress may voluntarily terminate an underfunded plan only if the employer is being liquidated or if the termination is necessary for the company's survival.

When a plan is terminated with insufficient assets to pay guaranteed benefits, PBGC takes over the plan: it assumes the plan's assets and becomes responsible for paying a guaranteed benefit to participants. To do this, PBGC evaluates the plan's assets and estimates the liabilities it will be responsible for paying. The unfunded liability calculated by PBGC may exceed the unfunded liability reported by the plan because PBGC uses different actuarial assumptions to value plan liabilities. The plan's unfunded liability for guaranteed benefits then represents a claim against PBGC's insurance program.

The single-employer premium has two parts: an annual flat-rate premium of $19 per participant and an additional annual variable rate charge of $9 for each $1,000 of unfunded vested benefits. Before 1994, the variable rate premium was capped at $53 per participant. The Retirement Protection Act (RPA) of 1994 phased out the cap, increasing premiums for many underfunded single-employer plans, and instituted changes both to improve plan funding and to require that more information be provided to plan participants.
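The two-part premium structure just described reduces to a short calculation. The sketch below is a hypothetical worked example of that structure, including the pre-1994 cap on the variable portion for comparison; it is not PBGC's actual premium software, and the plan figures are invented:

```python
def single_employer_premium(participants: int,
                            unfunded_vested_benefits: float,
                            pre_1994_cap: bool = False) -> float:
    """Annual PBGC single-employer premium: a flat $19 per participant plus
    a variable charge of $9 per $1,000 of unfunded vested benefits. Before
    1994 the variable portion was capped at $53 per participant."""
    flat = 19 * participants
    variable = 9 * (unfunded_vested_benefits / 1_000)
    if pre_1994_cap:
        variable = min(variable, 53 * participants)
    return flat + variable

# Hypothetical plan: 1,000 participants and $10 million in unfunded vested benefits.
print(single_employer_premium(1_000, 10_000_000))        # 109000.0 (19,000 flat + 90,000 variable)
print(single_employer_premium(1_000, 10_000_000, True))  # 72000.0 (variable capped at 53,000)
```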
For single-employer plans terminating in 1998, the maximum guaranteed benefit for participants aged 65 is about $34,570 per year. The Multiemployer Pension Plan Amendments Act of 1980 reformed the multiemployer insurance program. Among the reforms under the 1980 act is a provision that a firm withdrawing from a plan may be liable for a proportional share of the plan’s unfunded vested benefits—a withdrawal liability. Further, in the event of the bankruptcy of a participating firm, the remaining firms are required to assume the additional funding responsibility. According to PBGC officials, because the remaining employers have this funding responsibility, PBGC rarely takes over a multiemployer plan. Instead, if a multiemployer plan is unable to pay benefits, PBGC’s multiemployer insurance program provides financial assistance in the form of a loan to the plan to pay participants their guaranteed benefits. PBGC does not necessarily expect such a plan to be able to repay the loan. PBGC guarantees a portion of multiemployer plan pensions—up to $16.25 per month for each year of credited service, to a maximum of about $5,850 per year. The multiemployer premium is a flat $2.60 per participant per year. The multiemployer program’s maximum benefit guarantee has remained unchanged since 1980. An increase in the premium rates for either program would require congressional approval. PBGC receives no funds from federal tax revenues, but it is authorized under ERISA to borrow up to $100 million from the federal treasury. ERISA requires that PBGC annually provide an actuarial evaluation of its expected operations and financial status over the next 5 years. In its evaluation, PBGC presents three 10-year forecasts for its single-employer program to provide a longer-term view of the financial condition of the program under different scenarios. In addition, ERISA requires PBGC to develop, every 5 years, projections of the potential liabilities the multiemployer insurance program could incur to inform policymakers whether changes in the program’s benefit guarantee or premium might be necessary. PBGC’s financial condition has improved greatly over the past few years, and both of its insurance programs currently have a surplus. However, despite this improvement and increased funding levels among the plans PBGC insures, continued underfunding in some large plans remains a concern. Although the number of single-employer plans has declined, the number of participants has increased slightly. The number of multiemployer participants and plans has remained relatively stable despite a decline in the number of active workers in these plans. The single-employer program’s financial condition has improved significantly since 1993, and PBGC reported that the program achieved its first surplus in 1996. As shown in figure 1, the single-employer program moved from a deficit of $2.9 billion in 1993 to a surplus of $3.5 billion in 1997. Unprecedented returns on investments are a key factor contributing to PBGC’s improved financial condition. As of September 30, 1997, PBGC’s combined insurance programs had about $15.6 billion in assets available for investment—$9 billion from premiums and $6.6 billion in assets from terminated plans. Investment income, primarily from stocks and fixed-income investments, increased from $927 million in 1996 to almost $2.8 billion in 1997. PBGC’s annual rate of return on investments was 21.9 percent for fiscal year 1997 and averaged 14.4 percent over the past 5 years. 
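The multiemployer guarantee formula quoted near the start of this passage implies that the annual cap binds at 30 years of credited service ($16.25 × 12 × 30 = $5,850). A minimal sketch, with hypothetical service lengths:

```python
def multiemployer_guarantee(years_of_service):
    """Annual guaranteed benefit: $16.25 per month for each year of
    credited service, capped at about $5,850 per year."""
    return min(16.25 * 12 * years_of_service, 5_850)

for years in (10, 20, 30, 40):
    print(years, multiemployer_guarantee(years))
# 10 -> 1950.0, 20 -> 3900.0, 30 -> 5850.0, 40 -> 5850 (the cap binds at 30)
```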
The financial condition of the single-employer program has also been helped by continued economic growth and the lack of large claims over the past few years. Historically, PBGC’s financial condition has been affected by the financial failure of only a small number of relatively large firms. Claims from terminated underfunded plans and the growth in PBGC’s net liabilities have been concentrated over short periods of time and in specific industries. The largest claims came from 10 firms that terminated 46 plans in the mid-1980s and early 1990s. Claims from these 10 firms accounted for more than half the dollar amount of all PBGC claims from 1975 to 1997. The number of single-employer plans insured by PBGC has declined significantly since the mid-1980s; however, the number of participants has increased slightly. The number of plans fell by more than 50 percent, from about 112,000 in 1986 to about 43,000 in 1997. The decline in single-employer plans resulted mostly from terminations of small plans—those with fewer than 100 participants—and mergers of larger plans. Offsetting the decline in the number of small plans has been growth in the number of plans with 10,000 or more participants. As a result, the number of participants in single-employer plans increased slightly, from about 30 million in 1986 to about 33 million in 1997, despite the decline in the number of plans. The funding level of many single-employer plans has increased, but underfunding, especially among a few large plans, continues. Using PBGC termination assumptions, about 45 percent of all plans were overfunded while 55 percent were underfunded as of the end of 1995. However, 70 percent of covered participants and 80 percent of vested liabilities were in plans that were at least 90-percent funded, according to PBGC assumptions. For underfunded plans, the average funding ratio (the ratio of accumulated assets to vested benefit liabilities) increased from 74 percent in 1986 to 87 percent in 1996. Plans with funding ratios under 50 percent have accounted for 76 percent of PBGC’s claims since 1975, while plans with funding ratios of 75 percent or better have accounted for only 3 percent of PBGC claims. The amount of underfunding increased from about $15 billion in 1986 to about $64 billion in 1996, largely because of a decline of more than 3 percentage points in the discount rates used by PBGC. Some plans that had previously been fully funded became slightly underfunded as a result of the decline in interest rates. The strong financial condition of these plans, however, improved the average funding ratio for all underfunded plans. The amount of overfunding in plans declined from $228 billion in 1986 to about $103 billion in 1996. Similarly, the average funding ratio of overfunded plans declined during this period from 165 percent to 117 percent, primarily because of the fall in interest rates and increases in plan liabilities. The enactment of more restrictive full funding limits in 1987 resulted in lower employer contributions to fully funded plans and contributed to the decline in funding ratios. Underfunding remains a concern because the underfunding of a few large plans or underfunding in several plans in certain industries poses a long-term risk to PBGC solvency. Most of the claims against PBGC’s single-employer program have come from “flat-benefit” plans that cover hourly workers in unionized companies. Unlike most other defined benefit plans, flat-benefit plans do not fully anticipate future benefit increases in their funding calculations. 
Because benefits are often increased at regular intervals as part of contract negotiations, new liabilities are added to the plan before old ones are fully funded, thereby leaving the plans chronically underfunded. Two features in the design of the pension insurance program have made it hard for PBGC to control the exposure it faces from underfunded pension plans. First, ERISA’s minimum funding standards do not ensure that plan sponsors will contribute enough so that if the plans terminate, they will have sufficient assets to cover all the promised benefits. Second, the premiums that PBGC charges pension plans do not fully cover the risks that PBGC assumes. These premiums do not insure plans against a specified and limited shortfall in assets but rather against any underfunding, up to the maximum benefit guarantee per participant, no matter how large. Thus, premiums are only partially exposure-related, which enables a sponsoring company to engage in practices that reduce the level of plan assets, knowing that if the plans terminate before benefits are fully funded, the responsibility for paying guaranteed benefits will fall on PBGC. Despite PBGC’s improved financial condition, its current federal budgetary treatment may not adequately reflect the potential cost of the insurance programs. Previously, we reported that under the cash-based federal budget, PBGC’s annual net cash flows help reduce the annual federal budget deficit. However, PBGC’s growing liabilities (funded and unfunded) from the plans it insures increase the amount of its long-term commitment to pay pension benefits. Liabilities from plans taken over by PBGC and its exposure to future claims from insuring currently healthy firms—that is, the risk assumed by the government in general—are not recognized in the budget. If budget amounts were reported on an accrual basis, the long-term cost of the insurance commitment would be apparent at the time the insurance was extended. The Office of Management and Budget’s (OMB) risk-assumed estimate for future PBGC costs—that is, the portion of a full risk-based premium not charged to PBGC-insured plans—was approximately $30 billion at the end of fiscal year 1997. This estimate contrasts with the $21 billion to $23 billion of “reasonably possible exposure” that PBGC reported in note 9 of its 1997 financial statements. We have recommended that PBGC (and other agencies operating insurance programs) develop and provide cost information in the budget document on a risk-assumed basis, in addition to the cash-based budget information it currently provides. PBGC’s multiemployer program has been in surplus almost since the program was reformed in 1980 (see fig. 2). With assets of $596 million and liabilities of $377 million, the multiemployer program had a surplus of $219 million in fiscal year 1997, up from $124 million in 1996. The surplus had declined in recent fiscal years as the program incurred losses of $79 million in 1994, $5 million in 1995, and $68 million in 1996. The losses resulted primarily from the increase in PBGC’s allowance for uncollectible future loans for two plans. Since 1980, PBGC’s multiemployer program has provided approximately $35 million in loans to 19 plans. In 1997, the program provided about $4 million in loans to 14 plans. Over roughly the next 10 to 20 years, PBGC estimates that about $361 million will be needed to cover future loans to the 14 plans currently receiving assistance as well as loans to other plans expected to require assistance in the future. 
Generally, PBGC does not expect that multiemployer plans receiving financial assistance will be able to repay the loans. In January 1998, however, the Anthracite Fund repaid $3.2 million in loans it received from PBGC during the 1980s. This plan became the first to repay a PBGC financial assistance loan. Overall, funding among multiemployer plans has improved since enactment of the 1980 reforms. In 1980, multiemployer plans as a group reported a funding ratio (ratio of accumulated assets of all plans to the sum of their estimated liabilities) of 77 percent. By 1994, the overall funding ratio had increased to 105 percent, and overfunding among multiemployer plans totaled about $12.6 billion. Similarly, the funding ratio of underfunded plans has also improved since 1980. The recent high rates of return on plan investments have reduced the level of underfunding in some plans despite lower interest rates. The average funding ratio in underfunded plans increased from 58 percent in 1980 to 80 percent in 1994. The amount of underfunding decreased from about $35 billion to $27.4 billion during the same period. The number of multiemployer plans and participants has remained relatively stable since the early 1980s. In 1980, approximately 2,000 plans covered about 8.3 million participants; in 1997, about 2,000 plans covered about 8.8 million participants. The distribution of multiemployer plan participants by industry also remained relatively unchanged. In 1980, the construction, manufacturing, and transportation industries had about 5.9 million participants, or 71 percent of plan participants. In 1994, these industries had about 5.3 million participants, or 65 percent of plan participants. The construction industry alone had 2.8 million participants. There has been, however, a substantial decline in the number of active workers in multiemployer plans because many of these plans are in declining industries that are hiring few new workers. But because many workers are retiring or are vested and moving to other employment, the number of covered participants has remained relatively stable. Multiemployer plan contributions are based primarily on two factors: (1) administrative expenses and “normal costs” (costs to fund retirement benefits that active workers accrue each year) and (2) costs of plan modifications or deviation of plan experience from expectations. Payments or credits for these latter costs are amortized over a period of between 15 and 30 years. However, as active workers retire, contributions for normal costs fall and payments for retirees’ benefits increase. If such retirements occurred unexpectedly or in large numbers, the plan’s financial condition could deteriorate. To maintain an adequate contribution base (the ratio of active workers to other participants), plans depend primarily on new employers joining the plan or on existing employers remaining in it and hiring new workers. The rate of growth in active workers provides a measure of the ability of the plan to fund its liabilities. Further, this growth tends to be correlated with the health of the industry covered by the plan. Despite the improvement in multiemployer plan funding since 1980, some large plans remain underfunded and could pose a risk to the multiemployer program. In 1986, we reported that the multiemployer program was jeopardized by an eroding contribution base. 
The number of active workers in multiemployer plans declined from about 6.4 million in 1980 (almost 76 percent of all participants) to about 4.4 million in 1994 (just 54 percent of participants). A continued erosion in contribution bases could eventually cause some plans to be unable to generate sufficient income under current funding rules to pay benefits, thereby increasing the number of plans requiring loans from the multiemployer insurance program. However, in its 1996 report on the financial condition of the multiemployer program, PBGC reported that it expected the multiemployer insurance program to remain financially strong, even with the decline in the contribution base. Many of the multiemployer plans with sizable underfunding are in industries such as manufacturing and transportation, which may continue to experience further decline in the number of active workers. On the basis of 1993-94 form 5500 data, PBGC identified 50 multiemployer plans (about 3 percent of all insured multiemployer plans) with underfunding of about $21 billion. Underfunding is worsened by benefit increases obtained through collective bargaining. Given the declining contribution bases and continuing benefit increases, it could be difficult for the underfunded plans to substantially improve their funding levels. Pending legislation in the 105th Congress (S. 1501) would, among other things, increase funding and reporting requirements for multiemployer plans and prohibit benefit increases if a plan was less than 95-percent funded. Recognizing that less than 1 percent of participants in multiemployer plans projected to become insolvent have their benefits fully guaranteed, the legislation also would increase the annual maximum guaranteed benefit. It is difficult to isolate the effects of RPA, the 1994 pension legislation, on PBGC’s financial condition and plan funding levels from other important factors, such as the growth in the stock market or economic expansion. In addition to enhancing PBGC’s regulatory authority and increasing participant protection through broadened reporting requirements, RPA strengthened funding requirements for single-employer plans. For plans that are less than 90-percent funded, RPA increased funding in three ways: accelerating the funding formula for certain benefit increases, constraining the assumptions used for calculating minimum contributions, and adding a new solvency rule to ensure that plans can pay current benefits. A comprehensive analysis of the effects of RPA requires more recent plan data than are currently available because of the time lag in filing plan annual reports. Plans are not required to file form 5500 reports until 210 days after the close of the plan year, and IRS processing time requirements further delay data availability. Even when the necessary data become available, it will be difficult to determine the extent to which RPA alone contributed to the improved financial condition of PBGC and insured plans. However, an increase in PBGC’s premium income suggests that the legislation probably had a positive impact on PBGC’s financial condition. As figure 3 shows, premium income from single employers rose from $890 million in 1993 to $1.1 billion in 1996 and fell slightly in 1997. PBGC expects that premium income may further decline as the statutory interest rate under RPA, the interest rate used to calculate the underfunding on which premiums are based, increases. 
Also, around the year 2000, the measure of plan assets may change from an actuarial value to a generally higher fair market value. The expected increase in the ratio of plan assets to liabilities may reduce both the reported amount of plan underfunding and the variable premiums based on this underfunding. RPA also resulted in increased plan contributions. A PBGC official told us that some sponsors with large underfunded plans made more than the minimum required contributions to lower the amount of premiums they would have to pay. Also, some sponsors increased their plans’ funding ratios instead of having to report to plan participants that the plans were underfunded. Although PBGC’s financial condition has significantly improved over the past few years, risks remain from the possibility of an overall economic downturn or a decline in certain sectors of the economy, substantial drops in interest rates, and actions by sponsors that reduce plan assets. These risks could threaten the long-term viability of the insurance programs. Further, PBGC has only a limited ability to protect itself from risks to the insurance programs. An economic downturn could adversely affect PBGC’s financial condition. If such a downturn were to occur either nationwide or in those industries with mature underfunded plans (plans in which many workers are less than 15 years from retirement) and several large underfunded plans terminated, PBGC could be obligated to take on additional benefit obligations, which could drastically reduce its net financial position. For example, bankruptcies in the airline and steel industries during the past 15 years resulted in large claims against PBGC. Terminations of 10 underfunded pension plans by Eastern Air Lines and Pan American Airways resulted in about $1.3 billion in PBGC claims. Similarly, terminations of underfunded plans in the steel industry, including plans from Wheeling Pitt Steel, Sharon Steel, and LTV Republic Steel, resulted in almost $1.4 billion in claims. Terminations from these two industries alone account for almost half of PBGC’s total claims. PBGC estimates that its reasonably possible future loss exposure is primarily from single-employer plans in the steel, airline, industrial and commercial equipment, and transportation equipment industries. An overall economic downturn could have three effects on PBGC’s financial condition. First, more financially troubled companies might terminate their underfunded plans, resulting in increased claims against PBGC. Second, as plan terminations rose, PBGC’s premium base could erode, lowering premium income. Finally, a recession or a substantial decline in the stock market could adversely affect the value of and income from PBGC’s assets. (This could also occur for individual pension plans.) The value of PBGC’s assets and income from them could decline at the same time that claims from the increased number of plans taken over by PBGC raised benefit payments. The combination of lower premium income and greater benefit payments could limit PBGC’s ability to set aside investment assets to help meet its new obligations to pay future benefits and could require PBGC to liquidate some assets to pay expenses. If PBGC continued to draw down its asset base, it could eventually run out of assets. At that point, congressional action would be required if benefit payments were to continue. Interest rates play a major role in calculating the liabilities of pension plans and of PBGC. 
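The interest-rate sensitivity noted in the last sentence above can be illustrated with a small present-value sketch: discounting the same hypothetical stream of benefit payments at progressively lower rates shows how a plan holding fixed assets can slip from fully funded to substantially underfunded. All figures below are invented for illustration.

```python
def present_value(annual_payment, years, rate):
    """Present value of a level stream of annual benefit payments."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical plan: owes $10 million a year for 25 years, holds $110 million.
assets = 110e6
for rate in (0.08, 0.07, 0.06, 0.05):
    liability = present_value(10e6, 25, rate)
    print(f"discount rate {rate:.0%}: liability ${liability / 1e6:.0f}M, "
          f"funding ratio {assets / liability:.0%}")
# At 8% the plan is ~103% funded; a 3-percentage-point drop to 5% raises
# the calculated liability to ~$141M, leaving the plan only ~78% funded.
```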
If the interest rates used in the calculations of liabilities were reduced, the value of plan liabilities would rise. If these rates were increased, liabilities would decrease. A lower interest rate would reduce the future returns on a given level of assets and require that the amount of assets be increased to ensure that all benefit liabilities could be paid. Lower interest rates increase (1) the calculated liabilities from plans administered by PBGC, (2) the number of ongoing underfunded plans, and (3) PBGC’s potential liabilities from ongoing underfunded plans. Over the past few years, lower interest rates have increased PBGC’s liabilities, but this increase has been offset by PBGC’s higher premium and investment income. Plan sponsors can shift unfunded liabilities onto PBGC in several ways. When negotiating with employees over compensation, sponsors having financial difficulty can increase pension benefits or relax early retirement penalties in lieu of increasing wages. Sponsors can then spread the payment for these actions over a period of up to 30 years. If the plan terminated after one or a series of benefit increases, PBGC could end up paying part or all of the unamortized liability. Other methods a plan sponsor can use to shift its pension liabilities onto PBGC are to (1) forgo making its required contribution to the pension plan either legally through IRS waivers or illegally, (2) sell a subsidiary with an underfunded plan to a financially troubled buyer, or (3) use the plan’s assets to pay business expenses. In each instance, PBGC would continue to insure the pensions of plan participants. PBGC would also insure these pensions if the sponsor failed to pay its premiums for PBGC coverage. PBGC’s inability to restrict claims, coupled with a premium structure that is only partially exposure-related, makes it subject to “moral hazard.” Moral hazard surfaces when the insured parties—in this case, plan sponsors—engage in risky behavior knowing that the guarantor will assume a substantial portion of the risk. Although legislative reforms have increased PBGC’s ability to monitor and take action against underfunded plans, and uncapped the risk-related component of its premium, plan sponsors experiencing financial difficulties are still able to shift some of their plans’ liabilities onto PBGC. PBGC has only limited ability to protect itself from exposure from underfunded pension liabilities. PBGC does not have the regulatory authority available to other federal insurance programs, such as the Federal Deposit Insurance Corporation (FDIC), to help protect itself from risks. Instead, PBGC uses moral suasion and negotiation to encourage improved funding. In fiscal year 1991, PBGC created a Corporate Finance and Negotiations Department to identify and work with sponsors whose plans posed a risk to the agency. Through this department, PBGC targets companies that represent the biggest risks to its insurance programs and negotiates additional plan protections when it identifies problems. For example, PBGC published its Top 50 list of companies with the largest amount of pension plan underfunding, hoping that public identification of large underfunded plans and discussions with troubled sponsors would persuade them to take corrective action to better fund their pension plans. If negotiating with the companies that pose the greatest risk fails to improve their funding, PBGC can terminate these plans. 
In such cases, PBGC assumes responsibility for the plans’ liabilities either through agreement with the plans’ sponsors or through a court order. Even when PBGC can terminate a plan, it tries to avoid doing so because such action is onerous to all involved. For example, in terminating a plan, PBGC would incur a claim that it would have to pay; participants still working under the plan would stop accruing benefits, resulting in lower future benefits; and retirees whose benefits exceeded the maximum guarantee level, whose benefits were recently increased, or who were receiving supplemental benefits might have their benefits reduced. Further, the plan sponsor might spend time and money to try to protect its own assets from court claims filed by PBGC on behalf of the plan for missed contributions and on behalf of itself for the recovery of the unfunded benefit liability. In addition, the sponsor, if not already bankrupt, could become bankrupt. PBGC’s limited ability to protect itself from exposure makes accurately forecasting its financial condition especially important, because it gives PBGC and the Congress time to enact policy and legislative changes to improve the long-term viability of the insurance programs. However, PBGC’s current methodology for forecasting the financial status of its single-employer program is relatively unsophisticated and does not capture the high degree of uncertainty surrounding potential future claims. PBGC is already using an improved methodology for forecasting the financial condition of the multiemployer program. Currently, PBGC relies on extrapolations of its past claims experience and past economic conditions to develop 10-year forecasts of the single-employer program’s financial condition. The actuarial assumptions PBGC uses for these forecasts are consistent with assumptions used to prepare PBGC’s financial statements. Recognizing the weaknesses of its current single-employer forecasting methodology, PBGC is developing a new approach to forecast its exposure to future claims under a wide range of possible future economic conditions. The model, called the Pension Insurance Modeling System (PIMS), is designed to simulate pension funding and bankruptcy rates over a 30-year period. The model generates estimates of average expected claims and probability measures of the uncertainty surrounding the estimates under various economic and policy scenarios. PBGC, working with outside reviewers, has extensively tested PIMS over the past few years and intends to use PIMS as its forecasting tool beginning in fiscal year 1999. For its fiscal year 1998 annual report, PBGC plans to generate forecasts of its financial condition using both PIMS and its current methodology. PBGC will also continue to use PIMS for internal research. PBGC uses a different model for forecasting the financial condition of the multiemployer program. The Multiemployer Insolvency Projection (MIP) uses plan-specific historical data to determine whether a plan would become insolvent under a set of economic assumptions over a 15-year period. For those plans projected to become insolvent, MIP calculates the present value of the future financial assistance that would be required from PBGC. MIP is an improvement over PBGC’s earlier approach to estimating future multiemployer program liabilities. Previously, PBGC used a methodology developed for a review of the program after passage of the Multiemployer Pension Plan Amendments Act. 
This method relied primarily on collecting data on all multiemployer plans from 1980 to 1986, identifying plans with deteriorated financial condition that could lead to insolvency, and estimating the required PBGC financial assistance. MIP allows PBGC to examine the potential effects on the multiemployer program assuming that each plan’s recent history continues and to test the program’s ability to withstand a variety of economic and demographic changes. MIP is less sophisticated than PIMS and does not attempt to assign probabilities to plan insolvency. (See app. I for more detailed information on PBGC’s efforts to forecast its future financial condition.) PBGC has made improvements in administering its insurance programs. It is continuing to address systems and control weaknesses in its operations. It is also increasing its oversight activities and working with plan sponsors to reduce the administrative burdens on plans. Despite these improvements, opportunities remain for PBGC to enhance customer service while strengthening program integrity. Two areas of concern are the continuing backlog of benefit determinations and inadequate oversight of contractors. PBGC’s recent progress has occurred primarily in the areas of financial systems and internal control, plan monitoring, and cooperation with plan sponsors. For many years, and as recently as 1992, we reported that PBGC had not developed and put into place the necessary documentation and support for the techniques and assumptions used to estimate its future liabilities from terminated plans and from plans expected to terminate. As a result of the lack of documentation and support, PBGC could not substantiate the reasonableness of its actuarial assumptions and estimation techniques, and we were unable to evaluate the reliability of PBGC’s estimated liability. Further, PBGC had significant system and control weaknesses in its premium and accounting operations. For example, between 1988 and 1992, PBGC was unable to fully perform basic premium processing, collecting, accounting, and enforcement functions because its premium processing system was not modified in time to accommodate the variable-rate premium structure that became effective in 1988. PBGC also lacked an integrated financial system for processing financial data and preparing financial statements and instead relied on time-consuming and labor-intensive processes to support operations and financial/budgetary reporting. PBGC has made significant progress in addressing the systems and internal control weaknesses in its operations. By 1993, PBGC had substantially improved its valuation systems and internal controls for estimating its liability for future benefits, allowing us, for the first time, to express an opinion on its 1993 financial statements. PBGC has also taken steps to improve its premium processing system. In 1992, PBGC began limited manual processing to generate bills and subsequently collected almost $60 million owed for certain past-due premiums, interest, and penalties. PBGC instituted a new premium processing system in fiscal year 1996 and implemented a new automated reporting system in 1995 to generate quarterly financial information. PBGC has also improved its monitoring of underfunded, single-employer pension plans. Its Early Warning Program targets plans that pose the greatest risk to the agency because of underfunding. PBGC monitors over 500 companies, each with pension plan underfunding of at least $5 million. 
These companies represent 1 percent of all companies sponsoring insured plans but more than 80 percent of all plan underfunding. PBGC attempts to negotiate additional pension contributions and protections when it identifies transactions that could jeopardize plans. PBGC reported that in the last 6 years it negotiated more than 50 settlements that provided about $15 billion in new pension contributions and protections for about 1.6 million participants. Further, by closely monitoring significantly underfunded plans, PBGC is better able to estimate the amount of potential claims that plans represent and to act quickly to avoid additional losses before plans terminate. PBGC is expanding its cooperation with plan sponsors by improving customer service, providing regulatory relief, and engaging in negotiated rulemaking. PBGC continues to audit a sample of fully funded, terminated plans to determine whether participants received all of their guaranteed benefits under the plan. In 1997, these audits resulted in almost $4 million in additional benefits to about 4,900 participants. PBGC also has a pension search program to locate vested participants in plans it administers. In 1996, PBGC expanded the program to include a missing participant clearinghouse to help employers that are terminating fully funded plans locate all people who are owed benefits. In addition, PBGC is revising its premium compliance program and increasing the number of premium audits (to ensure firms are paying the right premium amount) while reducing the administrative burden on plans. Finally, in 1997, PBGC issued revised regulations developed in cooperation with the plan sponsor community for streamlining procedures for terminating fully funded plans. PBGC also worked with participant groups while revising its regulations for recovering PBGC benefit overpayments. Throughout its history, PBGC has focused primarily on paying benefits to participants of the plans it administers in a timely manner. Despite recent progress in more quickly finalizing takeovers of underfunded, terminated plans and reducing the backlog of participant benefit determinations, a large backlog of final determinations remains. Further, the backlog could quickly grow if a large number of terminations occurred, as PBGC experienced during the 1980s and early 1990s. In fiscal year 1997, PBGC issued 69,000 benefit determinations, but only within the last 5 years has it completed determinations for participants in certain plans that terminated during the 1970s and 1980s. PBGC is now issuing participant benefit statements for plans terminated in the early 1990s. However, an average of 8 years passes from the time PBGC takes over a plan until it issues final benefit determinations to participants. During this period, estimated benefit payments are made to participants. For a number of years, some participants are underpaid, while others are overpaid and are subsequently required to repay the overpayments. PBGC is streamlining the steps it takes when assuming responsibility for terminated plans and is implementing a new participant information system to facilitate more timely processing of determinations. PBGC has initiated these improvements in customer service, in part, because it projects that it will continue to assume responsibility for about 150 new plans, with a combined 50,000 participants, each year. Another area of concern is the adequacy of PBGC’s oversight of contractors’ performance and reimbursements. 
PBGC has about 750 employees, but it relies heavily on services from contractors for actuarial, investment management, and legal support, as well as for administration of terminated plans. Of PBGC’s total budget of about $150 million, an estimated $80 million to $100 million is for contracting costs. Recognizing that PBGC uses many contractors in virtually all aspects of its operations, PBGC’s Inspector General has designated contractor procurement and performance as a critical audit area. The Inspector General carries out ongoing audits of PBGC contractors and has identified problems in contractor performance and questionable reimbursements. Previously, the OIG reported finding such problems as contractor accounting records that were inadequate to support billings, contractor noncompliance with contract provisions, and excess cost reimbursements. As the OIG reported, it is important that PBGC follow its procurement controls to ensure that contractor performance and reimbursement are properly monitored. PBGC has taken steps to improve its oversight of contractors. In fiscal year 1994, PBGC established a contract audit group, after having had no contract audit function for most of its history. PBGC reports that this group has completed audits of 79 contracts valued at approximately $315 million, resulting in savings of about $9.8 million. PBGC has also consulted with the OIG on performance and cost reviews of some field benefit administrators. At PBGC’s request, the OIG reviewed PBGC draft reports on field benefit administrators and found that the reports, especially concerning contractor performance, were a useful management tool. While PBGC’s financial condition has significantly improved, risks to the long-term financial viability of the insurance programs remain. Continued underfunding among some large plans poses a risk to the agency. PBGC also remains vulnerable to other risks, such as downturns in the economy, problems in certain economic sectors, and declines in interest rates. An economic downturn and the termination of a few plans with large unfunded liabilities could quickly reduce or eliminate PBGC’s surplus. Therefore, a continued focus on maintaining a strong financial condition is important in anticipating and addressing these risks. In addition, PBGC’s current methodology for forecasting the future financial condition of the single-employer program does not take into account the range of economic conditions that can result in plan terminations, nor does it measure the probability that such future terminations will result in claims. Given the historic volatility of PBGC claims, it is important that PBGC continue efforts to improve its methodologies for forecasting its future financial condition. The ability to anticipate large claims and their impact on PBGC is an important step toward ensuring PBGC’s long-term financial solvency. PBGC has made significant progress in addressing the financial systems and internal control weaknesses that had plagued the agency for many years. However, continuing to reduce the backlog of benefit determinations, while improving their timeliness, and improving oversight of contractors must be ongoing agency priorities if PBGC is to improve customer service and maintain the integrity of the insurance programs. 
The voluntary nature of the private pension system means that efforts to strengthen the insurance system should be properly balanced to encourage the creation and continuation of defined benefit pension plans—one of PBGC’s legislative mandates. However, PBGC and the Congress should be ready to respond to economic or other changes that could jeopardize PBGC’s long-term financial condition. Properly anticipating and responding to such changes in a timely manner could avoid the need for large premium increases or for general revenues from the federal government, while at the same time protecting the pensions of millions of workers. We obtained PBGC’s comments on a draft of this report. PBGC agreed with our findings that the agency continues to face significant risks, many of which are beyond the agency’s control, and that it must remain diligent in managing these risks. (See app. II for the full text of PBGC’s comments.) PBGC also provided technical comments, which we have incorporated as appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies of this report to relevant congressional committees; the Executive Director, PBGC; the Secretary of Labor; and other interested parties. Copies will also be made available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Major contributors to this report include Francis P. Mulvey, Assistant Director; Michael D. Packard, Evaluator-in-Charge; and George A. Scott, Senior Evaluator. The Pension Benefit Guaranty Corporation (PBGC) is required to annually provide an actuarial valuation of the single-employer program’s expected operations and financial status over the next 5 years. PBGC has historically exceeded this requirement, providing three 10-year forecasts. In addition, PBGC is required to examine its multiemployer insurance program every 5 years to determine whether changes in the benefit guarantee level or premium are necessary. PBGC’s current unsophisticated forecasting methodology for its single-employer program is based on the agency’s claims experience and the economic conditions of the past 2 decades. Forecast A is a projection based on the average annual net claims over PBGC’s entire history and assumes the lowest level of future losses. For 1997, forecast A projects continuation of PBGC’s financial improvement, resulting in a surplus, in 1997 dollars, of $8 billion in 2007. Forecast B assumes a moderate level of future losses and is based on the average annual net claims of the most recent 11 fiscal years. Forecast B projects net income levels that will lead to a surplus of $6.9 billion at the end of 2007. Finally, Forecast C projects $2.1 billion in net claims each year, over the next 10 years, from a modest number of plans with small claims and the largest underfunded plans. This approach results in a projected $17.1 billion deficit at the end of the 10-year period. The assumptions used in making these projections are consistent with the assumptions used to determine the present value of future benefits in PBGC’s fiscal year 1997 financial statements. Assumed administrative expenses are consistent with PBGC’s submission to the President’s 1999 budget. 
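Mechanically, the three forecasts just described extrapolate a constant level of annual net claims. The sketch below shows that style of projection in deliberately simplified form: it nets constant premiums against constant claims and omits investment income, expenses, and liability revaluation, so it will not reproduce PBGC's published figures. The inputs are hypothetical, loosely patterned on numbers quoted in this report.

```python
def project_net_position(start, annual_premiums, annual_net_claims, years=10):
    """Roll a program's net position forward under constant annual
    premiums and net claims (a bare-bones, forecast-style extrapolation;
    PBGC's actual forecasts model much more)."""
    path = []
    position = start
    for _ in range(years):
        position += annual_premiums - annual_net_claims
        path.append(position)
    return path

# Hypothetical inputs: $3.5 billion starting surplus, $1.1 billion annual
# premiums, and $2.1 billion assumed annual net claims (cf. forecast C).
path = project_net_position(3.5e9, 1.1e9, 2.1e9)
print(f"position after 10 years: ${path[-1] / 1e9:.1f} billion")  # -6.5
```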
PBGC is developing a model, the Pension Insurance Modeling System (PIMS), to forecast its future exposure to claims under a range of future economic conditions by simulating pension funding and bankruptcy rates over a 30-year period. PBGC plans to replace its current single-employer forecasting methodology with PIMS. PIMS simulates a series of dynamic relationships that characterize the growth of firm assets and liabilities, the number of plan participants, the assets and liabilities of the pension plan, and the normal cost associated with the plan. The pension plan and the sponsoring firm are treated as separate but related entities. The future financial condition of the firm and plan are interdependent and also dependent on current financial conditions, legal and regulatory restrictions, and the uncertainty of future economic conditions. Stochastic variables are used to model this uncertainty. The model simulates these dynamic relationships over a specified period of time. In order to forecast future expected claims, the model is run many times to produce a distribution of possible outcomes. This distribution provides an estimate of the average expected future claims and a measure of the probability that actual claims will be within a certain range around the estimate. PIMS uses numerous attributes of individual pension plans and sponsoring firms. The model is run using a stratified sample of firms. The PIMS database currently has data on 417 plans representing approximately 50 percent of PBGC’s liability and 50 percent of plan underfunding. Model results can be extrapolated to account for the entire population of plan sponsors. For each plan in PIMS, IRS funding requirements are modeled. The probability of firm bankruptcy is also modeled and depends on several factors, including firm size, industry, and firm characteristics. The initial assumptions used in the model are those of the plans’ actuaries as reported on the form 5500. In cases in which the model’s initial estimated liability for a plan differs from that on the form 5500, PBGC adjusts some of the model’s assumptions, data, or both so that the two liability estimates are consistent. Subsequent changes in year-to-year assumptions are determined by a subset of equations in the PIMS model. PBGC used its Multiemployer Insolvency Projection (MIP) model in its most recent 5-year examination of its multiemployer insurance program. The model includes plans with the largest unfunded liabilities (which account for approximately 80 percent of total multiemployer plan underfunding), the largest plans in terms of total liability, and all plans identified in PBGC’s 1994 financial statements as “reasonably possible” future insolvencies. For each plan, MIP projects such factors as the number of participants, contributions and other income, benefit payments, actuarial liabilities, assets, and funding requirements. The projections are made for 15 years on the basis of 1992 data and use 1 or more of 12 sets of assumptions, such as expected retirement age (the age at which active workers are assumed to retire), annual benefit rate increase, rate of return on assets and whether there is a decrease in assets, and influx of new workers into the plan. The model’s base scenario assumes a continuation of the plan’s recent experience and includes the plan actuary’s assumptions. Other scenarios change 1 or more of the model’s 12 sets of assumptions to determine the impact of more conservative or pessimistic conditions. 
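The core idea behind PIMS, running the same stochastic model many times and reading a distribution of outcomes off the trials rather than producing a single point forecast, can be sketched in a few lines. Everything below, from the bankruptcy probability to the claim-size distribution, is a hypothetical stand-in, not PBGC's model.

```python
import random

def simulate_claims(n_trials=1_000, n_plans=400, years=30,
                    p_bankruptcy=0.004, seed=1):
    """Toy PIMS-style simulation: in each trial, each plan may terminate
    at most once over the horizon; terminations generate claims drawn
    from a (hypothetical) lognormal underfunding distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for _ in range(n_plans):
            for _ in range(years):
                if rng.random() < p_bankruptcy:
                    total += rng.lognormvariate(17, 1.5)  # claim, dollars
                    break  # a plan can terminate only once
        totals.append(total)
    totals.sort()
    mean = sum(totals) / n_trials
    return mean, totals[int(0.05 * n_trials)], totals[int(0.95 * n_trials)]

mean, low, high = simulate_claims()
print(f"expected 30-year claims ${mean / 1e9:.1f}B; "
      f"90% of trials fall between ${low / 1e9:.1f}B and ${high / 1e9:.1f}B")
```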
Budget Issues: Budgeting for Federal Insurance Programs (GAO/AIMD-97-16, Sept. 30, 1997). Financial Audit: Pension Benefit Guaranty Corporation’s 1994 and 1993 Financial Statements (GAO/AIMD-95-83, Mar. 8, 1995). High-Risk Series: An Overview (GAO/HR-95-1, Feb. 1995). Private Pensions: Funding Rule Change Needed to Reduce PBGC’s Multibillion Dollar Exposure (GAO/HEHS-95-5, Oct. 5, 1994). Underfunded Pension Plans: Stronger Funding Rules Needed to Reduce Federal Government’s Growing Exposure (GAO/T-HEHS-94-191, June 15, 1994). Financial Audit: Pension Benefit Guaranty Corporation’s 1993 and 1992 Financial Statements (GAO/AIMD-94-109, May 4, 1994). Underfunded Pension Plans: Federal Government’s Growing Exposure Indicates Need for Stronger Funding Rules (GAO/T-HEHS-94-149, Apr. 19, 1994). Financial Audit: Pension Benefit Guaranty Corporation’s 1992 and 1991 Financial Statements (GAO/AIMD-93-21, Sept. 29, 1993). Pension Plans: Underfunded Plans Threaten PBGC (GAO/T-HRD-93-2, Feb. 4, 1993). High-Risk Series: Pension Benefit Guaranty Corporation (GAO/HR-93-5, Dec. 1992). Pension Plans: Hidden Liabilities Increase Claims Against Government Insurance Program (GAO/HRD-93-7, Dec. 30, 1992). Pension Plans: Pension Benefit Guaranty Corporation Needs to Improve Premium Collections (GAO/HRD-92-103, June 30, 1992). Financial Audit: Pension Benefit Guaranty Corporation’s 1991 and 1990 Financial Statements (GAO/AFMD-92-35, Mar. 2, 1992). Pension Plans: 1980 Multiemployer Pension Amendments: Overview of Effects and Issues (GAO/HRD-86-4, Feb. 13, 1986).
Pursuant to a congressional request, GAO reviewed the long-term financial viability of the pension insurance programs, focusing on: (1) the financial condition of the insurance programs and trends in the plans they insure; (2) the impact the Retirement Protection Act of 1994 has had on the financial condition of the Pension Benefit Guaranty Corporation (PBGC) and insured plans; (3) risks to PBGC's solvency; (4) PBGC's efforts to forecast its future financial condition; and (5) PBGC's efforts to improve administration of the programs. GAO noted that: (1) PBGC's financial condition has improved significantly over the past few years; (2) the agency has had a surplus for the past two fiscal years, after having a deficit for over 20 years; (3) the single-employer program improved from a deficit of $2.9 billion in 1993 to a surplus of nearly $3.5 billion in 1997; (4) the multiemployer program has maintained a surplus since the early 1980s; (5) like that of PBGC, the financial condition of most insured, underfunded plans has also improved, but underfunding among some large plans continues to pose a risk to the agency; (6) the improved financial condition of both PBGC and the plans it insures has resulted from better funding of underfunded plans and economic improvements; (7) over the past decade, the number of insured single-employer plans has fallen by more than one-half, to about 43,000, because of the termination of many small plans; (8) the number of participants, about 33 million, has increased slightly because of an increase in the number of large plans; (9) the number of multiemployer plans and participants has remained relatively stable since the early 1980s; (10) the declining number of active workers participating in multiemployer plans could increase the level of unfunded liabilities and place increased financial burdens on the multiemployer program; (11) PBGC experienced an increase in premium revenue immediately following passage of the legislation that contributed to its improved financial condition; (12) despite improvements in PBGC's financial condition, risks to the agency's long-term financial viability remain; (13) factors beyond PBGC's control could increase plan underfunding and PBGC's liabilities by reducing the future returns on assets; (14) PBGC is developing a new single-employer program forecasting model designed to estimate the probability of bankruptcies and terminations of underfunded plans under various economic conditions; (15) in addition, PBGC has already improved its methodology for forecasting the financial status of the multiemployer program; (16) PBGC has also improved its techniques for estimating its liability for plans that are likely to require future financial assistance and is now more closely monitoring the companies with underfunded plans that represent its biggest risks; and (17) while PBGC has made progress, it is important that it continue its efforts to reduce the time it takes to assume control of terminated plans, improve the timeliness of final determinations of participants' benefits, and monitor the performance of contractors that assist PBGC in administering the insurance programs.
In managing the funds that flow through the federal government’s account, Treasury frequently accumulates cash due to timing differences in when borrowing occurs, taxes are received, and agency payments are made. Treasury often receives large cash inflows in the middle of the month and makes large, regular payments in the beginning of the month. In general, Treasury seeks to maintain low cash balances and repay debt whenever possible, as the interest earned on short-term investments is generally insufficient to cover additional borrowing costs. As fiscal agents and depositaries for the federal government, the Federal Reserve Banks provide services related to the federal debt, help Treasury collect funds owed to the federal government, process electronic and check payments for Treasury, invest excess Treasury balances, and maintain Treasury’s bank account, the Treasury General Account (TGA), through which most federal receipts and disbursements flow. TGA funds are available for immediate disbursement and are one of Treasury’s most liquid investments. Over the past several decades, technological advances and global expansion have led to significant changes in financial markets. Lending institutions have developed greater capacity to increase returns and manage risks, and increased regulatory freedom has helped to spur new markets. Greater computer power and better telecommunications networks have reduced barriers that once limited investment opportunities. In particular, significant growth has occurred in the segment of the money market that includes the use of repurchase agreements, or repos. A repo is the transfer of cash for a specified amount of time, typically overnight, in exchange for collateral. When the term of the repo is over, the transaction unwinds, and the collateral and cash are returned to their original owners, with a premium (in effect, interest at the repo rate) paid on the cash. The repo market has become one of the largest segments of the U.S. money market and is used by government and private institutional investors to invest short-term excess cash. In the first quarter of 2007, the average daily volume of outstanding total repos was $3.6 trillion, according to information provided to the Federal Reserve by primary dealers that engage in repo transactions. Over $114.3 trillion in repo trades involving U.S. Government Securities were reported in the first quarter of 2007, with an average daily volume of approximately $1.8 trillion. Repos were used by the Federal Reserve as early as 1917 and play an important role in the conduct of monetary policy operations since the Federal Reserve uses repos to dampen transient fluctuations in the supply of reserves available to the banking system. For the past 20 years, large corporations have been shifting cash assets out of bank accounts into instruments such as repos, which have enabled them to increase the returns on their short-term cash assets with minimum risk to their funds. Electronic systems have increased the speed of repo transactions and expanded the range of investors that can participate. Innovative arrangements for accepting collateral in the repo market, specifically triparty arrangements, have reduced transaction costs, credit risks, and operational risks. In a triparty repo, an independent custodian bank acts as an intermediary between the two parties in the transaction and is responsible for clearing and settlement operations. The triparty structure typically reduces costs, minimizes operational and credit risks, and has the potential to increase returns. 
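A minimal sketch of the cash leg of the overnight repo described above, using the standard actual/360 money-market day count; the amount and rate below are hypothetical.

```python
def repo_interest(cash, rate, days, day_count=360):
    """Interest paid on the cash leg of a repo (actual/360 convention)."""
    return cash * rate * days / day_count

# Hypothetical overnight repo: $2 billion placed at a 5.25% repo rate.
interest = repo_interest(2e9, 0.0525, days=1)
print(f"overnight interest: ${interest:,.0f}")  # $291,667
# At maturity the trade unwinds: the collateral goes back to the borrower,
# and the cash comes back with this interest.
```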
The Federal Reserve has been using triparty arrangements for its repos since 1999. Treasury’s operating cash balance fluctuates according to a predictable pattern, although the swings in daily cash balances have grown larger in recent years. Before Treasury invests any portion of its operating cash balance, Treasury generally targets a $5 billion balance in the TGA. Treasury seeks to maintain a balance in the TGA large enough to protect against overdraft and attempts to keep the balance stable to avoid interfering with the Federal Reserve’s implementation of monetary policy. Balances held in the TGA earn an implicit rate of return. Patterns in receipts and disbursements cause frequent but predictable swings in federal cash balances, which regularly provide Treasury with cash available for short-term investment. Treasury’s daily operating cash balance, the amount of cash remaining after receipts and disbursements are accounted for, averaged $26.4 billion in fiscal year 2006. The receipts Treasury uses to finance federal expenditures come primarily from two sources: (1) tax revenues from sources such as personal and corporate income taxes, payroll withholdings, or other fees the federal government imposes; and (2) cash borrowed from the public through Treasury’s regular auctions of debt securities. Treasury’s daily operating cash balance is generally lower at the beginning of each month due to mandatory expenditures and then rises in the middle of each month upon the arrival of Treasury’s scheduled receipts. (See fig. 1.) Treasury’s cash balances also fluctuate depending on the time of year, with mid-month increases that are particularly large in January, March, April, June, September, and December. Treasury receives major corporate or nonwithheld individual estimated tax payments, or both, in these months, which significantly increases Treasury’s daily operating cash balance. Increases are highest in April, when Treasury receives and processes the prior year’s individual income tax liability settlements and the first estimated payments of the current tax year from individuals and calendar year corporations. Large payments for programs such as Medicare, Social Security, federal retirement, and veterans’ compensation frequently occur during the first 3 days of each month, significantly lowering Treasury’s daily operating cash balance at the beginning of each month. One quarter of fiscal year 2006 outlays were paid in the first 3 days of the month. Like the tax deposit schedule, the majority of the payment dates for these large benefit programs are statutory, which limits Treasury’s flexibility in cash management. In fiscal year 2006, Treasury’s average daily operating cash balance was $26.4 billion, an $8.5 billion increase from fiscal year 2003. (See table 1.) Swings in daily cash balances have also grown over time. The number of days with high cash balances—and hence significant amounts of short-term cash for investment—has more than quadrupled since 2003. (See fig. 2.) Cash balances tend to be highest at the end of the month before large mandatory payments are made. Over the past 3 years, cash balances have generally increased in both dollar volume and volatility for most parts of each month and for each business day of the week. Appendix I provides more details on these trends. Before investing any portion of its operating balance, Treasury generally seeks to maintain a stable $5 billion balance in the TGA to protect against overdraft. 
An overdraft of the TGA could occur if the anticipated receipts for the day fall short of expectation or if there are unanticipated disbursements. Treasury cannot risk an overdraft because the Federal Reserve is not authorized to lend directly to Treasury, in part to preserve the Federal Reserve’s independence as the nation’s central bank. Before 1988, as federal payments became larger and the volatility of Treasury’s operating cash balance increased, Treasury and the Federal Reserve increased the TGA target balance. According to Federal Reserve officials, improvements in the forecasting of receipts and expenditures have made it possible to avoid any permanent increases to the TGA since 1988 despite continued increases in operating balance volatility. See appendix V for more detail on Treasury’s modifications to the TGA target balance since 1988. In the past, Treasury relied on compensating balances in depositary institutions as a source of liquidity on rare occasions. For example, in the week of September 11, 2001, Treasury pulled $12.6 billion from such compensating balances to cover a financing gap caused by the cancellation of a 4-week-bill auction. However, this source of liquidity has not been available since 2004. A stable TGA balance assists the Federal Reserve in its execution of monetary policy. If Treasury’s TGA balance exceeds or falls short of its target, the Federal Reserve must neutralize its effect on bank reserves through open market operations. See appendix V for more details on how the Federal Reserve injects or withdraws cash from the banking system in response to changes in the TGA. As shown in figure 3, in 2006 the TGA balance deviated more than 20 percent from its $5 billion target 17 times. In 9 of those 17 instances, Treasury and the Federal Reserve had agreed in advance to target a balance other than $5 billion. Treasury and the Federal Reserve sometimes decide to target different balances for reasons that include increased volatility on major tax due dates and the facilitation of short-term reserve management. Although Treasury does not earn explicit interest on the TGA, it does earn an implicit return as part of the Federal Reserve’s weekly remittance to Treasury. However, the Federal Reserve told us that the amount cannot be easily identified. The implicit return Treasury receives depends on whether the purchases the Federal Reserve makes to offset the TGA balance are permanent or temporary. In a stable TGA target environment, such as exists today, the implicit return is roughly equivalent to the rate earned by the Federal Reserve on its portfolio of Treasury securities. For temporary increases in the TGA, the implicit return is roughly equal to the rate the Federal Reserve earns on its overnight repos. According to the Federal Reserve, the return cannot be isolated because it does not assign specific portions of its investment portfolio to the TGA. The Federal Reserve records the TGA on its balance sheet as a liability and offsets increases in the TGA by purchasing additional assets. While a higher TGA target balance would provide Treasury with increased overdraft protection and earn market rates of return, it could increase borrowing, which is costly whenever Treasury faces a negative funding spread. A negative funding spread occurs when the interest earned on cash balances is insufficient to cover the cost of the increased borrowing necessary to maintain these balances. 
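The cost implication of a negative funding spread can be illustrated with a simple calculation; the balance and rates below are assumptions for illustration only, not figures from this report.

```python
# Funding spread = rate earned on a cash balance minus the rate paid to borrow
# the cash. When the spread is negative, holding a larger balance is a net
# cost to Treasury. All figures below are illustrative assumptions.

def annual_cost_of_balance(balance: float, earn_rate: float, borrow_rate: float) -> float:
    """Net annual cost (negative result = net gain) of holding an extra balance."""
    spread = earn_rate - borrow_rate
    return -spread * balance

extra_balance = 2_000_000_000   # hypothetical $2 billion added to the TGA
earn_rate = 0.0500              # assumed implicit return on the balance
borrow_rate = 0.0515            # assumed rate on the additional borrowing

cost = annual_cost_of_balance(extra_balance, earn_rate, borrow_rate)
print(f"Net annual cost of the extra balance: ${cost:,.0f}")   # $3,000,000
```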
Conversely, if Treasury were to face a neutral or positive funding spread, increases would not be costly. When Treasury’s cash balances are particularly low, it may have to raise funds by issuing additional debt in order to maintain a stable and sufficient TGA balance. In order to maintain a stable TGA balance, Treasury must place operating cash above its $5 billion target in depositary institutions’ TT&L accounts or into other short-term investments. The three short-term vehicles currently used by Treasury subject Treasury to high concentration risks and have limited capacity. TT&L provides Treasury with an effective system for collecting taxes but subjects Treasury to concentration risk and offers low rates of return. To improve returns, Treasury established the TIO program in 2003, which provides near market rates of return but still subjects Treasury to concentration risk and does not alleviate Treasury’s capacity concerns. Treasury’s repo pilot, introduced in 2006, provides a third limited investment option. Treasury earned near market rates of return in the pilot, but because of its temporary status and limits in Treasury’s current legislative authority, the pilot’s features—including participants, collateral, trading terms, and clearing and settlement arrangements—are restricted and prevent Treasury from accessing the broader repo market. Table 2 shows the number of participants, investment terms, relative performance, and concentration risk of these three investment programs. The TT&L program provides Treasury with an effective system for collecting federal tax payments and helps Treasury meet its target balance in the TGA, but it subjects Treasury to concentration risk and earns a return well below market rate. In addition, the TT&L poses capacity concerns. In 2006, Treasury invested about 30 percent of its operating cash in TT&L deposits, with a daily average of $7.6 billion. TT&L Benefits: The TT&L program represents a collaboration between Treasury and over 9,000 commercial depositary institutions that collect tax payments, about 1,000 of which also hold funds and pay interest to Treasury. (See table 2.) There are three categories of participation: collectors, retainers, and investors. The majority of TT&L participants are collectors—they receive tax payments from customers and transfer the payments to Treasury’s account at the Federal Reserve. Retainers perform the same tax collection functions but may also retain specified amounts of the cash in an interest-bearing account until the money is called by Treasury. Investors not only collect and retain cash, but also may accept funds from Treasury through different investment options. In one of these options, the depositary institution agrees to accept automatic direct deposits from Treasury made hourly throughout the day in the event that Treasury cash receipts are greater than anticipated. These automatic deposits—known as dynamic investments—are an important part of the TT&L program because they are currently Treasury’s only option for placing late-day cash and helping Treasury to meet its target TGA balance. TT&L Participant Concentration: TT&L deposits are highly concentrated among a few large depositary institutions. For the past couple of years, Treasury has invested almost half of TT&L deposits with one depositary institution. Reasons for this concentration include consolidation in the banking industry over the last two decades and the lack of investment caps. 
In 2006, the five largest TT&L participants accounted for 66 percent of the total funds invested in TT&L accounts, up from 62 percent in 2005. (See tables 3 and 4.) This creates not only concentration risk but also capacity concerns. If one or two of the largest depositary institutions were to lower their TT&L balance limits or withdraw from the program entirely, Treasury’s investment capacity would fall far below that needed to accept the total amount of funds that Treasury needs to invest during peak tax collection dates. In addition, the number of depositary institutions participating in the TT&L program and thus willing to accept Treasury cash has decreased over the past few years. According to Treasury, at times it has been unable to place all of the cash it wished to invest in part because of a reduction in the number of TT&L participants. TT&L Rates of Return: The interest rate earned on deposits in retainer and investor accounts is fixed at the federal funds rate minus 25 basis points. TT&L deposits are an inexpensive source of funding relative to market alternatives for depositary institutions, but Treasury can withdraw certain funds on short notice and funds are subject to strict collateral requirements. See appendix II for a discussion of TT&L collateral requirements. When Treasury set the TT&L rate in 1978, it was a close approximation of the overnight repo rate, which Treasury considered an economically similar transaction. Treasury elected to use a proxy rate at the time because information on the daily overnight repo rate was not widely available. The repo market has grown considerably, and information about repo rates is now readily available. Since 1978 the spread between the federal funds rate and the repo rate has narrowed significantly from about 25 basis points to about 9 basis points in recent years. As a result, the spread between the TT&L rate and the overnight repo rate has grown larger, leaving Treasury earning a fixed rate on TT&L accounts that is well below market rates. (See fig. 4.) In July 1999 Treasury proposed changing the interest rate on TT&L deposits to align it with the overnight repo rate since Treasury viewed TT&L deposits as overnight investments, similar to repo transactions. However, financial institutions opposed the rate change; in 2002 Treasury modified the proposal and began exploring the short-term investment alternatives discussed later in this report, specifically TIOs and repos. Treasury’s TIO program, fully established in 2003, earns Treasury a higher rate of return than the TT&L program but shares the TT&L program’s concentration risk and Treasury’s capacity concerns in part because the same depositary institutions participate in both programs. TIO investments differ from TT&L deposits in two critical dimensions: (1) they are auctioned rather than placed at a fixed rate and (2) they are placed for a fixed number of days rather than being callable at will. Through the TIO program, Treasury auctions off portions of its excess cash at a competitive rate for a fixed number of days. The TIO program’s auction format allows Treasury to receive a competitive, market-based interest rate for its surplus cash. Meanwhile, the participating depositary institutions benefit from knowing in advance the exact amount and timing of the investment. Like Treasury’s debt auctions, TIO auctions are single-rate auctions where all successful bidders receive the same rate. Depositary institutions submit bids specifying the amount of cash they are interested in and the rate they are willing to pay. 
Treasury awards funds beginning with the highest rate bid through successively lower rates until the offering amount is filled. All successful bidders are awarded their funds at the lowest accepted rate, or stop-out rate, and bids awarded at the stop-out rate are prorated. However, Treasury awards no more than 50 percent of the total auction amount offered to any one depositary institution. While depositary institutions have no control over when funds are deposited or withdrawn from the TT&L accounts, they know exactly how long TIO funds will be deposited, and through competitive bidding have more direct influence over the amount of funds that they receive. By 2006, approximately 60 percent of Treasury’s short-term investments were shifted into TIOs. In fiscal year 2006 Treasury invested $500 billion through TIO auctions. As of February 2007, 60 TT&L depositaries participated in the TIO program, up from 43 in 2004. The preceding discussion provides additional details on how Treasury conducts TIO auctions. TIO Rates: TIOs earn a higher rate of return than TT&L deposits. In fiscal year 2006, TIO auction rates were on average 17 basis points higher than TT&L rates over the same terms, increasing Treasury’s gross return by approximately $20 million. The TIO rates were also about 3 basis points below Treasury’s benchmark for a market rate, which is based on repo rates of similar terms and collateral. There are variations among TIO auctions regarding the length of the term and the amount of cash offered that affect rates. According to a Federal Reserve study, TIO rates are most competitive for TIO term lengths of 5 days or greater, and the larger the auction size, the lower the TIO rate. TIO Participant Concentration: Although the TIO program has increased Treasury’s rate of return, it has not lessened its concentration risk, in part because TIO investors must be TT&L depositaries and they can receive up to 50 percent of funds offered by Treasury per auction. TIO investment concentration has increased in recent years. In fiscal year 2006, 50 percent of TIO funds were awarded to two depositary institutions, up from about 40 percent in fiscal year 2004. (See table 5.) TIO Collateral and Capacity: TIO collateral restrictions are similar to those in the TT&L program, and because depositary institutions participate in both programs, participants’ total capacity is divided between the two programs. Depositary institutions transfer collateral between the TIO and TT&L programs in order to participate in upcoming TIO auctions, which depletes the amount of collateral and capacity in TT&L accounts. According to Treasury, TT&L account capacity declined between 2001 and 2006, but capacity has shifted from TT&L accounts to the TIO program such that total investment capacity remained in line with the average capacity from 2001 to 2006. This shift of capacity from TT&L accounts to the TIO program presents challenges to using all of the capacity when there is a sudden and significant increase in Treasury’s cash balance (e.g., if the balance spikes up for only 1 or 2 days). There have been a few instances in the last few years in which Treasury has raised or considered raising the target Federal Reserve balance because TT&L accounts were close to capacity. Appendix II provides additional information on the types of collateral pledged in TIO auctions and how they are valued. Like the TIO program, the repo pilot provides Treasury with higher rates of return than TT&L deposits, but current legal restrictions and the pilot’s limited scope prevent Treasury from accessing a broader repo market. 
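To illustrate the single-rate, stop-out allocation just described, here is a minimal sketch in Python. The offering size, institution names, and bids are invented, and a real auction would involve additional rules this sketch omits.

```python
from itertools import groupby

# Minimal sketch of a single-rate (stop-out) TIO auction as described above:
# bids are filled from the highest rate down; all winners earn the lowest
# accepted rate; bids at that stop-out rate are prorated; and no institution
# may receive more than 50 percent of the offering. Bid data are invented.

def run_tio_auction(offering, bids):
    """bids: list of (institution, rate, amount). Returns (stop_out, awards)."""
    cap = 0.50 * offering
    remaining = offering
    awards = {}
    stop_out = None
    for rate, tier in groupby(sorted(bids, key=lambda b: -b[1]), key=lambda b: b[1]):
        if remaining <= 0:
            break
        tier = list(tier)
        tier_total = sum(amount for _, _, amount in tier)
        fill = min(remaining, tier_total)
        for name, _, amount in tier:
            award = amount * fill / tier_total                       # prorate the tier
            award = max(0.0, min(award, cap - awards.get(name, 0.0)))  # 50% cap
            awards[name] = awards.get(name, 0.0) + award
            remaining -= award
        stop_out = rate                                              # lowest rate accepted
    return stop_out, awards

stop_out, awards = run_tio_auction(
    4_000_000_000,
    [("Bank A", 0.0521, 3_000_000_000),   # invented bids
     ("Bank B", 0.0520, 2_000_000_000),
     ("Bank C", 0.0520, 2_000_000_000)])
print(stop_out, awards)   # every winner earns the 5.20 percent stop-out rate
```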
At $4 billion per day, Treasury’s repo pilot is small relative to the $1.8 trillion per day repo market. In March 2006, as part of its initiative to modernize its cash management program, Treasury began operating a 1-year pilot program to invest excess cash into repos, consistent with GAO recommendations. The objectives of the pilot were to (1) assess the effect of this type of investment operation on both Treasury and Federal Reserve operations, internal systems, and processes, and (2) explore the benefits of using repos to expand Treasury’s investment capacity and increase the return on invested funds. Initially there was only one participant; a second participant was added in August 2006. In the first 12 months of the repo pilot program, Treasury conducted 235 repo transactions and invested $645 billion altogether. Treasury’s repo investments in the second half of fiscal year 2006 made up 11 percent of its total short-term investment balance. In that first year of the repo pilot, rates were on average 21 basis points higher than TT&L rates and earned close to Federal Reserve repo rates. In its evaluation of the pilot, Treasury found that it can effectively conduct repo transactions with a limited number of counterparties without adverse effect on its or the Federal Reserve’s operations, internal systems, and processes. Repo Participants: Under current law, Treasury is limited to investing its excess cash in depositaries maintaining TT&L accounts and in obligations of the United States. As a result, it cannot invest with securities dealers who play a prominent role in the repo market. The Federal Reserve conducts all of its repos with 21 securities dealers, who are selected based on their ability to make good markets, participate meaningfully in Treasury auctions, and provide market intelligence that is useful to the Federal Reserve in the formulation and implementation of monetary policy. In 2006, the Federal Reserve had an average daily balance of $25.3 billion in repos with selected securities dealers. Repo Term and Frequency: The repo pilot program offers only repos that have a term of 1 business day. Although this term comprises the largest share of the repo market, some participants invest in repos with longer terms. In addition, the repo pilot program conducts only a single daily auction at 9 a.m. Other repo participants conduct transactions throughout the day in the broader repo market, allowing them to place cash late in the day. Repo Bids: Bidding for Treasury’s repo pilot program is conducted by telephone, which is consistent with market convention for repos with a limited number of participants. Industry experts view telephone trading as an efficient way to conduct trades for offerings with a few counterparties. A greater number of counterparties may require an electronic trading system in order to prevent delays between the time rate quotes are made and accepted. Electronic trading systems also reduce trading costs and the risk of clearing errors. In 2006 the Federal Reserve upgraded to a new electronic trading system, FedTrade, to manage its repo trades with primary dealers. Treasury officials told us that they were exploring the capabilities of an electronic system similar to that used by the Federal Reserve and its application to an expanded repo program. Repo Collateral: Because of its current investment authority, Treasury only accepts Treasury securities as collateral in its repo pilot program. 
Participants in the larger repo market, including the Federal Reserve, accept a wider range of collateral types including mortgage-backed securities and U.S. government agency securities. Although repos backed by Treasury securities constitute the largest share of the repo market, there are some important limitations to demand for such repos. Most importantly for Treasury, the demand for repos backed by Treasury securities is lowest during times when Treasury has the most cash to invest. This happens in April and May, when, in response to high tax receipts, Treasury reduces the number of Treasury bills available in the market. Additionally, the rates received on repos backed by mortgage-backed securities and U.S. agency securities are typically higher than the rates for Treasury securities. Repo Clearing and Settlement: Clearing is the process of calculating the obligations of the counterparties to make deliveries of securities or payments of cash. Settlement is the transfer of cash and securities between the party and counterparty. For repo transactions, clearing and settlement are typically done through either a delivery-versus-payment (DVP) or triparty arrangement. In a DVP arrangement, as is used in the repo pilot program, the party and counterparty complete the clearing and settlement processes. In a triparty agreement, an independent custodial bank manages the clearing and settlement process. As illustrated in figure 5 below, in a DVP transaction, cash is transferred to the party, and the securities are delivered to the counterparty or its fiscal agent. The delivery of securities is done over a secure transfer system operated by the Federal Reserve Banks, which allows the transfer of certain types of securities such as U.S. Treasury and U.S. government agency securities. In triparty repos, both counterparties maintain accounts at a third-party custodian bank that facilitates the transfer of cash and securities between accounts. A broader range of securities can be used as collateral because the securities are already in accounts at the independent custodial bank. Treasury could increase its return on investment by continuing to reduce funds in TT&L accounts and reallocating those funds to a mix of TIOs and repos. In 2006, Treasury invested an average of $7.64 billion per day in the TT&L program. Treasury generally maintains at least $2 billion in the TT&L program as a means of maintaining active participation in the program. Retaining some TT&L banks to take direct investments as part of a broadened array of investment options would likely be advantageous for Treasury, providing a more diversified set of investment options and presumably increasing overall investment capacity. As illustrated in figure 6, during certain times of the year, Treasury has large balances in TT&L accounts earning a below-market rate that could instead be invested in an expanded repo program. If Treasury had invested TT&L funds in excess of the $2 billion floor in repo investments and earned the Federal Reserve’s overnight repo rate, we estimate that Treasury could have earned an additional $12.6 million in 2006. Investing in repos could also reduce the high levels of concentration and alleviate the limited capacity in the TT&L and TIO programs by accessing the almost $2 trillion broker-dealer repo market. 
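The shape of that estimate can be sketched in a few lines of Python. The $2 billion floor and the rate definitions come from this report (the full methodology appears in appendix VI), but the daily balances and rates below are invented for illustration.

```python
# Sketch of the reallocation estimate above: each day, the TT&L balance over
# the $2 billion floor is assumed to earn the Federal Reserve's overnight
# repo rate rather than the TT&L rate (federal funds minus 25 basis points).

FLOOR = 2_000_000_000

def extra_return(days):
    """days: iterable of (ttl_balance, ttl_rate, repo_rate) tuples."""
    total = 0.0
    for balance, ttl_rate, repo_rate in days:
        excess = max(0.0, balance - FLOOR)
        total += excess * (repo_rate - ttl_rate) / 360   # one day's extra earnings
    return total

sample_days = [
    (9_000_000_000, 0.0500, 0.0516),     # invented balances and rates
    (12_000_000_000, 0.0500, 0.0516),
    (5_000_000_000, 0.0501, 0.0517),
]
print(f"Additional return over the sample days: ${extra_return(sample_days):,.0f}")
```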
In designing the operational elements of a permanent, expanded repo program, Treasury would need to consider industry investment practices and manage the risks that are associated with the selection of participants, collateral types, terms of trade, and trading arrangements. Since the repo pilot was conducted under current limited authority, Treasury did not have the opportunity to consider design decisions such as those we discuss in this section. In establishing a permanent, expanded repo program, Treasury would benefit from the insights gained in its repo pilot program and from examining recommended investment practices and federal regulations of other repo operations. Three sources of recommended short-term investment practices are the Government Finance Officers Association (GFOA), an organization that advises state and local governments’ finance officials; the Federal Reserve Policy on Payments System Risk; and the federal repo regulations issued by the Federal Deposit Insurance Corporation. Guidance for recommended short-term investment practices cites three primary objectives, in order of priority: (1) risk management, (2) liquidity, and (3) yield. Risk Management: According to the GFOA, the preservation and safety of principal is the foremost objective of short-term investments, which is accomplished by minimizing certain risks that are present in repo investments: (a) Credit Risk: The risk that a repo party will not fulfill its obligations to Treasury. (b) Concentration of Credit Risk: The risk of loss attributable to the magnitude of Treasury’s investment in a single party. (c) Custodial Risk: The risk that, in the event of a failure of a repo, Treasury will not be able to recover the full value of collateral securities that are in possession of outside parties. (d) Interest Rate Risk: The risk that changes in interest rates will adversely affect the fair value of Treasury’s investment. In a permanent repo program, Treasury will need to establish criteria to select counterparties to minimize exposure to credit risk, consider its overall exposure to each party and any of its related parent companies, and monitor its exposure to interest rate risk. In determining with whom Treasury would be willing to conduct repos, Treasury would need to monitor the possibility of losses due to the high concentration of investments with a few participants. Specifically, Treasury would need to consider its overall exposure to each counterparty and any of its related parent companies and subsidiaries in its investments. To reduce interest rate risk, Treasury already requires TT&L participants to provide a greater amount of collateral than the amount of cash received. In a permanent repo program, Treasury will also need to monitor its exposure to market/interest rate risk that would arise from accepting a wider variety of collateral and investing at times for terms longer than overnight. Liquidity: Recommended investment practices related to liquidity are designed to ensure availability of funds when needed. The GFOA identifies two elements: (1) setting the term of some repo investments to mature when cash needs are highest and (2) having some repo investments that allow the investor to obtain cash on short notice without penalty. For Treasury, cash needs are greatest on or near the beginning of each month. The ability to obtain cash on short notice might be accomplished by engaging in overnight repos that can be rolled over every day, and term selection can be sketched as shown below. 
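The following minimal sketch illustrates the first liquidity element above: capping a term repo so that it matures before the next beginning-of-month payment date. The trade date and desired term are assumptions for illustration only.

```python
from datetime import date

# Sketch of the liquidity guidance above: cap a term repo so that it matures
# no later than the first day of the next month, used here as a proxy for the
# date when large mandatory payments are made. Dates and term are assumed.

def max_repo_term(trade_date: date, desired_term_days: int) -> int:
    """Longest term (in days) that still matures by the next month's first day."""
    next_month = date(trade_date.year + trade_date.month // 12,
                      trade_date.month % 12 + 1, 1)
    days_until_payments = (next_month - trade_date).days
    return min(desired_term_days, days_until_payments)

term = max_repo_term(date(2006, 4, 20), desired_term_days=14)
print(f"Invest for {term} days")   # 11 days: matures May 1, when cash is needed
```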
Treasury’s optimal mix of overnight and longer-term repos would depend on the patterns of Treasury receipts and cash available for short-term investments and on the timing and size of expected cash needs. Yield: An expanded repo program has the potential to improve Treasury’s return on investments relative to TT&L rates while maintaining current minimal risk investment policies. Treasury has already incorporated a recommended practice in its repo pilot program related to assessing the yield performance of a repo investment program. Specifically, Treasury compared the return on its repo pilot investments to an appropriate market benchmark. In designing a permanent, expanded repo program, Treasury should consider the investment principles cited above in its selection of participants, collateral types, trading processes, and clearing and settlement arrangements. Repo Participants: Expanding the repo program to include securities dealers, with whom Treasury does not currently invest, would increase Treasury’s investment capacity and could reduce the concentration risk found in the TT&L and TIO programs. In its evaluation of the repo pilot program, Treasury raised the possibility of expanding the range of parties to include the 21 securities dealers selected by the Federal Reserve to conduct its monetary policy operations. Whether Treasury uses the same criteria used by the Federal Reserve or develops its own criteria to select an acceptable set of counterparties, expanding to securities dealers would give Treasury greater access to the repo market and expand its investment capacity. Repo Collateral: Expanding the type of collateral acceptable in a permanent repo program could also increase Treasury’s return and investment capacity. Treasury would benefit from adopting the practice of other participants in the repo market, including the Federal Reserve, which accepts a wider range of collateral types, such as mortgage-backed securities and U.S. government agency securities. For example, the Federal Reserve selects from participants’ propositions across three different types of collateral. The rates it accepts depend on the attractiveness of participant bids relative to current rates in the financing market for each particular class of collateral. Repo Trading: Treasury should consider adopting an electronic trading system if it expands beyond a small number of participants to ensure transparency and fairness. Trading in Treasury’s repo pilot program is conducted by telephone, which is consistent with market convention for repos with a limited number of participants. However, a greater number of counterparties may require an electronic trading system in order to prevent time delays, lower the risk of operational errors, and reduce trading costs. According to Treasury, it is exploring the capabilities of an electronic system similar to that used by the Federal Reserve that would allow it to conduct repo operations with a large number of parties in a transparent and fair manner. The exact costs of such a system are currently unknown. Clearing and Settlement: Treasury should consider the advantages and disadvantages of adopting a triparty clearing and settlement arrangement for an expanded repo program. A triparty arrangement would reduce clearing and settlement costs, facilitate the expansion of collateral, and increase investment flexibility. 
According to an industry expert, the primary benefit of triparty arrangements is that the securities are held by a commercial clearing bank, which reduces risk and administrative work for both repo counterparties. For Treasury, triparty arrangements would reduce the expenses of monitoring, clearing, and settlement. Triparty arrangements would also facilitate the use of a broader range of securities for collateral because custodian banks can hold classes of securities that cannot be transferred over Fedwire. In addition, triparty arrangements would expand Treasury’s processing capacity and allow Treasury to make additional repo investments later in the day to accommodate unanticipated excess cash. Although there are certain disadvantages to triparty arrangements, there may be options that Treasury could explore to reduce them. Unsecured intraday exposure may exist because there is a time lag between when cash from a repo transaction is transferred from the counterparty’s account and when the counterparty receives the collateral associated with the transaction. In addition, with a triparty arrangement, Treasury would not take possession of the pledged securities as its fiscal agent, the Federal Reserve, does in a DVP arrangement. According to Treasury, there may be a number of ways to mitigate these risks. See table 6 for a summary of triparty advantages and disadvantages. In the face of persistent federal deficits accompanied by growing net interest costs, and given the opportunities created by significant innovations in financial markets, further progress in Treasury’s short-term investment practices is possible. Treasury is to be commended for its efforts to modernize cash management that have resulted in higher returns on short-term investments while maintaining current minimal risk investment policies, but it is possible to do more. Our analysis shows that a permanent, expanded repo program could increase earnings while maintaining current minimal risk investment policies. Congress should consider providing the Secretary of the Treasury with broader authority in the design of an expanded program of repurchase agreements. Congress could note that it expects that in the selection of participants, decisions about acceptable collateral, and choice of other design features, the Secretary will follow a process designed to mitigate various types of risks, including concentration risk, credit risk, and market/interest rate risk. The decision not to legislate in detail how Treasury invests cash does not remove Congress’s oversight authority or responsibility. To assist Congress with oversight, the legislation could require the Secretary to report annually on the Treasury investment program. We recommend that the Secretary of the Treasury explore the reallocation of its short-term investments as discussed in this report and, if provided the authority to do so, implement a permanent, expanded repo program that would help Treasury meet its short-term investment objectives while maintaining current minimal risk investment policies. If provided the authority for a permanent, expanded repo program, Treasury should consider allowing broker-dealers as counterparties and expanding acceptable collateral types to alleviate capacity concerns and increase rates of return. The effects on rates of return and operational efficiencies of an electronic trading platform and a triparty clearing and settlement system should also be considered. 
When making decisions about short-term investment programs, Treasury should follow a systematic process to identify and mitigate various types of risks, including concentration risk, credit risk, and market/interest rate risk. Treasury should consider the costs and benefits of each alternative and determine whether the benefits to the federal government outweigh any costs. Treasury should also consider how its investment programs might be combined to produce outcomes that are more beneficial, and should consider the effect of its investments on similar Federal Reserve open market operations. We requested comments on a draft of this report from the Secretary of the Treasury. Treasury agreed with our findings, conclusions, and recommendations. The Fiscal Assistant Secretary’s letter is reprinted in appendix VII. Treasury also provided technical comments, which we have incorporated as appropriate. We also received technical comments from the Federal Reserve, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Chairman and Ranking Member of the House Committee on Ways and Means, the Secretary of the Treasury, the Chairman of the Federal Reserve Board of Governors, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan J. Irving at (202) 512-9142 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix VIII. We used publicly available Daily Treasury Statements to analyze the Department of the Treasury’s (Treasury) availability of cash during times of the month and days of the week during fiscal years 2003–2006. Our analysis shows that cash balances tend to be highest at the end of the month before large mandatory payments are made. Over the past 3 years, cash balances have increased in both dollar volume and volatility for most parts of each month and for each business day of the week. (See tables 7 and 8.) Treasury’s trend over the past 5 years has been to move cash available for investment out of the Treasury Tax & Loan (TT&L) Main Account and into Term Investment Option (TIO) offerings and recently into repurchase agreements (repo). Treasury piloted the TIO program in 2002, and the program became a permanent program in October 2003. The addition of the repo pilot program in March 2006 provided Treasury with an additional option for investment. (See table 9.) With the development of the TIO program and the repo pilot, Treasury’s investments in TT&L accounts have declined as it began placing more and more of its operating balance in these programs, particularly TIO, since the repo pilot did not begin until March 2006. Specifically, the share of Treasury’s three investments (not including the balance in the Treasury General Account) in TT&L accounts declined from 96 percent in fiscal year 2002 to only 36 percent in 2006. 
In contrast, the share of Treasury’s investments in the TIO program grew to over 60 percent by 2005 and remained the largest program by share of volume in 2006 at almost 60 percent. (See table 10.) In the repo pilot’s first 6 months, Treasury allocated about 11 percent of its total investments to the repo pilot on average. (See table 11.) It appears that Treasury primarily allocated funds away from TT&L and into the repo pilot rather than from TIO. TIOs as a percentage of total investments were down only slightly from 62 percent for 2005 to 60 percent for the first 6 months of the repo pilot, while TT&L deposits decreased from 38 percent to 30 percent over the same periods. This appendix provides additional information on acceptable collateral for the Department of the Treasury’s (Treasury) short-term investment programs. The first section discusses acceptable collateral in the Treasury Tax and Loan (TT&L) and Term Investment Option (TIO) programs. The second section discusses collateral distribution among Treasury’s short-term investment programs. In the third section, we describe Treasury’s Special Direct Investment (SDI) program, which provides additional capacity for Treasury in times when its operating cash balance is very high. Finally, in the fourth section we provide a table of “haircuts” that Treasury places on collateral that depositary institutions pledge in exchange for Treasury funds. A haircut is the percentage that is subtracted from the market value of the collateral. The size of the haircut reflects the perceived risk associated with the pledged assets. See figure 8. Traditionally, Treasury has accepted a wide range of collateral in the TT&L program to ensure sufficient capacity and mitigate risk. To reduce risk, Treasury requires that a greater amount of collateral be pledged than the amount of cash received. Known as a “haircut,” the excess amount pledged may increase depending on the maturity, quality, scarcity, and price volatility of the underlying collateral. In the late 1990s, faced with budget surpluses and a lack of sufficient capacity in the TT&L program, Treasury expanded the range of TT&L collateral to include asset-backed securities and also agreed to accept commercial loans in less restrictive arrangements in its SDI program. Depositary institutions pay a uniform interest rate on all deposits regardless of collateral type for both regular TT&L investments and SDI investments. Treasury restricts assets pledged in the TT&L and TIO programs to nine collateral categories. (See table 12.) While any of the nine categories of collateral may be pledged to secure TT&L funds, collateral pledged in the TIO program is restricted to collateral types specified in the TIO auction announcement. Certain assets are not acceptable in any of Treasury’s short-term investment programs, such as mutual funds and obligations of foreign countries. (See table 13.) As discussed earlier in this report, collateral acceptable in the repo pilot program is restricted to Treasury securities. Table 14 shows Federal Reserve data on the relative use of different collateral types pledged for the TT&L and TIO programs. The repo pilot only accepts Treasury securities. According to the Federal Reserve, mortgage-backed securities make up 60 percent of the collateral depositary institutions pledged for TT&L funds. In the TIO program, commercial loans make up half of the collateral depositary institutions pledged to secure Treasury funds. (See table 14.) Forty percent or less of the collateral pledged in the TT&L and TIO programs is made up of acceptable collateral types other than mortgage-backed securities and commercial loans. 
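A haircut calculation can be illustrated briefly; the haircut percentages below are invented placeholders, not the values from figure 8.

```python
# Collateral value after a haircut: the haircut percentage is subtracted from
# the market value, so riskier assets secure less Treasury cash per dollar
# pledged. The haircut levels below are invented, not the figure 8 values.

def collateral_value(market_value: float, haircut_pct: float) -> float:
    """Lendable value of pledged collateral after applying the haircut."""
    return market_value * (1 - haircut_pct)

pledged = {
    # asset class: (market value, assumed haircut)
    "Treasury securities": (100_000_000, 0.02),
    "Commercial loans":    (100_000_000, 0.20),
}
for asset, (mv, hc) in pledged.items():
    print(f"{asset}: ${collateral_value(mv, hc):,.0f} of Treasury cash secured")
```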
To address capacity limits in its operating cash balance, Treasury added the SDI program in 1982. This provides Treasury additional TT&L capacity when operating cash balances are unusually high. While collateral used to secure Treasury’s cash in regular TT&L accounts must be held by a Federal Reserve Bank (FRB) or a Treasury-authorized FRB-designated custodian, in an SDI, the depositary institution may use collateral retained on its premises in what is called an off-premises collateral arrangement. Acceptable collateral in the SDI program includes student loans, commercial loans, and one-to-four family mortgages, the last of which is only accepted in the SDI program. SDI balances earn the same rate of return as TT&L balances and may be withdrawn at any time by Treasury. Since 2002, the number and dollar amount of SDIs have decreased, in part because of the establishment of the TIO program in 2003. (See fig. 7.) Figure 8 lists the collateral value, as a percentage of market value, that Treasury assigns to each category of acceptable collateral, including U.S. government guaranteed agencies; U.S. government sponsored enterprises; collateralized mortgage obligations (AAA); U.S. government agency guaranteed loans; commercial and agricultural loans; and 1-4 family residential mortgages. Notes to the figure state that it is for informational purposes only and subject to change without notice; that it is not binding on either the Treasury or the Federal Reserve System (FRS) in any particular transaction; that all pledged collateral must be transferable and owned by the depositary free and clear of all liens, charges, or claims; and that a detailed list of acceptable collateral for the TT&L program can be obtained from Treasury’s Bureau of the Public Debt website (www.treasurydirect.gov). Although the Department of the Treasury (Treasury) receives an implicit return on Treasury General Account (TGA) balances from the Federal Reserve, the TGA is not considered an official short-term investment vehicle. However, between 1974 and 1978 a number of circumstances forced Treasury to hold the bulk of its total operating cash balance in the TGA. Prior to 1977, Treasury Tax & Loan (TT&L) depositaries were not authorized to pay interest on Treasury’s deposits. At the time, Treasury placed cash in these depositaries, which provided a number of services, such as handling subscriptions to U.S. securities, issuing savings bonds, and processing Treasury checks. However, a number of developments between 1964 and 1974 brought an end to this practice. Tax receipts grew significantly, increasing the size of TT&L accounts. Interest rates had risen considerably, providing significantly greater earnings potential on TT&L balances. There was a decline in the number of Treasury-related services that banks performed. In addition, there was no correlation between the level of service a bank provided and the amount of funds it received. As a result, it was possible for banks that provided only a few services to receive large TT&L deposits for which they paid no interest while other banks that provided numerous Treasury-related services received too little interest on TT&L deposits to offset their costs. In 1974 Treasury concluded that the benefits depositary institutions received from holding TT&L funds substantially outweighed the aggregate value of the services that these institutions provided. 
In order to recoup some of its lost earnings, Treasury pursued what it described as a “stop-gap” policy. Treasury moved all of the funds it reasonably could from its non-interest-bearing TT&L accounts to its Federal Reserve account, the TGA. In turn, the Federal Reserve acted to offset the resulting drain on bank reserves by increasing the size of its securities portfolio. This then led to larger weekly remittances to Treasury. In 1976 Treasury estimated that it received $365 million in indirect earnings from the Federal Reserve in this way. This shift of placing almost all excess cash in the TGA created problems for the conduct of monetary policy by increasing the volatility of the TGA. The average weekly swings in the TGA balance more than doubled from $533 million to $1,388 million between 1974 and 1975. As a result, the Federal Reserve had to make frequent large purchases of securities in order to reinvest the funds that the TGA was absorbing from the banking system. On some occasions the Federal Reserve was unable to offset the large swings in the TGA balance through temporary open market operations, and it had to request that Treasury redeposit funds in the TT&L accounts to avoid having to make outright purchases of securities in the secondary market. In 1977 legislation was enacted authorizing Treasury to earn interest on its short-term investments. Treasury began investing a greater share of its operating cash balance in interest-bearing accounts at commercial banks in 1978, leaving a smaller stable amount invested in the TGA. Appendix IV: Timeline of Key Treasury Activities for the Treasury Tax and Loan and Term Investment Option Programs. The Department of the Treasury (Treasury) receives reports on Federal Reserve Bank actual transactions from the previous day, and Treasury’s four Regional Financial Centers report on the Automated Clearing House payments that will be settled out of the Treasury account that day. Lockbox institutions report the estimated amount of collections that will be deposited in Treasury’s account that day. Officials from Treasury’s Office of Fiscal Projections (OFP) and Cash Forecasting Division and the Federal Reserve Bank of New York’s Open Market Desk meet independently to calculate the day’s anticipated cash flows, including tax receipts and disbursements. OFP determines the Term Investment Option (TIO) amount, then the Dynamic Investment amount, followed by the Reverse Repurchase Agreement (repo) amount, based on the estimated cash position. Officials from Treasury and the Federal Reserve compare their estimates of the next business day’s anticipated cash flows and decide what discretionary cash management actions need to be taken for Treasury to maintain its targeted account balance at the Federal Reserve. The manager of the System Open Market Account (SOMA) discusses the decisions made by Treasury’s and the Federal Reserve’s cash managers with other members of the Federal Reserve System in order to determine what actions the Federal Reserve should take in the open market. Depending on the anticipated level of reserves, the Federal Reserve either initiates repurchase agreements to increase reserves or reverse repurchase agreements to decrease reserves. Treasury begins processing same-day investments or withdrawals from institutions’ Main Accounts and Special Direct Investment (SDI) accounts. Notifications of withdrawals from institutions’ Main Accounts or SDI accounts, or both, appear in activity reports by this time. Institutions are notified of Direct Investments being placed in their accounts by this time each day. The Treasury Investment Program (TIP) monitors institutions’ pledged collateral. All bids for the day’s TIO auction are due. Treasury posts the TIO auction results. 
Dynamic investments of Treasury’s excess funds begin being transferred to participating institutions’ accounts through TIP. Institutions receiving deposits post the required collateral. Treasury deposits the amounts awarded to each bank into its respective reserve account. Treasury withdraws funds held in TIO accounts with interest. In the original timeline figure, gray boxes indicate events that do not happen on a daily basis. While the Department of the Treasury (Treasury) has not made permanent changes to the Treasury General Account (TGA) balance since 1988, Treasury continues to adjust the TGA balance and modify its target balance to accommodate major corporate and individual tax due dates. (See table 15.) Treasury also seeks to keep the target balance stable to assist the Federal Reserve in executing monetary policy. If Treasury’s TGA balance exceeds or falls short of its target, the Federal Reserve must neutralize the change in overall reserves through market interventions. If Treasury has greater amounts of short-term cash than can be invested through other investment programs, the cash would have to be deposited into the TGA. If the TGA exceeded its $5 billion target, the Federal Reserve would have to inject large amounts of reserves into the market. On the other hand, insufficient funds in the Treasury’s total operating cash balance could cause the TGA to fall below its target, and the Federal Reserve would have to take reserves out of the system. (See fig. 9.) All depositary institutions in the United States are required to maintain a certain percentage of their customers’ checking account balances as reserves. A depositary institution with a temporary shortfall in reserves can borrow funds from an institution with a surplus of reserves on a short-term basis. The interest rate that banks charge one another for this short-term lending is known as the federal funds rate. By adding or draining the level of reserves in the banking system, the Federal Reserve is able to influence the supply of reserves and thus the federal funds rate, which in turn has a significant effect on a wide range of short-term interest rates and, ultimately, the economy as a whole. The two most common operations the Federal Reserve uses to intervene in the market are outright securities purchases and repurchase agreements (repo). To address a permanent increase in the demand for reserve balances, the Federal Reserve purchases securities outright in the secondary market. When the Federal Reserve purchases securities, it credits the account of the security dealer’s depositary institution, thereby increasing the aggregate level of reserves in the banking system. Securities purchased in these operations are kept in the System Open Market Account, or SOMA, portfolio. Currently, the SOMA portfolio contains only U.S. Treasury debt. To make more frequent seasonal or daily adjustments to aggregate reserve levels, the Federal Reserve uses repos. To temporarily add (drain) reserve balances to (from) the banking system, the Federal Reserve makes a collateralized loan (borrows against collateral) for a period typically ranging from 1 to 14 days. For repo transactions, the Federal Reserve primarily accepts Treasury securities for collateral, but also accepts a small amount of federal agency securities. In fiscal year 2006, the Department of the Treasury (Treasury) invested a daily average of $12.4 billion in Term Investment Option (TIO) offerings, or almost 60 percent of its short-term investment balance. 
The rates earned through TIO investments were on average 17 basis points higher than the rates earned on Treasury Tax and Loan (TT&L) deposits over the same periods. We calculate that the value of this spread over the course of 2006 was about $20 million. To determine the value of this spread between TT&L and TIO rates, we compiled publicly available data on TIO auction award amounts, TIO auction rates, and average TT&L rates earned over the period of each TIO auction. Treasury conducted 103 TIO auctions in fiscal year 2006. To calculate the value of the spread between the TIO rate and average TT&L rate per auction, we first calculated the spread between the two rates for each auction. We then calculated the value of that spread in dollars by adjusting the rate for length of term, and multiplying it by the auction award amount. We then added up the spread value in dollars for each of the 103 auctions to obtain a total. (See table 16 below.) We estimate that if Treasury had earned an overnight repo rate on most of the funds that it invested in TT&L deposits in fiscal year 2006 instead of the TT&L rate, Treasury could have potentially earned an additional $12.6 million. Treasury generally maintains at least $2 billion in the TT&L program as a means of maintaining active participation in the program. We calculated that Treasury’s balance in TT&L accounts exceeded this minimum balance threshold in fiscal year 2006 on 276 calendar days by an average of $7 billion. Altogether, the amount of available operating cash in excess of this threshold totaled $1.9 trillion in fiscal year 2006, about three times the amount necessary to meet the minimum balance. When it set the current TT&L rate to 25 basis points below the federal funds rate in 1978, Treasury considered overnight repos to be an acceptable market-based comparison to TT&L deposits. The Federal Reserve conducts overnight repos with its primary broker-dealers. We estimate that if Treasury had invested this $1.9 trillion in a higher yielding investment earning the same rate as Federal Reserve repos, Treasury could have earned an additional $12.6 million in fiscal year 2006, or 5.4 percent of its return on available TT&L deposits. (See table 17.) To calculate this potential increase in gross return on Treasury’s short-term investments, we compiled publicly available data on short-term investments in fiscal year 2006 from Daily Treasury Statements (DTS) and the Federal Reserve. We calculated the daily balance invested in TT&L accounts, including Special Direct Investments (SDI), from DTS data as well as the effective TT&L rate. We also calculated the effective rate earned by the Federal Reserve on overnight repos for each available calendar day in 2006. On days where rate data were not available because an overnight repo was not in effect, we assumed a rate by averaging the first available rates before and after the missing rate. There were 276 calendar days in fiscal year 2006 where the daily TT&L Main Account balance exceeded $2 billion. For each day, we determined (1) what Treasury actually earned from the residual balance over $2 billion by multiplying the balance amount by the effective TT&L rate for that day, and (2) what Treasury could have earned from the residual balance by multiplying the balance amount by the actual or estimated Federal Reserve overnight repo rate. We then calculated the total dollar spread between these two returns for all 276 days. 
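The per-auction spread-value calculation described in this appendix can be sketched as follows; the auction inputs are invented for illustration, while the term-adjusted formula mirrors the methodology described above.

```python
# Sketch of the per-auction calculation: the spread between the TIO auction
# rate and the average TT&L rate over the auction term is adjusted for term
# length and multiplied by the award amount. Auction data are invented.

def spread_value(award: float, tio_rate: float, avg_ttl_rate: float, term_days: int) -> float:
    """Dollar value of earning the TIO rate rather than the TT&L rate."""
    return award * (tio_rate - avg_ttl_rate) * term_days / 360

auctions = [
    # (award amount, TIO rate, average TT&L rate over the term, term in days)
    (5_000_000_000, 0.0518, 0.0501, 7),    # invented
    (3_000_000_000, 0.0520, 0.0502, 14),   # invented
]
total = sum(spread_value(*a) for a in auctions)
print(f"Total value of the TIO-over-TT&L spread: ${total:,.0f}")
```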
In addition to the contact named above, Jose Oyola (Assistant Director), Jessica Berkholtz, Amy Bowser, Tara Carter (Analyst-in-Charge), Richard Krashevski, Thomas McCabe, Matthew Mohning, Nicolus Paskiewicz, and Albert Sim made contributions to the report. Melissa Wolf, James McDermott, Dawn Simpson, and Dean Carpenter also provided key assistance.

Delivery-versus-payment (DVP): The repo trading arrangement in which the party and counterparty complete the clearing and settlement processes.

Dynamic investments: Automatic deposits that occur when depositary institutions agree to accept direct deposits from the Department of the Treasury (Treasury) when Treasury cash receipts are greater than anticipated. Dynamic investments are made hourly throughout the day and are Treasury’s only option for placing late-day cash.

Haircut: The percentage that is subtracted from the market value of the collateral. The size of the haircut reflects the perceived risk associated with the pledged assets.

Repurchase agreement (repo): The transfer of cash for a specified amount of time, typically overnight, in exchange for collateral. When the term of the repo is over, the transaction unwinds, and the collateral and cash are returned to their original owners, with a premium paid on the cash.

Special Direct Investment (SDI): An investment vehicle that provides Treasury additional Treasury Tax and Loan (TT&L) capacity when operating cash balances are unusually high. In an SDI, the depositary institution may use collateral retained on its premises in what is called an off-premises collateral arrangement. Acceptable collateral in the SDI program includes student loans, commercial loans, and one-to-four family mortgages, the last of which is only accepted in the SDI program. SDI balances earn the same rate of return as TT&L balances and may be withdrawn at any time by Treasury.

Term Investment Option (TIO): Deposits in depositary institutions that allow Treasury to auction off portions of its excess cash at a competitive rate for a fixed number of days.

Treasury General Account (TGA): Treasury’s bank account, through which most federal receipts and disbursements flow. It is maintained across the 12 Federal Reserve Banks and rolled into one account at the end of each business day.

Treasury Tax and Loan (TT&L) program: A collaboration between Treasury and over 9,000 commercial depositary institutions that collect tax payments. About 1,000 of these depositary institutions also hold funds and pay interest to Treasury.

Triparty: The repo trading arrangement in which an independent custodian bank acts as an intermediary between the two parties in the transaction and is responsible for clearing and settlement operations.
Growing debt and net interest costs are a result of persistent fiscal imbalances, which, if left unchecked, threaten to crowd out spending for other national priorities. The return on every federal dollar that the Department of the Treasury (Treasury) is able to invest represents an opportunity to reduce interest costs. This report (1) analyzes trends in Treasury's main receipts, expenditures, and cash balances, (2) describes Treasury's current investment strategy, and (3) identifies options for Treasury to consider for improving its return on short-term investments. GAO held interviews with Treasury officials and others and reviewed related documents. In managing the funds that flow through the federal government's account, Treasury frequently accumulates cash because of timing differences between when borrowing occurs, taxes are received, and agency payments are made. Treasury often receives large cash inflows in the middle of the month and makes large, regular payments in the beginning of the month. Treasury uses three short-term vehicles--Treasury Tax & Loan (TT&L) notes, Term Investment Option (TIO) offerings, and limited repurchase agreements (repo)--to invest operating cash. Before Treasury invests any portion of its operating cash balance, Treasury generally targets a $5 billion balance in its Treasury General Account (TGA), which is maintained across the 12 Federal Reserve Banks. The TT&L program provides Treasury with an effective system for collecting federal tax payments while assisting the Federal Reserve in executing monetary policy, but it subjects Treasury to concentration risk and earns a return well below the market rate. The TIO program earns a greater rate of return but it also subjects Treasury to concentration risk. Both programs also present capacity concerns. Treasury began testing repos through a pilot program in 2006. Repos have earned near market rates of return, but because of the pilot's scope and the current, limited legislative authority under which it operates, the repo participants, collateral, trading terms, and trading arrangements are restricted. A permanent, expanded repo program could permit Treasury to earn a higher rate of return, expand investment capacity, and reduce concentration risk. If given authority to design such a program, Treasury would need to tailor it to meet liquidity needs and to achieve a higher rate of return while minimizing risks that are associated with the selection of program participants, collateral types, terms of trade, and trading arrangements.
Under Medicaid managed care, states contract with health plans and prospectively pay the plans a fixed monthly rate per enrollee to provide or arrange for most health services. These contracts are known as “risk” contracts because plans assume the risk for the cost of providing covered services. States’ processes for developing rates may vary in a number of ways, including the type and time frames of data they use as the basis for setting rates, referred to as the base-year data, and what approach they use to negotiate rates with health plans. After rates are developed, an actuary certifies the rates as actuarially sound for a defined period of time, typically 1 year. In order to receive federal funds for its managed care program, a state is required to submit its rate-setting methodology and rates to CMS for review and approval. This review, completed by CMS regional office staff, is designed to ensure a state complies with federal regulatory requirements for setting actuarially sound rates.

CMS published a final rule on June 14, 2002, outlining the agency’s regulatory requirements for actuarially sound rates. These requirements largely focus on the process states must use in setting rates. For example, the regulations require states to document their rate-setting methodology and include an actuarial certification of rates. In addition, the regulations include a requirement that when states use data from health plans as the basis for rates, they must have plan executives certify the accuracy and completeness of their data. The regulations do not include standards for the type, amount, or age of the data that states may use in setting rates. The regulations also do not include standards for the reasonableness or adequacy of rates. In the preamble to the final rule, CMS noted that health plans were better able to determine the reasonableness and adequacy of rates when deciding whether to contract with a state.

In July 2003, CMS finalized a detailed checklist that regional office staff could use when reviewing states’ rate-setting submissions for compliance with the actuarial soundness requirements and that states and states’ actuaries could use when developing rates. The checklist includes citations to, and a description of, each regulatory requirement; guidance on what constitutes state compliance with the requirement; and spaces for the CMS official to check whether each requirement was met and cite evidence from the state’s submission for compliance with the requirement. The checklist also provides guidance on the level of review that should occur for different types of rate changes. When the state is developing a new rate, or using new actuarial techniques or data to change previously approved rates, the checklist indicates a full review should be done, which entails reviewing the state’s submission for compliance with all of the requirements covered in the checklist. For adjustments to rates that were previously approved as meeting the regulations, the checklist indicates a partial review should be done; a partial review focuses on a few key requirements in the checklist, such as ensuring that the state has included a certification of rates from a qualified actuary.

As of June 2010, CMS was in the process of revising the checklist. One of the planned changes was to emphasize the need for more complete encounter data because CMS officials indicated that the agency has determined that encounter data that do not include pricing information are not sufficient for setting rates.
CMS expects to complete the checklist revisions by November 2010. (See table 1 for a summary of the sections in CMS’s checklist.) According to CMS officials, the regional officials responsible for conducting rate-setting reviews may have a financial background, but are not actuaries. Officials also noted that CMS’s Office of the Actuary (OACT), which provides actuarial advice to other offices within CMS, is generally not involved with Medicaid rate-setting reviews. However, they indicated that when the CMS officials responsible for rate-setting reviews have concerns with a state’s rate-setting methodology and cannot resolve those concerns with the state, they can contact OACT to request an independent review.

CMS’s regulations require that actuarially sound rates be developed in accordance with generally accepted actuarial principles and practices. There is no Actuarial Standard of Practice (ASOP) that applies to actuarial work performed to comply with CMS’s regulations. However, in 2005, the American Academy of Actuaries published a practice note that provides nonbinding guidance on certifying Medicaid managed care rates. The practice note includes a proposed definition for “actuarial soundness,” as there was no other working definition of the term that would be relevant to the actuary’s role in certifying Medicaid managed care rates. Under the definition, rates are actuarially sound if, for the period of time covered by the certification, projected premiums provide for all “reasonable, appropriate, and attainable costs”; also under the definition, rates do not have to encompass all possible costs that any health plan might incur. The note emphasizes that the definition only applies to the certification of Medicaid managed care rates, and that it differs from the definition used when certifying a health plan’s rates.

The practice note also provides information on the actuary’s role in assessing the quality of data used to set rates and refers the actuary to the ASOP on data quality for further guidance. The practice note explains that if the actuary is involved in developing the rate, then the actuary would consider all available data, including fee-for-service (FFS) data, Medicaid managed care encounter data, and Medicaid managed care financial reports and financial statements. The actuary would typically compare data sources for reasonableness and check for material differences when determining the preferred source or sources for the base-year data. The ASOP on data quality clarifies that while actuaries should generally review the data for reasonableness and consistency, they are not required to audit the data. The ASOP also explains that the accuracy and completeness of the data are the responsibility of those that provided them, namely the state or health plans.

CMS has been inconsistent in its review of states’ rate setting. In the six CMS regional offices we reviewed, CMS had not reviewed one state’s rate setting for compliance with the actuarial soundness requirements and had not conducted a full review for another. We also identified a number of other inconsistencies in CMS’s review of states’ compliance with the actuarial soundness requirements. Variation in CMS regional offices’ practices contributed to these inconsistencies in oversight. In the six CMS regional offices we reviewed, we found inconsistencies in CMS’s review of states’ rate setting, including significant gaps in the agency’s oversight of two states’ compliance with the actuarial soundness requirements.
First, CMS had not reviewed one state’s (Tennessee) rate setting for compliance with the actuarial soundness requirements or approved the state’s rates. In 2007, Tennessee began transitioning its managed care program, which included all of the state’s approximately 1 million Medicaid enrollees, to risk contracts that were subject to the actuarial soundness requirements. Since moving to risk contracts, the state submitted at least two actuarial reports to CMS’s Atlanta regional office indicating the program change, but these documents did not trigger a CMS review. These reports did not include actuarial certifications, and Tennessee officials confirmed that the state’s rates had not been certified by an actuary, which is a regulatory requirement. As a result, according to CMS officials, Tennessee received, and is continuing to receive, approximately $5 billion a year in federal funds for rates that we determined had not been certified by an actuary or assessed by CMS for compliance with the requirements. Based on issues we raised during our review, CMS determined that Tennessee was not in compliance with the actuarial soundness requirements and, as of June 2010, was working to bring the state into compliance.

Second, while CMS officials said that all states should have had a full review of rate setting after the actuarial soundness requirements became effective in August 2002, it appeared that CMS officials had not completed a full rate-setting review for Nebraska. CMS had no documentation of its last full review of Nebraska’s rate setting, but officials believed that the last full review was completed in 2002. According to Nebraska officials, the state last made significant changes to its rate setting for the state fiscal year beginning in 2001, which, according to criteria in CMS’s checklist, would have triggered a full CMS review. Based on what CMS and Nebraska officials told us, CMS’s last full review was likely done before the actuarial requirements became effective. As a result, Nebraska received federal funds for more than 7 years for rates that may not have been in compliance with all of the actuarial soundness requirements.

In addition to these gaps in oversight, we found inconsistencies in the reviews CMS completed. In instances when CMS did a full rate-setting review, it was unclear whether CMS consistently ensured that states met all of the actuarial soundness requirements. We found evidence that the rates in all 28 of the CMS files we reviewed were certified by a member of the American Academy of Actuaries, as is required by the regulations. However, the extent to which CMS ensured state compliance with other aspects of the actuarial soundness requirements—such as the requirement that rates be based only on services covered under the state’s Medicaid plan or costs related to providing these services—was unclear. For example, in nearly a third of the files we reviewed, or 8 of 28 files, CMS officials did not use the rate-setting checklist to document their review; therefore we could not determine whether CMS ensured that states were in compliance with all of the requirements. In 17 of the 20 remaining files where the CMS official used the checklist, the official cited evidence of the state’s compliance for some requirements, but not others. When officials did cite evidence, the evidence did not always appear to meet the requirements.
For example, one of the requirements in the regulations is that states provide an assurance that rates are based only on services covered under the state’s Medicaid plan or costs related to providing these services. Of the 19 files where CMS officials cited evidence of such an assurance, we were unable to locate the assurance in 2 of the files. Another requirement is that states include a comparison of expenditures under the previous year’s rates to those projected under the proposed rates. In the 15 files where CMS cited evidence of the comparison of expenditures, we did not find a comparison that appeared to meet the requirement in 2 of the files. See table 2 for more information on the extent to which evidence was cited in the CMS files we reviewed.

Finally, CMS did not consistently review states’ rate setting for compliance with the actuarial soundness requirements prior to the new rates being implemented. In 20 of 28 files we reviewed, we found that CMS completed its review of rate setting after the state had begun implementing the proposed rates; that is, after the effective date of the proposed rates. CMS officials told us that a variety of factors could delay the approval of rates, including states submitting a request for approval after implementing the rates. CMS officials further explained that they did not consider a state to be out of compliance with the actuarial soundness requirements until the end of the federal fiscal year quarter in which the state implemented the unapproved rates. Of the 20 files where CMS approved rates after the state implemented them, 13 had rates that were approved more than 3 months after the state implemented the rates, which means that the rates were approved after the end of the quarter in which they were implemented. CMS officials confirmed that the agency generally continued to provide federal funds for the states’ managed care contracts even in cases where the rates were not approved by the end of the quarter. According to CMS officials, if the state failed to gain CMS approval or had to lower the rates to achieve approval, then CMS would reduce future federal reimbursement to account for federal funds paid to states for rates that had not been approved. However, when CMS reviews states’ rate setting after states have begun implementing the rates, its review may result in changes to states’ rate-setting methodology; this could lead to retroactive changes, including reductions, in health plans’ rates. The possibility of rates being decreased retroactively may make it difficult for health plans to assess the reasonableness and adequacy of rates when contracting with states, an assessment that CMS relies on as a check of states’ rate setting.

Variation in a number of regional office practices contributed to the inconsistency in CMS’s oversight of states’ rate setting. Regional offices varied in the extent to which they tracked state compliance with the requirements, the extent to which they withheld federal funds, their criteria for doing full and partial reviews of rate setting, and what they considered to be sufficient evidence for meeting the requirements.

Tracking compliance. Officials from all of the regional offices we spoke with told us that they tracked basic information regarding the status of the CMS review process, such as when a state’s submission was received and when CMS’s approval letter was issued.
However, based on our interviews with CMS regional officials, we found that four of the six regional offices did not track information that would allow them to identify states that were not in compliance with actuarial soundness requirements, such as the beginning and end dates of the rates specified by the actuary in the certification. Officials from the remaining two regional offices, Kansas City and San Francisco, told us they tracked the effective dates of approved rates. Withholding funds. There was also variation among regional offices in the conditions that had to be met in order for states to receive federal funds. For example, officials from the San Francisco regional office told us that they did not release federal funds to states until the states’ managed care contract and rates had been approved. Officials said that the office had withheld funds in several cases until the state demonstrated compliance with the requirements. For example, from October 2008 through April 2010, the San Francisco regional office reported withholding a total of $302.7 million in federal funding for Hawaii because the state’s contracts and rates did not meet the actuarial soundness requirements. In contrast, officials we interviewed from the Atlanta regional office said that the office would release federal funds to a state even if the state’s rates had not yet been approved by CMS. Criteria for full and partial reviews. CMS regional officials had different interpretations of when full versus partial reviews of rate setting were necessary. For example, officials from the New York regional office told us that they completed a full review for each rate-setting submission received, regardless of the changes made to rates or rate setting. In contrast, a Kansas City regional office official told us that she completed a partial review in cases where the state adjusted the rates but had not changed the data used as the basis for rates. Sufficient evidence for compliance. Regional office officials varied in how they determined sufficient evidence for state compliance with certain requirements. For example, for the requirement that rates are for Medicaid-eligible individuals covered under the contract, officials from the San Francisco regional office told us that, while they had verified information provided by states on the populations covered under the rates, they mainly looked for an assurance from the state that rates were for eligible populations. In contrast, a Kansas City regional office official explained that an assurance from the state alone would not be sufficient. Rather, the official would require evidence of the eligible populations included in, and excluded from, the rate-setting methodology. Other variations. Variations in other regional office practices may also have contributed to the inconsistency in CMS oversight. For example, management oversight of rate-setting reviews in regional offices varied. A Kansas City regional official who reviews states’ rate setting told us that, prior to approving states’ rates, she submitted memoranda outlining the impact of states’ proposed rate changes and the rationale for recommending approval of the package to her regional office managers. In contrast, officials from the New York regional office told us that most officials responsible for reviewing and approving states’ rate setting worked independently and managers did not review a completed checklist. 
Other variations in practices that may have had an effect on CMS oversight included differences in training and standard procedures for conducting and documenting reviews. As a result of our review, CMS took a number of steps that may address some of the variation in regional office practices. For example, officials from two regional offices told us that their offices were implementing new standard procedures to address inconsistencies in reviews identified through the course of our work, and in December 2009, CMS began requiring that regional offices use the checklist in reviewing all states’ rate-setting submissions and assure central office of its use before approving a state’s rates. However, as we reported above, variations existed even when the checklist was used, such as in the extent to which CMS officials using the checklist cited evidence of compliance for each of the actuarial soundness requirements.

CMS’s efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans, which did not provide the agency with sufficient information to ensure data quality. CMS regulations require states to describe the data used as the basis for rates and provide assurances from their actuaries that the data were appropriate for rate setting. The regulations also specify that states using data submitted by the health plans as the basis for rates must require executives from the health plans to attest that the data are accurate, complete, and truthful. The regulations do not include requirements for the type, amount, or age of data or standards for the reasonableness or adequacy of rates. Additionally, CMS does not require states to submit documentation about the quality of the data used to set rates.

In our interviews with regional office officials, we found that, when reviewing states’ descriptions of the data used for rate setting, CMS officials focused primarily on ensuring the appropriateness of the data used by states to set rates rather than their reliability. This included reviewing the specific services and populations included in the base-year data or checking for assurances of appropriateness from the states’ actuaries. CMS officials noted that if they had concerns with the quality of a state’s data they would ask the state questions. None of the officials, however, reported taking any action beyond asking questions. With limited information on the quality of data used to set rates, CMS cannot ensure that states’ managed care rates are appropriate and risks misspending billions of federal and state dollars.

Actuarial certification does not ensure that the data used to set rates are reliable. In particular, 9 of the 28 files we reviewed included a disclaimer in the actuary’s certification that if the data used were incomplete or inaccurate then the rates would need to be revised. Additionally, in more than half of the 28 files we reviewed, the actuaries noted that they did not audit or independently verify the data and relied on the state or health plans to ensure that the data were accurate and complete. Officials from three of the five health plans we spoke with raised concerns about the completeness of the encounter data used by states to set rates. Additionally, state auditors in Washington have raised concerns about the lack of monitoring of the accuracy of data used for rate setting.
The auditors found that the state did not verify the accuracy of the data used as the basis for Medicaid managed care rates in fiscal years 2003 through 2007. The state auditor’s report from fiscal year 2007 concluded that the risk of paying health plans inflated rates increased when the accuracy of data used to establish rates could not be reasonably assumed to be correct.

States have information on the quality of data used for rate setting—information that CMS could obtain. State officials we spoke with reported having information on, and efforts intended to ensure, the quality of the data used to set rates. For example, New Jersey officials told us that the state tested the reliability and accuracy of the health plan financial data used to set rates against encounter data and required health plans to have an independent auditor review selected portions of the financial data. Additionally, Arizona officials indicated that the state periodically completes validation studies of the state’s encounter data in which they traced a sample of the encounters back to individuals’ medical records. State officials indicated that CMS used to require the state to submit results of these studies as a condition of operating its managed care program. However, given the state’s extensive experience with managed care, CMS no longer requires the state to submit these studies for all participating health plans. (See app. III for a summary of selected states’ efforts intended to ensure data quality.) Without requiring and reviewing information on states’ data quality efforts, CMS cannot ensure that these data are of sufficient quality to be used for setting rates.

In addition to information from states, CMS conducts audits that could provide CMS officials with relevant information about the quality of the data used to set rates. For example, when describing the state’s efforts to ensure the quality of data used to set rates, officials from South Carolina noted that CMS periodically reviews the state’s FFS data through the Payment Error Rate Measurement (PERM) program. Error rates calculated using FFS and encounter data through the PERM program could provide CMS with insights regarding the quality of the data that some states use to set rates. In CMS’s rate-setting review file for South Carolina, however, there was no discussion of PERM results by either the state or CMS. CMS central office officials confirmed that regional office staff do not consider the results of data studies, such as state validation or PERM program reports, when reviewing states’ rate-setting submissions.

CMS also could have conducted or required periodic audits of the data used to set rates. In Medicare Advantage, which is Medicare’s managed care program, CMS is required to conduct annual audits of the financial records of at least one-third of the organizations participating in the program. For Medicaid, however, CMS had not conducted any recent audits or studies of states’ rate setting, including the quality of data used. Specifically, officials in all six of the regional offices we spoke with told us that they had not performed any audits or special studies of states’ rate setting. Officials from CMS’s central office were also not aware of any recent audits or studies done by the four other regional offices.
In addition, officials from CMS’s central office told us that they could recall only one instance, in the nearly 8 years since the regulations were issued, in which OACT arranged for an independent assessment of a state’s rate setting; that assessment was done more than 2 years ago.

The statutory and regulatory requirements for actuarially sound rates are key safeguards in efforts to ensure that federal spending for Medicaid managed care programs is appropriate, which could help avoid significant overpayments and reduce incentives to underserve or deny enrollees’ access to needed care. CMS, however, has been inconsistent in ensuring that states are complying with the actuarial soundness requirements and does not have sufficient efforts in place to ensure that states are using reliable data to set rates. During the course of our work, CMS took steps to address some of the variation in regional office practices that contributed to inconsistencies in overseeing state compliance, such as requiring regional office officials to use the checklist in reviewing all states’ rate-setting submissions. While these are positive steps, they do not address all of the variations in regional office practices that contributed to inconsistencies in CMS’s oversight of rate setting. For example, these steps do not address variations in tracking state compliance, which may have led to CMS’s failure to review Tennessee’s rates for compliance with the actuarial soundness requirements. Additionally, the steps taken do not address the variation in what evidence CMS officials considered sufficient for compliance, how officials used the checklist to document their reviews, and what conditions were necessary for federal funds to be released.

CMS also does not have sufficient efforts in place to ensure the quality of the data states used to set rates, relying on assurances from states without considering any other available information on the quality of the data used. By relying on assurances alone, the agency risks reimbursing states for rates that may be inflated or inadequate. As a result of the weaknesses in CMS’s oversight, billions of dollars in federal funds were paid to one state for rates that were not certified by an actuary, and billions more may be at risk of being paid to other states for rates that are not in compliance with the actuarial soundness requirements or are based on inappropriate and unreliable data. Given the complexity of overseeing states’ unique and varied Medicaid programs, it is appropriate that CMS would allow for flexibility in states’ rate setting and would expect states to have the primary responsibility for ensuring the quality of the data used to set rates. However, CMS needs to ensure that all states’ rate setting complies with all of the actuarial soundness requirements and needs to have safeguards in place to ensure that states’ data quality efforts are sufficient. Improvements to CMS’s oversight of states’ rate setting will become increasingly important as coverage under Medicaid expands to new populations that states may not have experience serving and for which states may have no data on which to base rates.

To improve oversight of states’ Medicaid managed care rate setting, we recommend that the Administrator of CMS take three actions.
To improve consistency in the oversight of states’ compliance with the Medicaid managed care actuarial soundness requirements, we recommend that the Administrator of CMS:

• implement a mechanism for tracking state compliance, including tracking the effective dates of approved rates; and

• clarify guidance for CMS officials on conducting rate-setting reviews. Areas for clarification could include identifying what evidence is sufficient to demonstrate state compliance with the requirements, the conditions necessary for federal funds to be released, and how officials should document their reviews.

To better ensure the quality of the data states use in setting Medicaid managed care rates, we recommend that the Administrator of CMS make use of information on data quality in overseeing states’ rate setting. CMS could, among other things:

• require states to provide CMS with a description of the actions taken to ensure the quality of the data used in setting rates and the results of those actions;

• consider relevant audits and studies of data quality done by others when reviewing rate setting; and

• conduct or require periodic audits or studies of the data states use to set rates.

We provided a draft of this report to HHS for its review and comment. HHS concurred with all three of our recommendations, and commented that it appreciated our efforts to highlight improvements that CMS can make in its oversight of states’ compliance with Medicaid managed care actuarial soundness requirements, as well as its focus on the quality of data used to set managed care rates. Moreover, HHS noted that CMS has identified many of the same issues. (See app. IV for a copy of HHS’s comments.)

HHS agreed with our two recommendations related to improving the consistency of CMS’s oversight, namely that CMS implement a mechanism for tracking state compliance with the actuarial soundness requirements and clarify guidance for CMS officials on conducting rate-setting reviews. HHS noted that CMS has established a managed care oversight team to develop and implement a number of improvements in its managed care oversight, some of which will address our recommendations. These improvements included CMS’s plans to develop standard operating protocols for the review and approval of Medicaid managed care rates and provide comprehensive training to CMS staff on all aspects of the new process and requirements. As CMS implements efforts aimed at improving its oversight, we reiterate the need to implement a mechanism for tracking state compliance with actuarial soundness requirements, including the effective dates of rates.

HHS also agreed with our recommendation that CMS make use of information on data quality in overseeing states’ rate setting. In commenting on our finding related to CMS’s limited efforts to ensure data quality, HHS noted that a number of requirements within the Patient Protection and Affordable Care Act (PPACA) will give CMS additional authority and responsibility for acquiring and utilizing Medicaid program data. In response to our recommendation, HHS noted that, as part of a broader effort to redesign how it collects Medicaid data, CMS will be setting standards for the type and frequency of managed care data submissions by states. HHS commented that with more complete data at its disposal, CMS will be able to better assess the underlying quality of data submissions and, thus, better execute its oversight and monitoring responsibilities. CMS should use these assessments and other available information when overseeing states’ rate setting.
Finally, HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Administrator of CMS and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

To assess the Centers for Medicare & Medicaid Services’s (CMS) oversight of states’ compliance with the Medicaid managed care actuarial soundness requirements, we conducted a structured review of CMS files from 6 of the 10 CMS regional offices. We selected CMS regional offices that:

• represented at least 5 of the 10 CMS regional offices,

• collectively had oversight responsibility for at least 65 percent of the 34 states with comprehensive Medicaid managed care programs, and

• were geographically diverse and oversaw states with Medicaid managed care programs ranging in size.

The six regional offices that we selected for our review had oversight responsibility for 26 of the 34 states (or 76 percent) with comprehensive Medicaid managed care programs. According to information from CMS, these 26 states accounted for about 85 percent of Medicaid managed care enrollment nationally in 2008 and state program size ranged from 8 percent of Medicaid enrollees in Illinois to 100 percent in Tennessee. (See table 3.)

We conducted a structured review of a selection of files from the six CMS regional offices. Specifically, we reviewed the files for CMS’s rate-setting reviews of the most recently approved contract for each state’s comprehensive managed care program, or, for states with multiyear contracts, the file for the most recent full review of rate setting completed as of October 31, 2009. Several states in the selected regions had multiple comprehensive managed care programs that had separate contracts and rate-setting processes, each subject to CMS review and approval. For states that had two programs, we selected the file for the program CMS officials indicated was the largest, as defined by the number of enrollees and estimated expenditures. For the states that had more than two programs, we selected the files for the two largest programs. For 2 of the 26 states overseen by the six regional offices (Nebraska and Tennessee), CMS had not done a review that met our criteria, so we did not review a file for those states. In total, we reviewed 28 files, which covered 24 states, 4 of which had two or more programs for which CMS did separate reviews. (See table 4.)

As part of our file review, we assessed the degree to which CMS documented its review. Specifically, we determined whether the CMS official completed CMS’s checklist—a tool CMS developed for regional office staff to use when reviewing states’ rate-setting submissions for compliance with the actuarial soundness requirements. For those files where the CMS official did not complete the checklist and provided no other documentation of the review, we did no further assessment of CMS’s review. For the files where the CMS official completed the checklist, we assessed the extent to which CMS ensured that the state complied with the actuarial soundness requirements.
To do this, we identified several requirements of the regulations, including that rates were certified by a qualified actuary, that rates were based on covered services for eligible individuals, and that the state documented any adjustments to the base-year data. For these requirements, we assessed whether (1) CMS documented that the state met the requirement, (2) CMS cited evidence for the assessment that the state was in compliance, and (3) the cited evidence was consistent with the guidance in CMS’s checklist. Additionally, as part of our review, we summarized descriptive elements of states’ rate setting and rates. For example, we documented the types of data used as the basis for rates and how the state’s rates changed from the prior year. To ensure the accuracy of the information collected as part of our structured review of the files, we conducted independent verifications of each review.

To describe state views of the Centers for Medicare & Medicaid Services’s (CMS) oversight of state compliance with the Medicaid managed care actuarial soundness requirements and state efforts to ensure the quality of the data used to set rates, we selected 11 of the 34 states with comprehensive Medicaid managed care programs and interviewed officials from those states’ programs. We selected states that:

• varied in the size of their Medicaid managed care programs, as defined by the numbers of managed care enrollees, the proportion of states’ Medicaid population that were in managed care, and the number of managed care organizations (MCO) participating in the program; and

• overlapped with the oversight responsibilities of the six selected CMS regional offices.

Table 5 provides information about the selected states. The 11 states we interviewed used a combination of approaches intended to ensure the quality of the data used in Medicaid managed care rate setting. These included front-end efforts intended to prevent errors in data reported by providers and health plans, reconciliation methods to help ensure the reliability and appropriateness of reported data, and in-depth reviews that identified and addressed issues of ongoing concern. See table 6 for a summary of the selected states’ efforts intended to ensure data quality.

In addition to the contact named above, Michelle Rosenberg, Assistant Director; Joseph Applebaum, Chief Actuary; Susan Barnidge; William A. Crafton; Drew Long; Kevin Milne; and Dawn D. Nelson made key contributions to this report.
Medicaid managed care rates are required to be actuarially sound. A state is required to submit its rate-setting methodology, including a description of the data used, to the Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) for approval. The Children's Health Insurance Program Reauthorization Act of 2009 required GAO to examine the extent to which states' rates are actuarially sound. GAO assessed CMS oversight of states' compliance with the actuarial soundness requirements and efforts to ensure the quality of data used to set rates. GAO reviewed documents, including rate-setting review files, from 6 of CMS's 10 regional offices. The selected offices oversaw 26 of the 34 states with comprehensive managed care programs; the states' programs varied in size and accounted for over 85 percent of managed care enrollment. GAO interviewed CMS officials and Medicaid officials from 11 states that were chosen based in part on variation in program size and geography.

CMS has been inconsistent in reviewing states' rate setting for compliance with the Medicaid managed care actuarial soundness requirements, which specify that rates must be developed in accordance with actuarial principles, appropriate for the population and services, and certified by actuaries. Variation in CMS regional office practices contributed to this inconsistency in oversight. For example, GAO found significant gaps in CMS's oversight of two states. First, the agency had not reviewed Tennessee's rate setting for multiple years and only determined that the state was not in compliance with the requirements through the course of GAO's work. According to CMS officials, Tennessee received approximately $5 billion a year in federal funds for rates that GAO determined had not been certified by an actuary, which is a regulatory requirement. Second, CMS had not completed a full review of Nebraska's rate setting since the actuarial soundness requirements became effective, and therefore may have provided federal funds for rates that were not in compliance with all of the requirements. Variation in a number of CMS regional office practices contributed to these gaps and other inconsistencies in the agency's oversight of states' rate setting. For example, regional offices varied in the extent to which they tracked state compliance with the actuarial soundness requirements, their interpretations of how extensive a review of a state's rate setting was needed, and their determinations regarding sufficient evidence for meeting the actuarial soundness requirements. As a result of GAO's review, CMS took a number of steps that may address some of the variation that contributed to inconsistent oversight, such as requiring regional office officials to use a detailed checklist when reviewing states' rate setting. However, additional steps are necessary to prevent further gaps in oversight and additional federal funds from being paid for rates that are not in compliance with the actuarial soundness requirements.

CMS's efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans--efforts that did not provide the agency with enough information to ensure the quality of the data used. CMS's regulations do not include standards for the type, amount, or age of the data used to set rates, and states are not required to report to CMS on the quality of the data.
When reviewing states' descriptions of the data used to set rates, CMS officials focused primarily on the appropriateness of the data rather than their reliability. With limited information on data quality, CMS cannot ensure that states' managed care rates are appropriate, which places billions of federal and state dollars at risk for misspending. States and other sources have information on the quality of data used for rate setting--information that CMS could obtain. In addition, CMS could conduct or require periodic audits of data used to set rates; CMS is required to conduct such audits for the Medicare managed care program.

GAO recommends that CMS implement a mechanism to track state compliance with the requirements, clarify guidance on rate-setting reviews, and make use of information on data quality in overseeing states' rate setting. HHS agreed with GAO's recommendations and described initiatives underway that are aimed at improving CMS's oversight.
The service academies are one of the main sources of newly commissioned officers. Over the last 20 years, the academies have provided about 10 percent of annual new officer accessions, with the bulk of the remainder coming from the Reserve Officers Training Corps and officer candidate schools.

Each of the academies operates adjudicatory systems to provide students with training and maintain discipline and standards. The conduct system at each academy establishes rules and regulations and provides an administrative process for dealing with those accused of violating them. In addition, each of the academies has a largely student-run honor system that prohibits lying, cheating, and stealing. Although each institution’s processes differ somewhat, students accused of honor or conduct violations at the various academies experience generally similar investigative and separation procedures.

The honor and conduct adjudicatory systems at each academy are considered by the academies to be administrative systems. That is, they are intended primarily as an aid in maintaining discipline and order. As such, they are nonjudicial in character. The U.S. Constitution, through the President, gives a commanding officer executive authority (the right to lead). The Congress, through the Uniform Code of Military Justice (UCMJ), provides commanders with quasi-judicial responsibility when they act in an administrative (nonjudicial) punishment capacity, and judicial authority when they act as a court-martial convening authority.

Academy students are expected to adhere to civilian laws, UCMJ, and service and academy directives and standards. Unless excluded by statute, all statutory provisions applicable to military members are also applicable to cadets. Article 2 of the UCMJ specifically cites “cadets, aviation cadets, and midshipmen” as being subject to UCMJ. The superintendent of each academy has also been designated as a general court-martial convening authority.

Conduct violations are grouped into categories, depending upon the seriousness of the offense. For minor offenses, adjudication and punishment are determined by a member of the student or officer chains of command. Students who violate more serious disciplinary standards are subject to administrative disciplinary hearings or court-martial for serious violations of UCMJ. Punishments range from demerits to expulsion and include a wide range of intermediate sanctions.

Each of the academies also has a largely student-run honor system that is intended to set the standard for moral behavior of the cadets and midshipmen with the ultimate objective of building the trust and integrity necessary for military teams to work effectively. At each academy, a committee of cadets is elected annually by the student body to administer the honor system. This group also provides members to sit on student honor boards. All accused honor violators are provided certain due process rights in the adjudication of their cases, and potential punishment depends on the circumstances of each case. Under the honor systems, anyone may report a cadet/midshipman for a suspected honor violation, including the individual himself/herself. When a possible honor violation is reported, a student investigator or investigative team is appointed. If the investigation finds sufficient evidence that an honor violation has occurred, a formal honor hearing is convened.
If the honor board finds an individual guilty, the case file is routed to the Commandant and the Superintendent, who review the evidence and decide upon punishment. The service secretary is the approval authority for expulsions.

The honor systems at the academies consist of more than the honor codes and the processes established for investigating and adjudicating alleged violations. A key part of the honor systems involves the academies’ efforts to inculcate their students with a high standard of ethics and integrity.

The honor education program at the Military Academy at West Point, New York, is a continuous, progressive, 4-year program. The overall goal is to foster an internal commitment to ethical standards that is beyond reproach. The honor education program includes 50 hours of instruction, 12 of which take place during cadet basic training, 35 during the academic year, and 3 during cadet field training. The focus of honor education changes as cadets progress through their academy careers. Fourth class honor instruction is intended to give new cadets an appreciation and understanding of the tenets of the honor code and its application to the cadets, both at the academy and while away from the academy. Third class instruction focuses on developing an understanding of the significance of being honorable as a leader of subordinates. Second class honor instruction focuses on the transition from honorable living as a cadet to honorable living as an officer. First class year is a time for reflection and coming to terms with the responsibilities of the office that cadets will enter at graduation. In addition, “X-Y letters,” which are descriptions of actual honor cases and their resolutions, are distributed to cadets.

Honor education at the Naval Academy in Annapolis, Maryland, is in the process of being revised and unified under a new character development program. The character development officer oversees this program and is directly responsible to the Superintendent for educating, training, and providing feedback to students and staff regarding the honor concept. While midshipmen have always received honor training during each of their 4 years at the academy, the curriculum has been largely repetitive from year to year. A group of faculty, administrators, and athletic coaches is currently rewriting the curriculum, which is expected to be implemented during the 1994-95 school year. The program is expected to include 12 hours of instruction per year. In addition to formal instruction, midshipmen receive periodic updates on honor from the Ethics Advisor and “XYZ” letters. These letters are descriptions of actual honor cases with explanations of the outcomes and generalized advice for midshipmen who may be facing similar ethical dilemmas.

Honor education at the Air Force Academy in Colorado Springs, Colorado, is part of a comprehensive, 4-year character development program. The overall goal of honor education is to introduce cadets to the four tenets of the honor code as a minimum standard for their conduct. The honor education program includes 61 hours of instruction, 18 of which occur during basic cadet training and 43 of which take place during the academic year. The honor education program uses a variety of approaches, including lectures, speeches, skits, film clips, case studies, scenarios, and experiential activities. In addition, cadets receive “Cadet X” letters to keep them informed of current honor case proceedings and to explain the outcomes of cases.
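The instruction-hour totals cited above decompose as stated; a quick Python check (figures copied from the preceding paragraphs):

# Military Academy: 50 hours = 12 (cadet basic training) + 35 (academic year)
# + 3 (cadet field training); Air Force Academy: 61 hours = 18 (basic cadet
# training) + 43 (academic year).
military = {"cadet basic training": 12, "academic year": 35, "cadet field training": 3}
air_force = {"basic cadet training": 18, "academic year": 43}
print(sum(military.values()))   # 50
print(sum(air_force.values()))  # 61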
The Congress has long been interested in the academies’ adjudicatory systems. As those who appoint students to attend the academies, Members of Congress are concerned that the students are treated fairly. In addition, congressional attention has been drawn to the honor systems, in particular, due to periodic episodes of large-scale honor violations. During hearings on the academies, Members of Congress have periodically raised questions about the honor systems because of their observations of legally or ethically questionable behavior (such as falsified body counts, inflated readiness reports, and coverups of illegal or embarrassing acts) by military officers. Each of the academies has experienced large-scale cheating episodes. The most recent mass cheating scandal occurred at the Naval Academy in 1993, in which 88 midshipmen were found guilty of honor violations for cheating on an electrical engineering exam. In 1974, seven midshipmen were forced to resign for cheating on a celestial navigation exam after an instructor allowed several midshipmen to examine a copy of the test during a review session and they then shared the information with others. At the Military Academy, 90 cadets were forced out for cheating on examinations in 1951, 42 cadets left after being accused of cheating in 1966, 21 cadets were dismissed for cheating and condoning cheating in 1973, and 134 cadets left for cheating or tolerating cheating on a take-home computer project in 1976. At the Air Force Academy, 109 cadets left in 1965 for stealing and selling exams or tolerating the practice, 46 cadets left in 1967 after sharing test questions, 39 cadets were separated for cheating and tolerating those who did in 1972, 6 cadets resigned after being found to have collaborated on a physics lab exercise in 1976, and 4 cadets left the academy as a result of an economics class honor incident in 1992. Episodes such as these have triggered extensive congressional hearings such as those convened in the House of Representatives in 1967-68 and the Senate in 1976 and 1994. But congressional interest in the academies’ honor systems has not been confined solely to the mass cheating episodes. Another concern has been the academies’ effectiveness at inculcating new officers with a sense of honor and ethics. For example, the Senate Committee on Armed Services became concerned about the amount of ethics-based coursework at the academies because the principal people convicted by juries in the Iran-Contra scandal were all academy graduates. This concern prompted the Committee to ask the Secretary of Defense to report on how the academies were implementing the Committee’s recommendation that they incorporate into their curricula topics such as the constitutional limits on military authority, civilian/military relations, the proper response to illegal orders, and the misuse of power to further personal goals. A primary objective of adjudicatory systems, from the point of view of those subject to the systems, is “fairness.” To try to ensure fairness, adjudicatory systems are typically designed in ways that minimize or structure the discretion of the adjudicator(s) by imposing standardized procedures and mandating certain protections for the accused. The categories we used in this report to describe and compare the various adjudicatory processes are derived from the legal concept of “procedural due process,” which refers to safeguards incorporated into adjudicatory proceedings. 
The concept of due process is embodied in the 5th Amendment of the U.S. Constitution, which provides that no person shall “be deprived of life, liberty, or property, without due process of law.” The concept of procedural due process implies that official governmental action must meet minimum standards of fairness and justice. Since the courts view due process as a concept that should be flexibly applied to fit the needs of a particular context, a body of case law has developed regarding the applicability of procedural due process protections to specific subgroups and particular settings. Due process protections are greater in criminal proceedings than in non-criminal proceedings (such as administrative hearings). Courts have established that students facing expulsion from tax-supported colleges and universities have constitutionally protected interests that require minimal due process protections and established standards for student disciplinary proceedings. While these standards and guidelines have been used in devising due process requirements for academy adjudicatory proceedings, courts have ruled that the government’s interest in assuring the fitness of future military officers permits the academies greater freedom in providing due process protections than is accorded civilian institutions or authorities.

We believe the due process protections and limitations applicable to academy adjudicatory proceedings can be best understood by comparing them with the broadest range of due process protections available in civilian proceedings. In reviewing judicial and administrative proceedings, we identified 12 categories of due process protections commonly used to ensure fairness in hearings. These categories are used in this report to discuss the academy adjudicatory systems and include the rights to

• adequate notice,
• an open hearing,
• an impartial tribunal,
• present argument,
• present and cross-examine witnesses,
• know opposing evidence,
• be represented by counsel,
• have the decision based solely on the evidence presented,
• have a complete record of the proceeding including findings of fact and reasons for the decision,
• an independent appellate review,
• remain silent, and
• have involuntary confessions excluded.

These 12 categories of due process rights include several rights derived from criminal hearings. However, their inclusion does not mean we believe that all these rights should be provided in academy adjudicatory systems. Our purpose is to lay out as complete a set of due process protections as possible to facilitate a comprehensive discussion and comparison of the various adjudicatory systems.

The academies classify their honor and conduct systems as administrative, as opposed to judicial, processes. Over the last 25 years, a number of cadets and midshipmen separated by the academies for honor or conduct offenses have appealed to the federal courts for relief. The courts have generally found that the academies’ adjudicatory systems provide students with the due process protections required by existing law for administrative systems. The former Chairman of the Senate Committee on Armed Services and the former Chairman of its Subcommittee on Manpower and Personnel asked us to review various aspects of student treatment, including the adjudicatory systems, at the three Department of Defense (DOD) service academies.
The objectives of this report are to (1) compare the characteristics of the honor and conduct systems at each academy and describe how the various systems provide common due process protections from the perspective of key participants in the process and (2) describe the attitudes and perceptions of the students toward the honor and conduct systems. A separate report describes the operation of the academic adjudicatory processes at each academy.

We reviewed academy rules and regulations, historical accounts of the academies, studies and reviews related to the operation of the honor and conduct systems, and files and case law on disciplinary and honor cases. We interviewed academy officials, staff, students, and the academy-provided attorneys at each academy who served as legal advisors to students accused of misconduct or honor offenses. We provided DOD with a draft of this report and its comments appear in appendix I. In addition, we administered questionnaires at each of the three academies to samples of cadets and midshipmen in 1990-91 and again in 1994. We found little difference between the responses from these two periods and, therefore, we present only the 1994 data. A detailed description of the surveys and related methodological issues appears in appendix II. We performed our review at the Military Academy, the Naval Academy, and the Air Force Academy from October 1993 to January 1995 in accordance with generally accepted government auditing standards.

The Military Academy, the Naval Academy, and the Air Force Academy operate under somewhat similar honor code adjudicatory systems. While the honor systems at each academy share many similarities, there are also some key differences. Each system provides students with certain common due process protections, while not providing or limiting various other protections.

The honor systems are strongly embedded in the history and traditions of the academies. The exact wording of the honor code or concept is somewhat different at each academy. The Military Academy honor code states “a cadet will not lie, cheat, or steal, nor tolerate those who do.” This honor code can be traced to the officer “code of honor” of the late 1700s and has existed in one form or another since the Academy was established in 1802. However, there was no formal honor system at that time and points of honor were generally settled on a personal basis with the offended party “calling out” the offender. The issue was then settled in some sort of a duel, usually a fistfight. Formalization of the honor system began in the late 1800s, when cadets started organizing “vigilance committees.” The vigilance committee investigated possible honor violations and reported its findings to the cadet chain of command. If a cadet was found guilty, he would be pressured to resign. Although these committees were not officially recognized by Academy authorities, their existence was tolerated and their decisions unofficially sanctioned. In 1922, during the administration of Brigadier General Douglas MacArthur as Superintendent, a formal student honor committee was established, and it codified the existing unwritten rules.

The content of the Military Academy’s honor code has evolved over the years, going through numerous changes in statement, interpretation, and application. The original code dealt only with lying. Later, cheating was added during Sylvanus Thayer’s term as Superintendent (1817-33), although the code reverted to dealing only with lying by 1905.
The prohibition against stealing was originally only a matter of regulations. At some point in the mid-1920s, stealing became part of the honor code, although serious cases were still referred for court-martial. In 1970, the honor code was changed to its current form to add an explicit “non-toleration” clause.

For over a century after its establishment in 1845, the Naval Academy had no official, formalized honor system. Although midshipmen were presumed to be inherently honorable, it was not until 1865 that they were first placed on their honor regarding not violating liberty limits. By the end of the 1800s, the meaning of honor had changed to a code of not reporting fellow classmates for any offense. By the early 1900s, an informal honor code had evolved, and a fistfight would ensue if one’s integrity were questioned. When a 1905 fight resulted in the death of a midshipman, President Theodore Roosevelt ordered that the honor code be abolished. Honor standards were then incorporated into the midshipman regulations, and violations were processed as serious conduct offenses.

When the Naval Academy later developed its honor concept, those who framed it reportedly

“. . . did not want a system that would codify right and wrong, or a system that over the years would become so involved with loopholes and elastic clauses that soon its very principles would degenerate into a set of rights and wrongs that would enable and tempt midshipmen to do wrong yet still be within the codified system’s bounds of right.”

Under this approach,

“The honor concept is not a code of specific requirements or prohibitions, but is violated by the commission or omission of any act contrary to those principles, provided the commission or omission was done with the intent to breach the fundamental concept.”

The 1994 Naval Academy honor concept states, “Midshipmen are persons of integrity: They stand for that which is right.”

Prior to acceptance into the Cadet Wing, all Air Force Academy cadets take the Honor Oath, which states, “We will not lie, steal, or cheat, nor tolerate among us anyone who does. Furthermore, I resolve to do my duty and to live honorably, so help me God.” The Air Force Academy has had an honor code since its inception. A 1954 study group, headed by General Hubert R. Harmon, examined the honor codes and systems in use by military and civilian institutions throughout the country. From that review, the study group proposed a basic code and system that borrowed heavily from the system being used at the Military Academy. This basic code and system were presented to the Cadet Wing on a trial basis in 1955, and the Class of 1959, the first class to enter the Academy, adopted this code as the minimum standard for all cadets in September 1956.

The number of honor cases varies considerably from year to year and from one academy to another. In addition, the proportion of cases that are dropped without going to a board, the conviction rates, and the proportion of convicted students who are expelled also tend to vary.

The Military Academy had 84 honor cases in academic year 1993-94, 141 cases in academic year 1992-93, and 115 cases in academic year 1991-92. Fifty-nine percent of these cases were dropped without going to an honor board. Of the 139 cases that went to a board, about half of the cadets were found guilty. During this 3-year period, 20 cadets (about 28 percent of those found guilty) were separated for honor violations.
The Naval Academy had 80 honor cases in academic year 1993-94, 118 cases (excluding the electrical engineering exam incident, for which the statistics are shown separately) in academic year 1992-93, and 100 cases in academic year 1991-92. Fifty percent of these cases were dropped without going to an honor board. Of the 149 cases that went to a board, a little over half of the midshipmen were found guilty. During this 3-year period, 16 midshipmen (about 20 percent of those found guilty) were separated for honor violations.

The electrical engineering exam incident originally entailed charges against 28 midshipmen, with 4 cases being dropped without a board. Of the 24 cases that went to honor boards, 11 midshipmen were convicted. Five of the convictions were overturned on review by Academy officials, and three midshipmen were separated. When the extent of the cheating was determined to involve much higher numbers of midshipmen than were initially charged, the Navy established a special board made up of three admirals to adjudicate the cases. This board heard a total of 129 cases (including most of the cases that were previously heard by the midshipman honor boards) and found 88 midshipmen (68 percent) guilty. Twenty-six midshipmen (30 percent of those found guilty) were separated.

The Air Force Academy had 231 honor cases in academic year 1993-94, 164 cases in academic year 1992-93, and 154 cases in academic year 1991-92. Twenty-four percent of these cases were dropped without going to an honor board. Of the 371 cases that went to a board, 236 cadets (about 64 percent) were found guilty. During this 3-year period, 18 cadets (about 8 percent of those found guilty) were separated or resigned for honor violations.

The main differences among the honor systems at the three academies are summarized in table 2.1. The honor codes of the Military and Air Force academies have an explicit non-toleration clause. That is, they both include language that makes it an honor offense to allow an honor violation to go unreported. The Naval Academy’s honor concept does not have such a clause. However, midshipmen are not free to ignore honor violations. The Academy’s honor instruction requires that anyone learning of what may be a violation of the honor concept take one of four courses of action: (1) immediately report the evidence to the Brigade Honor Committee, or discuss the incident with the suspected offender and then either (2) report the offender, (3) formally counsel the offender, or (4) take no further action if it appears that no violation was committed. In 1994, the Academy began requiring that a formal counseling sheet be turned in to the Brigade Honor Chair through the Company Honor Representative when the counseling option is chosen. The counseling record is retained until the midshipman’s graduation for use in the character development program should more than one counseling sheet be received. Failure to take one of the required courses of action constitutes a 5000-level conduct offense, the highest nonseparation offense level for a midshipman.

The non-toleration clause is one of the most controversial elements of the honor codes. In 1975, we reported that the Military Academy’s studies indicated that non-toleration was one of the biggest problems for cadets and that toleration generally increased as a cadet progressed through his 4 years.
Proponents of the non-toleration clause see self-policing as essential for making the honor code work effectively and for convincingly making the point that the individual has a duty to society that outweighs the bonds of friendship. Proponents have also stated that they do not see reporting one’s peers as contrary to societal norms when it comes to public service. They cite, as examples, the duty of a lawyer to report a subornation of perjury, the duty of a practicing engineer to report falsification of design data, and the duty of an airline crew member to report a pilot for unauthorized drinking.

Despite these arguments, the non-toleration clause remains controversial. Critics point out that it requires a person to inform on his/her friends, which may conflict with a person’s individual sense of honor and personal integrity. These critics cite the following as support:

• Douglas MacArthur, when disobeying orders to disclose the names of cadets guilty of hazing him, was quoted as saying: “My father and mother have taught me these two immutable principles—never to lie, never to tattle.”

• A federal court has stated, “we cannot fail to note that honorable students do not like to be known as snoopers and informers against their fellows, that it is most unpleasant even when it becomes a duty.”

Beyond the question of the reluctance to inform on one’s peers, there is also some controversy with regard to the effectiveness of the clause. One critic has stated that since the large-scale cheating scandals were not discovered until they had encompassed a fairly large number of students, the clause may not be that effective. Some have also suggested that the non-toleration clause could actually contribute to large-scale cheating scandals because students could be deterred from turning in their peers for fear that those whom they turn in could retaliate by reporting them for past violations of the code. Finally, the non-toleration clause has been criticized as failing to recognize the importance of developing in students the ability to exercise judgment and discretion about what should be done in any given case.

At both the Military and Air Force academies, an honor violation can be reported any time up until the alleged offender graduates and is commissioned. Neither of their honor systems requires that an accuser report a violation within a specified period of time, even though failure to report a violation is considered to be toleration, which is itself an honor violation. Military Academy officials told us that cadets are expected to approach a suspected cadet within 24 hours and that another 24 hours is allowed for the individual to report to the honor representative.

At the Naval Academy, a midshipman who suspects or becomes aware of a possible honor violation must take action within 21 days. The purpose of this reporting deadline is to provide a potential accuser with enough time to approach a possible offender to confirm the violation and decide on an appropriate course of action, yet avoid a situation where someone’s own past violation could be used to pressure him/her into ignoring another person’s violation. Allowing an unlimited time to report is also seen as potentially unfair in that it may require a midshipman to defend his/her actions in an incident that may have faded from the individual’s memory and the memory of other potential witnesses.

Each of the academies provides accused students with legal counsel at no cost.
The attorneys who counsel cadets accused of honor violations at the Military Academy are under the Staff Judge Advocate’s office, which is part of the Superintendent’s chain of command. At the Naval Academy, the legal advisor reports outside of the Academy’s chain of command to the Navy Judge Advocate General. A recent change at the Air Force Academy now has its defense attorneys reporting to the Director, Headquarters, U.S. Air Force, Trial Defense Judiciary.

The placement of student legal counsel within the academies’ chain of command raises the issue of whether their independence may be compromised. This issue was raised in the 1976 cheating scandal at the Military Academy, when several Army lawyers counseling accused cadets complained that Military Academy officials were interfering with their efforts to defend their cadet clients. An investigation conducted by the Army’s Deputy General Counsel and the Chief Judge of the Army Court of Military Appeals concluded that several of the complaints of harassment of defense attorneys were well founded.

In any adjudicatory proceeding in which facts are in dispute, adjudicatory board members can never be completely certain about what happened. Instead, they must develop a belief about what probably happened. Sometimes, they may wrongly conclude either that an innocent person is guilty or that a guilty person is innocent. The relative frequency of these two types of errors is affected by the number or proportion of panel members who must be convinced that a violation occurred and by how convinced they must be. In theory, the more people who must be convinced, and the more certain they must be, the stronger the evidence needed for a conviction; this makes it more difficult to convict anyone, guilty or innocent. Conversely, the fewer the people who must be convinced of guilt, and the more doubt they are allowed to have about their guilty verdict, the less evidence needed to convict; this makes it easier to convict innocent persons as well as the guilty. Therefore, two factors relevant to obtaining convictions are the degree of consensus required within the adjudicatory board and the required standard of proof.

In a civilian criminal trial in most states, a jury must be unanimous with regard to a guilty verdict. In military trials (general courts-martial), two-thirds of the members must agree before a person can be convicted (except for offenses for which the death penalty is mandatory, in which case the verdict must be unanimous). The number of guilty votes needed for an honor conviction varies among the academies. A guilty verdict requires a two-thirds majority (six of nine) at both the Military and Naval academies and a three-fourths majority (six of eight) at the Air Force Academy. At the Military and Naval academies, only students serve on honor hearing boards, while at the Air Force Academy the board consists of seven student members and one field grade officer. Until 1994, the Naval Academy had required only a simple majority (four of seven) for a guilty finding. When we reviewed the academies in the mid-1970s, conviction of an honor offense required the unanimous vote of 12 board members at the Military Academy, 5 votes out of 7 board members at the Naval Academy, and a unanimous vote of an 8-member honor board at the Air Force Academy.
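The effect of these consensus requirements can be illustrated with simple binomial arithmetic. The sketch below is purely illustrative and is not part of any academy’s procedures: it assumes, unrealistically, that each board member reaches a guilty conclusion independently and with the same probability, and the value p = 0.7 is an arbitrary choice rather than a figure from this review.

    from math import comb

    def conviction_probability(members: int, votes_needed: int, p: float) -> float:
        """Chance that at least votes_needed of members vote guilty, when each
        member independently votes guilty with probability p (binomial tail)."""
        return sum(
            comb(members, k) * p**k * (1 - p) ** (members - k)
            for k in range(votes_needed, members + 1)
        )

    # Arbitrary illustration: each member votes guilty with probability 0.7.
    print(conviction_probability(12, 12, 0.7))  # unanimous 12-member board:  ~0.014
    print(conviction_probability(9, 6, 0.7))    # two-thirds of 9 members:    ~0.730
    print(conviction_probability(8, 6, 0.7))    # three-fourths of 8 members: ~0.552

Even under this crude model, moving from a unanimous 12-member board to a two-thirds majority of 9 raises the chance of conviction from roughly 1 percent to roughly 73 percent, making convictions far easier to obtain for the innocent as well as the guilty.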
Today’s less rigorous consensus requirements came into being because academy officials were concerned that too many acquittals resulted from the “not guilty” votes of one or two board members.

The standard of proof determines the degree of certainty necessary in an individual honor board member’s mind before he or she should conclude that a violation occurred. It represents an attempt to instruct adjudicatory panel members concerning the degree of confidence they should have in the correctness of their conclusions. The standard of proof required typically depends on the nature of the case:

• The standard of proof required in civilian criminal cases is proof “beyond a reasonable doubt.” With regard to degree of confidence in such a finding, this standard has been defined as “fully satisfied,” “entirely convinced,” and “satisfied to a moral certainty.”

• The standard of proof ordinarily used in civil cases is “preponderance of the evidence.” This refers to evidence that is of greater weight or more convincing than the evidence that is offered in opposition to it, that is, evidence that as a whole shows that “the fact sought to be proved is more probable than not.”

Use of the less stringent “preponderance of the evidence” standard reduces the risk that a guilty person will avoid conviction, but it simultaneously increases the risk that an innocent person will be wrongly convicted. Use of the more stringent “beyond a reasonable doubt” standard, on the other hand, reduces the risk that an innocent person will be wrongly convicted, while it increases the risk that a guilty person will escape conviction. The “preponderance of the evidence” standard, in setting the two kinds of risks as essentially equal, implicitly assumes that it is no more serious to convict an innocent person than it is to acquit a guilty person. The “beyond a reasonable doubt” standard, by contrast, implicitly assumes it is far worse to convict an innocent person than it is to acquit a guilty one. This latter assumption is consistent with the principle derived from English common law that “it is better that ten guilty persons escape than that one innocent suffer.” The Naval and Military academies require that honor verdicts be based on a “preponderance of the evidence.” The Air Force Academy, however, uses the more stringent “beyond a reasonable doubt” standard.

While there are a number of differences among the academy honor systems, there are also a number of similarities. For example, at each academy,

• students are elected by their peers to serve on the honor committee and administer the honor system,

• investigations of alleged violations are conducted by students,

• students are involved in determining whether an offense has occurred but not in determining what should happen to a convicted student, and

• the service secretary has the final decision on whether a cadet/midshipman will be separated.

Another similarity is that the inferred intent of the accused is the key factor that determines whether an offense has occurred. For example, consider the offense of “lying.” There are two aspects to the offense. One is the question of whether what was said or indicated was, in a factual and objective sense, “true” or “false.” Making a false statement does not, in itself, constitute an honor violation. Rather, the determining factor is the individual’s intent. This leads to the possibilities shown in table 2.2.

If a person is found to have committed an honor violation, academy officials determine what sanction should be applied.
This determination requires a subjective assessment of whether the honor violation was an isolated incident not indicative of the individual’s true character (in which case the individual would likely be retained) or was an indication of an ingrained character flaw (in which case the individual would likely be separated). Historically, the punishment for anyone convicted of an honor offense was almost always separation. Over the last several decades, the authority of academy officials to impose sanctions other than dismissal has increased. Academy officials now consider such factors as how long the student has lived under the honor code/concept, whether the offense was self-reported, whether the individual admitted the offense, and whether there were any previous violations in determining the disposition of a case. Over the 3-year period covering academic years 1991-94, the percentages of those who admitted or were convicted of honor offenses and who were separated from the academies were 28 percent at the Military Academy, 20 percent at the Naval Academy, and 8 percent at the Air Force Academy.

Based on a review of the rules and procedures governing the honor system and the views of academy officials, we assessed whether and how the honor system at each academy provided the various due process elements. Table 2.3 lists the due process elements and summarizes the results of our assessment. In general, the academies are fairly similar with regard to the due process protections their honor systems provide students. Overall, more than half of the due process rights are provided for in full by the academy honor systems, while there are limitations or qualifications on the extent to which the others are provided.

The minimum amount of notice required to be provided to a student being charged with an honor offense varies from 2 working days at the Air Force Academy to 7 days at the Military Academy. If an individual has been charged with an honor offense, each academy relieves that person from most other obligations so that he/she can focus on preparing for his/her defense. We found no indications in any of the cases we reviewed or in any of the interviews with attorneys that students did not have adequate time to prepare their defense. Also, each academy indicated that students can request more time if needed.

The right to an open hearing helps to ensure the fairness of proceedings by subjecting them to outside scrutiny. In the case of honor hearings, the academies recognize an accused’s right to privacy. At all three academies, hearings are closed to the public at large. The Military Academy allows DOD personnel with official interest in the proceeding, cadets, and family to be present during the hearing. The Commandant has the discretion to allow others to observe if their attendance would not have an adverse effect on the fairness and dignity of the hearing or the cadet’s right to privacy. The accused’s attorney may be present during the entire hearing but must sit in the observer section and not represent the accused. At the Naval Academy, the hearing is not open to family or friends. Military and civilian personnel with ties to the Academy may observe hearings at the discretion of the presiding officer. The accused’s attorney is not allowed to attend the hearing, even as an observer. The Air Force Academy allows an accused to elect to have the hearing closed to observers. If closed, the accused may have his/her Air Officer Commanding present.
If the hearing is open, cadets and academy faculty and staff may attend, and the accused’s attorney is allowed to attend the hearing as an observer. Family and nonacademy friends are not allowed to attend.

Each of the academies has procedures aimed at ensuring that honor board members will be unbiased by prior knowledge, a close or antagonistic relationship with either the accused or a key witness, disposition, or belief. One of these procedures involves drawing board members from across the academy. In addition, each academy requires board members to recuse themselves if they feel that they cannot be impartial. While none of the academies allows peremptory challenges, each stated it considers any challenges for cause.

Each of the academies allows an accused to make statements and present evidence. At the Military Academy, a hearing is usually recessed before final argument to allow an accused to prepare a closing statement. The accused may seek the advice of counsel in preparing the statement. At the Air Force Academy, an accused may request a recess to consult with counsel before making a closing statement. A midshipman accused of an honor offense at the Naval Academy has the right to make an oral or written statement before the honor board. However, if an accused makes such a statement, the honor board members may ask questions on the issues raised. Failure to respond to any questions may result in an instruction from the presiding officer that the board not consider the accused’s statement.

Defense attorneys who have assisted accused students stated that the right to present argument is, in effect, somewhat qualified since students are not particularly skilled at presenting argument and are sometimes too emotionally involved to be able to make a cohesive and convincing case. Although the defense attorneys acknowledge that they are allowed to advise an accused in preparing for the hearing and during recesses, they believe their effectiveness is hindered because they cannot hear the testimony and present questions and argument firsthand.

Each of the academies allows an accused student to present and question witnesses, directly or indirectly. Character witnesses, however, are generally not allowed. At the Air Force Academy, the accused’s questions are asked through the Group Honor Chairperson, while at the Military and Naval academies the accused student questions and cross-examines witnesses directly. Defense attorneys raised questions regarding the efficacy of students in cross-examining witnesses. The concerns they raised are that students

• are too closely involved to question witnesses effectively;

• are not skilled at quickly analyzing the answers they receive and asking effective follow-up questions;

• are sometimes intimidated when the witness is a commissioned officer; and

• often try to imitate lawyers they have seen on television and in movies, and they are generally not effective at doing this.

One defense attorney discouraged students from cross-examining witnesses because it usually hurt them more than it helped. Another referred to the right to cross-examine as a “hollow” right since the accused students did it so poorly.

While there is no formal “discovery” process, an accused is generally provided with copies of all statements and access to all evidence gathered in the honor investigation. An accused is free to gather additional evidence and obtain statements.
One of the defense attorneys stated that he had encountered a problem with regard to access to all evidence when several accused students were involved. To protect the privacy of all of the accused students, each of the accused was given access only to the evidence and statements that were judged by academy authorities to be directly relevant to that individual’s case. In addition, some of the evidence that was provided was heavily redacted, with the names and statements of other involved students removed. This raised a concern among the defense attorneys that some potentially exculpatory information may not come to the attention of the accused. Additionally, a concern was raised about delays in getting access to the evidence and official investigation reports.

Each academy informs students accused of honor violations that they have a right to consult legal counsel and, as noted earlier, each provides attorneys to advise students free of charge. In addition, students may engage outside counsel at their own expense.

The academies base their honor system proceedings on an administrative (or nonadversary) model. The nonadversary model involves the decisionmaker (who may be a judge or a board) learning about the case from an investigator, who is supposed to be neutral and present all aspects of the case. The decisionmaker tends to play a more active role in questioning witnesses. The investigator is not expected to act in a partisan manner or as a prosecutor. The defendant is expected to represent himself/herself. The adversary model, on the other hand, involves the decisionmaker learning about the case from the presentations of adversarial advocates, one representing the interests of the plaintiff or prosecution and one representing the interests of the defendant. Each advocate attempts to present facts that are favorable to the side he/she represents and may oppose the other’s presentations through questioning and rebuttal. The decisionmaker generally plays a relatively passive role in the questioning and witness examination processes, which are conducted primarily by the advocates. This is the model used in civil and criminal trials and in courts-martial.

In the academies’ honor hearings, the role of legal counsel is limited to providing advice. Counsel is not allowed to represent or speak for the accused during the honor hearing or any of the reviews that may follow a finding of guilt. The reasons cited by the academies for not allowing legal counsel to speak for the accused include the following:

• there is no prosecutor or government counsel presenting a case to the board;

• students would resent the intrusion of attorneys into their honor system;

• allowing the accused to be represented by counsel would likely lead to pressure for an attorney to represent the government’s interests;

• hearings would become too legalistic and cause lengthy delays and increased processing time; and

• legal discussion of objections, evidence, and case law could confuse or intimidate the board.

Defense attorneys raise the old adage, “He who represents himself has a fool for a client.” They believe that calling the hearings “nonadversarial” is window dressing and that contested hearings are very confrontational. According to one defense attorney, there is no situation more adversarial than when someone’s honor and character are called into question and, given the potentially life-long implications of being found lacking in honor, the accused deserves to be fully represented.
Defense attorneys indicated that, while no one plays the role of prosecutor, the investigator who presents the evidence cannot realistically be considered neutral, since the investigator’s conclusions about what occurred play a major role in determining whether a board is held and since the investigator drafts the official charges. Since it is likely that the investigator believes that a violation has occurred, there is a danger that the investigator might inadvertently communicate that belief to the board.

The honor boards are supposed to consider only the information that is presented at the hearing. There are no formal rules of evidence, and any information considered reasonably relevant to the issues in question will typically be allowed. For the reviews that follow a guilty finding, additional information is considered. Information on the individual’s military, academic, and physical performance and conduct record is included in the review package. Each of the academies allows the individual to review and respond to the additional information. In addition, the individual may provide character reference statements for consideration at this stage.

Each of the academies tape-records honor board hearings. The Naval and Air Force academies use these recordings to provide an individual who is found guilty with a copy of the verbatim transcript. At the Military Academy, an individual is given a nearly verbatim record of the board proceedings. None of the academies provides the individual with the rationale for the board’s decision. Academy officials said that board decisions are the product of the individual votes of the members and that each member may have had different reasons for the way he or she voted. Academy officials also stated that this practice of not requiring board members to explain or justify their individual votes is consistent with the way criminal and civil juries operate.

A finding of not guilty is not reviewable. Each of the academies has a multistep review process that each guilty verdict automatically undergoes. The review processes are intended to identify whether there were any legal shortcomings that may have worked to the disadvantage of the accused. The commandant or superintendent at each academy can overturn a guilty finding based on legal or procedural errors. In addition, the commandant and superintendent at the Military and Air Force academies are required to independently assess the sufficiency of the evidence supporting the guilty finding. While some of the reviewers may meet with the accused and others and conduct an informal hearing, they do not conduct a new hearing. In all cases where the academy recommends separation, the final decision is made by the service secretary. Cases are typically reviewed by the secretary’s legal counsel, and the authority to approve or reject the recommendation is generally delegated to an assistant secretary. The secretariat reviews consist of examining the reported findings as presented by academy officials and a statement from the accused. A new hearing is not conducted.

At the Military Academy, the Staff Judge Advocate conducts a legal review of the case. The case then goes to the Special Assistant for Honor, who reviews it and makes recommendations to the Commandant, who, in turn, reviews the case and makes recommendations to the Superintendent. At the Naval Academy, the Commandant’s legal advisor reviews the case file and advises the Commandant with respect to sufficiency of evidence.
The Commandant then reviews the case file and holds an informal hearing to determine the disposition of the case. If the Commandant recommends separation, the case file is forwarded to the Superintendent through the Superintendent’s Staff Judge Advocate.

A 1994 change to the honor process has limited the scope of the Commandant’s and the Superintendent’s reviews. Prior to the change, the Commandant and the Superintendent were both required to (1) independently weigh the evidence and judge the credibility of the witnesses, (2) determine contested questions of fact, (3) independently determine if the finding of a violation was established by a preponderance of the evidence of record, (4) approve only those findings that were correct in law or fact, and (5) consider matters in extenuation and mitigation. As a result of the change, the roles of the Commandant and the Superintendent are now limited to (1) reviewing the record and disapproving findings that are clearly erroneous, (2) disapproving findings from an honor board during which a procedural violation occurred that cannot subsequently be remedied, and (3) returning a case to the honor board or a new board to consider newly discovered evidence, in addition to the fourth and fifth responsibilities that were retained. Gone is the language requiring a full, independent review of the case.

At the Air Force Academy, the Commandant reviews the case and recommends sanctions. The 10-member Academy Board reviews all cases in which the individual has been recommended for separation.

The academies cite their multilevel review processes as, in effect, constituting independent appellate reviews and point to the fact that verdicts have been overruled at the academy or secretariat levels as proof of independence. However, some defense attorneys question whether the reviews are truly independent. They believe that academy officials are often too deferential to the verdicts of the honor boards for fear of arousing resentment among the student body or charges of favoritism if a guilty verdict is overturned. Our review of some case files found occasional statements in transmittal documents from academy officials in the review chain who, although voicing considerable doubt about a given verdict, indicated they did not want to overturn a student board verdict. However, at each academy we found cases of verdicts being overturned by academy officials.

Each of the academies provides students suspected of an honor violation with the right to remain silent once they have been officially charged. This right is protected during an honor investigation by requiring that accused students be informed of the right to remain silent and acknowledge in writing that they have been so informed. The Naval Academy does not grant the right to remain silent before an individual is officially accused of an honor violation. Consequently, a faculty or staff member or another student can question a suspected student about an incident, and that student would be expected to respond fully, even if it resulted in that student implicating himself/herself in a conduct or honor violation. Officials at the Military and Air Force academies indicated that cadets have no obligation to answer questions from other students or faculty members concerning a suspected honor violation. However, should the cadet elect to respond, it is expected that the response would be truthful.
Air Force Academy officials also stated that a cadet may terminate any interrogation at any point and request legal counsel. Several defense attorneys stated that granting the right to remain silent only after the decision to file charges has been made essentially nullifies that right because the individual may have already been compelled to admit a violation. In addition, a defense attorney pointed out that Article 31 of the Uniform Code of Military Justice (UCMJ) forbids anyone subject to the UCMJ from compelling any person to incriminate himself or to answer any question that may tend to incriminate him. Since an honor violation could conceivably be charged as a violation of military law, that attorney indicated that requiring a person to provide a statement prior to an actual charge could itself be a violation of the UCMJ.

Defense attorneys also noted that one of the common criticisms of the honor systems is that they have been misused as a way of enforcing other academy regulations by requiring that students either admit to violations of rules and policies or risk escalating the offense into one that carries the potential punishment of separation. Sensitive to this criticism, each academy has identified certain kinds of questions, such as “fishing expedition” questions or questions aimed at confirming something that is already apparent (e.g., asking an obviously intoxicated student whether he/she has been drinking), as being inappropriate and trivializing the honor system. However, each academy still requires accused students to answer the questions and to lodge a complaint about the inappropriate question later. A defense attorney indicated that this after-the-fact request for a review did not provide any real protection.

None of the academies grants students an automatic right to have admissions or statements they may have made before being given the right to remain silent excluded from consideration in the hearing. However, the board hearing officer at the Military Academy, the honor board presiding officer at the Naval Academy, and the Group Honor Chairman or Chief of the Honor and Ethics Division at the Air Force Academy can exclude such statements or other evidence if they believe their use would be inappropriate or unfair.

Defense attorneys and others have raised a number of additional criticisms and concerns about the academy honor systems. Among the concerns raised are that honor proceedings lack adequate standards of evidence, honor boards are too dependent upon subjective inferences of intent, students are penalized for conducting a vigorous defense, students have been expelled for trivial acts, honor punishments are sometimes disproportionately severe, and a separate honor system is not needed.

Several defense attorneys mentioned the lack of formal evidentiary procedures as a problem. Because honor boards are considered administrative proceedings, formal rules of evidence are not applied. Defense attorneys said that they have seen hearsay, conjecture, and other forms of questionable evidence presented before honor boards. A related concern involved sufficiency of evidence. In many honor cases, particularly those involving the charge of lying, defense attorneys said there is relatively little “hard evidence” (such as physical or documentary evidence) that board members can directly examine on their own. Instead, much of the evidence is circumstantial or testimonial in nature—especially with regard to the key issue of intent.
They said that this can be particularly problematic in cases involving the word of one person against the word of another, and they expressed concern that students have been found guilty based on nothing other than the testimony of their accuser. Such cases also illustrate the difference between the evidentiary requirements in academy administrative hearings and in military judicial hearings. In a trial for “perjury,” the Manual for Courts-Martial states that no one can be convicted of that offense based solely on the testimony of a single witness. Only the Air Force Academy has a policy that states that an accused cadet who denies the charge cannot be convicted based solely on the uncorroborated testimony of another person.

As noted earlier, the key factor in determining whether an honor violation has occurred is the inference drawn about the intent of the individual. Defense attorneys questioned whether students in their late teens and early 20s have the maturity of judgment and perspective to make such highly subjective judgments where the consequences can taint an individual for life, noting that it seemed ironic that the honor system was virtually the only area of academy life where academy authorities treated students as though they were responsible adults.

Questions have also been raised about the students’ ability to determine who is telling the truth and who is not. Attempts to detect deceit are typically based on the assumption that telling a lie is readable in a person’s involuntary physiological responses. In cases where most, if not all, of the evidence is testimonial and circumstantial in nature, achievement of just outcomes is highly dependent upon the board’s ability to determine who is telling the truth. Ekman and O’Sullivan (1991) recently reviewed the research literature on the ability of people to detect lying. They concluded that 20 years of research in this area indicates that little confidence should be placed in judgments, by laymen or experts, about whether someone is lying or telling the truth. Over all the studies, the average accuracy in detecting deceit has rarely been above 60 percent (with chance being 50 percent), and college students have tended to do worse than others, sometimes choosing less accurately than chance.

One defense attorney stated that accused students were, in effect, penalized for conducting a vigorous defense and trying to prove their innocence. This reportedly occurs because academy officials tend to take the admission of guilt and the expression of willingness to accept the consequences as the primary evidence of remorse and commitment to live honorably. This sets up the ironic situation where, given the same circumstances, a guilty person is more likely to be retained at the academy than an innocent person. The reason is that an innocent person with a high sense of honor would probably be unwilling to falsely admit guilt and claim to have learned a lesson from the incident, and this unwillingness would tend to be interpreted by academy officials as lack of remorse. The guilty person, on the other hand, would probably be more willing to make such an act of contrition, especially if he/she were not really sincere. Our review of the documents in honor case files indicated that inferences about the remorse of the convicted person are an important factor in determining the recommendations of academy officials regarding the disposition of the case.
Also, many of the recommendations in the files stated that the continued insistence that the accused did not intentionally commit an honor violation was an indication of lack of remorse.

One criticism of the honor systems is that they make no distinctions among offenses by degrees of seriousness. Critics point out that students have been found guilty and expelled from the academies for trivial offenses. In a 1974 book, a former West Point psychiatrist cited cadets being forced to resign or expelled for honor offenses such as quibbling over status as a nonvirgin, telling a squad leader that shoes were shined 4 hours before inspection rather than the night before, falsely claiming to own a Jaguar, and falsely telling other cadets his cookies were gone when he still had some left.

One defense attorney noted that some punishments appear disproportionate to the offense, particularly when one looks at punishments across adjudicatory systems. We were referred to the following two Naval Academy cases that were adjudicated in the same year by the same academy officials.

One case involved the honor system. A plebe (freshman) was being questioned while serving the noon meal to the upperclass midshipmen at his table. An upperclassman asked him what he had done over the weekend to improve his physical fitness. Although under no obligation to have engaged in physical conditioning, the plebe answered that he had gone running on Sunday. In response to follow-up questions, he cited where and when he had run. He then asked to discuss it later with the questioner. When his request was denied, he stated that he had answered incorrectly and that he had not been running. He was charged with the honor offense of lying, was found guilty, and was separated from the academy.

The other case involved the conduct system. Several midshipmen went to a Navy athletic contest at another university. They had been drinking prior to the game at the home of one of their classmates. After the game, one of the midshipmen (a sophomore) physically struck a woman in a wheelchair in a university dormitory. He was picked up by campus police and later released into the custody of several classmates. He then went into the local community, where he encountered a 12-year-old girl who was babysitting for her next-door neighbor. He began to curse and verbally abuse the girl, and he struck the girl’s mother when she told him to leave. He then attempted to follow the girl into the house where she was babysitting. He broke into the house by kicking in a plate glass exterior door. Once inside, he broke several windows and was found passed out on the floor by the police and arrested. He was found guilty of five conduct offenses at the highest level of seriousness and a lesser offense of underage drinking. He was retained at the Academy.

While stating that the services have a legitimate interest in the honesty and integrity of the officer corps, a defense attorney stated that it does not necessarily follow that a rigid honor system, imposed only on the academies, is a reasonable way for the services to try to assure the honesty and integrity of the entire corps. He noted that 85 to 90 percent of officers were commissioned through programs that have nothing comparable to the academy honor codes. He noted that the courts used essentially this same line of reasoning in striking down the mandatory chapel attendance requirement that each of the academies used to impose on cadets and midshipmen.
He also stated that, since virtually any significant offense under the honor code was also an offense under the UCMJ, a separate honor system was not needed.

Our 1994 survey of students at the three academies found that they generally saw their honor systems as fair. Determination of what constitutes an honor violation is not as straightforward as the wording of the codes implies. It is often unclear what is or is not an honor violation, since an individual’s intent is the key determining factor. Some students see honor as “black or white,” while others see gradations. Also, there is some confusion regarding whether some acts are honor violations or conduct violations. Some students see the demands of the honor system as conflicting with personal loyalty. Many students at each academy are reluctant to report honor violations. Students also perceive that the honor standard is higher at the academies than it is among active duty officers. Over their 4-year academy careers, student views toward honor appear to become less positive.

Several questions assessed the perceptions of cadets and midshipmen regarding the fairness of the honor system. Overall, academy students saw the system as reasonably fair. However, a considerable proportion saw a need for officer involvement and adherence to due process protections, and most did not believe that all violators should be expelled. In addition, many students indicated some concerns about the honor system being used to enforce regulations and as an easy way to remove someone from the academy. As shown in figure 3.1, more than half of the students at each academy believed that the honor system was administered fairly and impartially. However, a sizeable minority of 23 to 31 percent disagreed.

The students were split concerning whether honor violation punishments were generally appropriate to the offense. From the wording of the question, it is not possible to determine whether those who did not see honor punishments as appropriate believed them to be too harsh or too lenient. However, responses to another question on punishments indicated that most students did not want to see the harshest punishment (dismissal) imposed for every honor violation. When asked whether anyone found to have committed an honor violation should be expelled, only 14 to 29 percent agreed while 51 to 69 percent disagreed. One student commented, “I feel our honor code can not be held higher than the U.S. Constitution. All midshipmen still maintain their American rights.”

While the honor codes/concept appear to be simple and straightforward in their wording, in actual practice, determination of whether or not an honor offense has been committed is much more subjective and greatly depends upon what inferences are drawn concerning the intent of the cadet/midshipman in question. We developed a set of 27 short scenarios to determine the extent of agreement regarding what was or was not considered an honor violation. The scenarios dealt with all three aspects of the honor codes/concept (lying, cheating, and stealing). Some scenarios were derived from actual honor case situations, while others were hypothetical. The scenarios were intentionally focused on “grey area” situations. We also included a couple of scenarios that we knew, based upon advice from academy officials, were not honor violations. The officials at each academy who were responsible for the honor programs assessed each of the scenarios regarding whether it was likely to constitute an honor violation.
The 27 scenarios and the assessments across the three academies are shown in appendix III. Allowing for the absence of sufficient information in some of the scenarios to permit a definitive determination of the individual’s intent, and for the subjectivity inherent in such determinations, there appeared to be at least some differences among the academies regarding whether specific acts were violations of their honor systems. In some cases, a given act (such as taking a joyride in a government vehicle) was considered by academy officials to be a conduct violation rather than an honor offense. Other differences were the result of specific academy policies. For example, the Military Academy has a policy that instructors not give the same exam to different class sessions, which makes it permissible to ask a friend what was on the exam.

Figure 3.4 shows the percentage of students at each academy who indicated that a specific scenario was either definitely or probably an honor violation. As can be seen, there is little agreement among the students at each academy with regard to what does or does not constitute an honor violation. One critic has written:

“The absolute nature of the system makes it difficult for graduates to differentiate between insignificant moral problems and those of great moment, for within their frame of reference it is the form of the situation which matters. Ethical acumen is discouraged where honor and integrity are defined in clear-cut, black-or-white terms. As the cadets are told at their orientation talks, honor is like virginity—you’ve either got it or you don’t.”

Academy students were basically split with regard to whether all honor offenses were equally serious (see fig. 3.5). About 40 percent at each academy indicated that any violation of the honor code/concept was significant, while about 40 percent saw some honor violations as less serious than other violations. Example comments follow.

“There are no ’LITTLE WHITE’ LIES SIR.” (Naval Academy midshipman)

“I think your questions on honor situations contain too many black and white answers. Honor is not clean cut.” (Air Force Academy cadet)

The scenario items offer some clues regarding what kinds of acts are more likely to be seen as violations.

• Deceptive acts involving official reporting or accountability issues (such as falsifying a roster, shading a report, or using a false identification) had a higher percentage of respondents indicating an honor violation than acts that involved only personal issues (e.g., lying about having a date).

• A lie told to benefit the teller or take advantage of someone was more likely to be seen as an honor violation than one told to benefit someone else.

• Scenarios that involved gaining an unfair academic advantage (e.g., getting unauthorized help on a homework assignment) were likely to be seen as honor violations.

• Scenarios involving direct verbalized deception were more likely to be seen as honor violations than were scenarios in which the deception was indirect or implied, but not verbalized. For example, while a cadet/midshipman who is below the legal drinking age and who orders an alcoholic beverage could be seen by some as falsely implying that he/she is entitled to be served, as long as the individual did not verbally claim to be of age or present a false identification, many respondents saw no honor violation.
Many academy students (from 23 percent at the Naval Academy to over 40 percent at the other two academies) saw toleration of an honor offense as much less serious than other offenses (see fig. 3.6). Toleration was more likely to be seen as a less serious offense at the two academies with a non-toleration clause than it was at the Naval Academy, where toleration is a conduct offense, not an honor offense.

“The toleration clause of the honor code is only teaching us to be little tattle tales. Sounds childish, but we are treated like children, so it fits.” (Military Academy cadet)

“The problem with the honor code itself is not the code—it is the way the toleration clause is enforced. There is no leeway for a cadet to confront another cadet about something—counsel them and leave it at that. If a friend of mine makes a dumb mistake—by regulation I have to turn him in. I can’t talk to him and solve the problem from there. Everything has to go to a board. I think that’s wrong and rather than admit I saw or witnessed a violation by counseling the person myself, I’m not going to run the risk of getting a toleration hit and I’m going to pretend I never knew a thing.” (Air Force Academy cadet)

The set of honor scenario items generated extensive write-in comments from the respondents. Most of these comments indicated that the scenarios did not provide enough information to make a definitive assessment of the individual’s intent, and the respondents questioned the validity of any conclusions based on the scenario questions. Typical examples of the comments follow.

“From what we are taught, honor violations are determined upon the intent of the possible violation. From the questions posed in this questionnaire, we have no information or knowledge of their intent. Its almost presuming guilty before being proven innocent. Only some of the questions are like this. Others gear us to the “right” answer by how they are worded.” (Military Academy cadet)

“There are lots of gray areas in several of these questions. The biggest thing I look at before turning someone in is INTENT. Not everything is black and white. Definitely there are actions that are WRONG and should never be covered up but intent is the biggest determinant.” (Naval Academy midshipman)

“The answers I have given throughout the survey often depend on situation, intensity, etc. I hope that is taken in to account when these results are reviewed. Each question lacks the specific context that may make the results more accurate or reliable.” (Air Force Academy cadet)

We agree that many of the scenario items did not include a specific indication of the person’s purpose or intent, but at least half of the items did provide such an indication. We believe, however, that the respondents’ comments serve to confirm the conclusion that the determination of what constitutes an honor violation is not clear-cut. Rather, as noted in the previous chapter and stated in many of the comments, determination of an honor offense depends upon the inference that an observer forms regarding the individual’s intent. For example, while taking a bed sheet from the laundry to make a “spirit” sign has the effect of a theft on the rightful owner of the sheet, if the “intent” of taking the sheet was seen as a prank, then this act would probably not be seen as an honor violation. Since different individuals can draw different inferences from the same set of observed facts, determination of an honor offense is highly subjective.
A second common criticism that respondents cited in their write-in comments about our scenarios was that we were apparently confused regarding what constituted an honor violation versus what merely constituted a violation of regulations. For example, several respondents stated that covering room windows and stuffing a towel under the door to avoid detection for violating the lights-out policy is a conduct offense. They saw this as an attempt to avoid detection, not as an attempt to deceive authorities into believing that the lights were out.

In reviewing the Naval Academy’s serious conduct offenses for the 1990-91 school year, we found more cases involving theft that were dealt with under the conduct system than under the honor system. These cases included

• stealing Logs (the Academy’s humor magazine),

• wrongfully appropriating a motor vehicle,

• stealing by making unauthorized credit card phone calls,

• stealing from the Midshipmen’s Store,

• stealing property of Citadel cadets,

• stealing Navy property,

• stealing $4.96 in merchandise,

• assisting in transporting and concealing stolen stereo equipment,

• stealing a check and cashing it, and

• stealing money and credit cards from other midshipmen.

In addition, two cases of stealing were handled using court-martial procedures. These cases involved stealing

• a watch, a ring, and cash from the hotel room of a retired Army general and his wife, and

• $1,500 worth of stereo equipment from fellow midshipmen.

During that same period, we found six other cases that were dealt with under the honor system. These cases involved stealing

• a fellow midshipman’s weapons project,

• an exam,

• a homework solutions manual,

• money from a wallet,

• a bracelet, and

• 21 library books from the Academy library.

We could find no explanation or criteria for determining whether a given act would be pursued using the honor system, the administrative conduct system, or the military justice system.

The honor codes/concept do not prohibit all unethical acts or practices. Some of the respondents acknowledged this in pointing out deficiencies in various scenario questions. For example, we asked about a situation where an academy student used a paper from a study file and, while not copying any of it verbatim, paraphrased it completely. Several respondents wrote comments that whether this would constitute an honor offense depended upon whether the cadet/midshipman in question had cited the use of the study file paper. For example, one Military Academy cadet wrote, “Some underclass cadets might not know the difference between an ethics violation and an honor violation. You must clarify if receiving help or paraphrasing is documented or not.”

Some respondents acknowledged that the hypothetical students in some of the scenarios behaved inappropriately but said the behavior did not constitute an honor offense. Examples of comments made by Military Academy cadets follow.

“Regarding the cadet paraphrasing the paper (for example), it would only be an honor violation if he failed to document his source. Otherwise, it is just unoriginal thought that deserves a bad grade.”

“Although this is not morally correct, the cadet is not required to return the money.
However, I feel he/she should make a reasonable attempt at finding the owner and returning said money."

"Most of these are ethical dilemmas, not honor questions."

"Many things listed would be wrong, possibly unethical, but not an 'honor' violation."

We asked respondents several questions aimed at identifying how they personally defined honor and whether they saw any conflict between the demands of the honor system and loyalty to friends (see fig. 3.7). Half or more of the students at each academy indicated that duty was the highest form of honor. Also, a sizeable minority of students at each academy indicated that loyalty was the highest form of honor, that the honor system conflicts with the emphasis on being a team player and personal loyalty by requiring students to turn in their fellow students, and that personal loyalty should take preference over rules and regulations.

"Because of the way I was brought up, it is hard to deal with the Honor Code. I was taught that is was okay to cover up things for friends and many things along those lines. I don't think that is dishonesty." (Military Academy cadet)

"Loyalty to your friends is much more important than enforcing military standards. If you are in a war, shined shoes won't save your ass. Friends will." (Air Force Academy cadet)

"I think the main reason why the Honor Concept may not be applied in some circumstances is that it conflicts with other values learned at the Academy. Teamwork, and personal loyalty are two such values. It is hard to put someone in jeopardy, when one is taught not to 'bilge,' or backstab, another midshipmen. It is especially hard for classmates to punish one another, as one often views his/her class as one big team or family." (Naval Academy midshipman)

"I would rather have a loyal friend by my side during combat than one who has passed muster at the Naval Academy as being honorable - we are here to lead men in combat and honor has nothing to do with it." (Naval Academy midshipman)

"Many peoples' morals are eroded over time while they are here and an unfortunate casualty includes their personal honor. This erosion comes from wanting to be part of the group and putting loyalty to them (team, company) over their personal integrity and standing up for what's 'the right thing to do.' If they do break with the group, they're ostracized. I know, I was one of those." (Naval Academy midshipman)

We asked students several questions aimed at assessing their willingness to report honor violations. The proportion of students indicating they would not turn in a close friend for a possible honor violation was 37 percent at the Military Academy, 30 percent at the Air Force Academy, and 29 percent at the Naval Academy (see fig. 3.8). The responses could mean that students are willing to report honor violations only if they are sure that an honor offense has been committed. However, since about one-quarter of the students at each academy indicated they would not turn in a close friend for a clear-cut honor violation, it would appear that many students are simply unwilling to report their friends for honor violations.

To get another assessment of student willingness to report honor violations, we examined the responses of those students at each academy who thought each scenario either probably or definitely was an honor violation.
We also asked how likely it was that they would report someone in their unit for a possible honor violation if they had direct knowledge, after approaching the individual for clarification, that the individual had committed the act described in the scenario. Midshipmen's responses do not necessarily mean that the respondents would take no action, since the Naval Academy honor system provides a "counsel and not report" option for handling an honor offense. However, since the honor codes at the Military and Air Force academies provide no option other than to report honor offenses, these results raise significant questions regarding student support for the non-toleration clause at these academies.

As shown in figure 3.9, the proportion of students indicating they would probably or definitely not report the individual varied significantly from scenario to scenario, again indicating that many students see different degrees of seriousness depending on the nature of the specific offense. Overall, an average of 30 to 34 percent of those students who saw various scenarios as either probably or definitely constituting an honor offense indicated that they probably or definitely would not report a student in their companies or squadrons.

Write-in comments indicated that reluctance to turn in peers for honor offenses stems from a variety of reasons, such as loyalty to one's friends, unwillingness to contribute to the destruction of someone's life, belief that almost everyone has violated the code at some point in their academy career, concern that minor violations can result in disproportionate punishment, and the ostracism that can result from turning in a peer. The following are examples of some of the students' comments.

"I like to think that I'm honorable, but on the same token I cannot envision myself turning in a friend for a violation. I would definitely approach him and discuss it, but I probably wouldn't turn him in." (Naval Academy midshipman)

"The hardest part about the honor code is that turning someone in and ruining their life would be an extremely hard choice to make." (Military Academy cadet)

"We all make good and bad decisions in life. However, to destroy a career over some of the things that happen here probably makes us suffer as a whole in the long run." (Naval Academy midshipman)

"Pertaining to the honor questions, I would never turn in somebody for honor violations because I would not want to be responsible for ending somebody's career. I will always give them a second chance." (Naval Academy midshipman)

"The honor concept really needs to be looked at. If you interview midshipmen, most would tell you that it is strictly adhered to, but it is not. I would seriously doubt anyone graduates without committing some sort of H.O. The H.C. is used as a scare tactic and to keep others under control. Personally I hate it with a passion and would never, ever take part in its proceedings no matter how serious the offense was." (Naval Academy midshipman)

"The problems that many mids face, including myself, when deciding whether or not to report somebody has to do with what exactly the offense was. I would generally try to counsel first, and only as a last resort would I turn somebody in. However, even then I would be hesitant to do so unless it was a serious honor violation. There are many times when technically something is an honor violation but it is almost ridiculous to report." (Naval Academy midshipman)

"I was part of the people who turned in the EE crew.
All I got was hardship, pain, and hatred from everyone in the hall. I tell you it was not worth it." (Naval Academy midshipman)

We also looked at the responses to other questionnaire items to see if those who indicated they would report a violation could be distinguished from those who indicated they would not. Reluctance to report was not related to class, gender, race, or ethnic background. We found that students who were less willing to report violations were more likely to do the following.

• Draw distinctions among honor violations by degree of seriousness (i.e., they tended to indicate that not all honor violations were equally serious; that toleration of an honor offense was less serious than lying, cheating, or stealing; and that not all honor offenders should be expelled).
• Indicate less trust in the fairness of the honor system (i.e., they tended to indicate that the honor system was not administered fairly and impartially, that honor punishments were not appropriate to the offense, and that they did not fully trust the honor investigators).
• Perceive that the honor system was misused (i.e., they tended to see the system used to enforce regulations and as an easy way to remove someone from the academy).
• Place greater value on loyalty to peers (i.e., they tended to see loyalty as the highest form of honor, to indicate that loyalty to friends should take precedence over rules and regulations, and to see conflict between the honor system and the academy's emphasis on being a team player and personal loyalty).

We asked respondents whether they agreed or disagreed with the statement: "The concept of honor is much more stringent at the Academy than it is among active duty officers." The percentage of students agreeing or strongly agreeing was 66 percent at the Air Force Academy, 61 percent at the Military Academy, and 46 percent at the Naval Academy. This could indicate either a cynical view of the degree of honor on active duty or a view among academy students that they are being held to a higher standard. Some of the student comments on this issue were quite strident. Examples such as the following reveal considerable depth of feeling concerning a perceived double standard regarding honor at the academy and honor on active duty.

"We use someone else's words and ideas and its called cheating. The Supe [Academy Superintendent] uses someone else's words and ideas and they call it a great speech. That's how it works in the real world." (Naval Academy midshipman)

"We follow the Code out of fear while we are here. But most of us will fall right into line with all the career protectionism crap when we go on active duty." (Military Academy cadet)

"One need not look further than the Space Command's treatment of the officers who dared to tell the truth about the programs the Air Force wanted, to see that honor doesn't count for much in the real Air Force." (Air Force Academy cadet)

"If we dissemble or quibble, we're gone. If a general does it to a congressional committee to get some new weapon system, he gets promoted. Just another case of 'Do as I say, not as I do'." (Air Force Academy cadet)

We asked respondents about their perception of the frequency of academic cheating (see fig. 3.10). At the Military Academy, 11 percent disagreed with the characterization of cheating as "extremely rare," as did 35 percent at the Naval Academy and 40 percent at the Air Force Academy.
Thus, according to the perceptions among cadets and midshipmen, cheating may be more prevalent than the occasional scandals make it appear. As shown in the figure, about half or more of the students at each academy saw the twin pressures of academics and inadequate time as likely causes of cheating. However, since 54 to 70 percent of cadets/midshipmen indicated they did not have sufficient time to satisfy all the demands made on them and 44 to 65 percent indicated they did not have sufficient time for their academic studies, such pressures appear to be a fact of academy life.

In its December 1993 report on honor at the Naval Academy, the Honor Review Committee of the Naval Academy Board of Visitors stated that midshipmen's attitudes toward honor appeared to become increasingly cynical over their 4 years at the Academy. To see if this observation also held at the other academies, we compared the responses of the Class of 1994 to our surveys conducted in 1990-91 with that class's responses in 1994. Since both the 1990-91 and 1994 administrations involved random samples, we believe each provides a reliable assessment of the prevailing attitudes among the members of that class at those two points in time, even though the same individuals were not necessarily included in both samples.

The data support the observation that attitudes of first class (senior) students at each academy appeared less positive toward the honor system than they were as fourth class (freshman) students. In particular, members of the Class of 1994 became

• less likely to indicate that honor was well respected,
• less willing to report a close friend for either a possible or a clear-cut honor violation, and
• more likely to see honor as more stringent at the academy than among active duty officers.

There was also a tendency for students in the Class of 1994 to see fewer of the honor scenarios as violations in their last year at the academy than they did in their first year. However, according to academy officials, this result could represent the first class (seniors) having gained a more thorough knowledge of the intricacies of the honor system and the elements of proof needed to determine that a violation has occurred, which can result from living under the system for 4 years.

In light of these findings, it is interesting that some elements of the academy honor education programs appear to take hold over the 4 years. Senior students were less likely than they were as freshmen to indicate that

• loyalty was the highest form of honor,
• loyalty should take precedence over rules and regulations, and
• the honor system conflicts with the academy's emphasis on teamwork and personal loyalty.

In some ways, Class of 1994 students at the Military and Air Force academies also appeared to become more "hard-line" regarding honor over their 4 years. For example, the percentage indicating that honor offenders should be expelled and the percentage indicating that there was no such thing as a minor honor violation increased from when they were freshmen.

Codes of conduct at all three academies define acceptable cadet behavior as adherence to civilian laws, UCMJ, and service and academy directives and standards. Students who violate the academies' conduct standards may be subject to an administrative disciplinary hearing, where determinations of fact are made concerning the alleged misconduct. The academies characterize their disciplinary systems as correctional and educational rather than legalistic or punitive.
Their goals are to instill in the cadets and midshipmen the desire to accept full responsibility for their actions and to place loyalty to the service above self-interest or friends and associates. The conduct system at each academy consists of two types of reviews: reviews of specific violations and reviews of overall records for cadets/midshipmen who are deficient in conduct. Each conduct system and related adjudicatory processes are based essentially on similar principles of conduct and character development. However, the systems and processes vary considerably across the three academies.

There are five levels of conduct adjudication at the Military Academy. These are, in increasing order of severity, award of demerits, company boards, regimental boards, hearings involving violations of Academy regulations, and court-martial hearings involving violations of UCMJ. Demerits are awarded for minor infractions of cadet regulations, for example, not shining shoes properly. Cadets are allowed a certain number of demerits per month, depending upon their class. Once this number is exceeded, cadets must serve one punishment tour per demerit in excess of the monthly allowance. Company boards may award punishments of up to 20 demerits and 20 punishment tours for infractions such as being late for class through neglect. A company board is not considered to be a major disciplinary proceeding. A regimental board, convened for such offenses as leaving post without authority, is considered to be a major disciplinary proceeding. A regimental board may award punishments of up to 35 demerits, 100 punishment tours, and 4 months' restriction to specific areas (typically a cadet's own room, the nearest latrine, and the orderly room). If a cadet gets three regimental boards during his/her cadet career, an investigating officer is appointed to review the board proceedings and recommend action to the Superintendent. A hearing for suspected violations of Academy regulations is the most serious level of administrative adjudication and may result in a cadet being separated. Court-martial is reserved for serious offenses that are considered clearly criminal, such as sexual assault and fraud.

At the Naval Academy, conduct offenses are categorized into six levels of seriousness, 1000 through 6000. Levels 1000 through 3000 cover infractions such as failure to have the door open when a room is occupied, unauthorized use of an official telephone, and unauthorized absence of 30 minutes or less. Punishments for these levels are awarded by commissioned officers at the company level. The remaining levels, 4000 through 6000, involve more serious infractions, such as intentional failure to perform a duty, sexual misconduct, and hazing. Punishments for offenses at these levels are determined at the battalion level or higher. Each midshipman is allowed a certain number of cumulative demerits per year or over his/her career, depending upon class. Based on these demerit levels, midshipmen are given a letter grade for their conduct. The three levels of conduct standing are proficient, deficient (exceeding two-thirds of the annual allowable demerit total), and unsatisfactory (exceeding the annual or cumulative demerit allowance). Low conduct grades can result in a hearing to determine if the midshipman should be allowed to continue at the Academy.

At the Air Force Academy, conduct violations are categorized into four levels of seriousness: A, B, C, and D.
For class A conduct offenses, such as a minor uniform appearance violation, the awarding authority for punishment lies within the cadet chain of command. Class B offenses, such as being absent from class, and class C offenses, such as being outside cadet limits without permission, are adjudicated by a cadet's Air Officer Commanding and the group Air Officer Commanding, respectively. Class D offenses, such as drug or alcohol abuse, sexual misconduct, and hazing, are the most serious level of misconduct and may constitute grounds for involuntary dismissal. For violations of UCMJ, the Commandant of Cadets can initiate article 15 or court-martial actions, but most class D cases were normally adjudicated by a Cadet Disciplinary Board. Recommendations for involuntary separation were reviewed by the Military Review Committee, a standing committee of the Academy Board.

In September 1994, the Air Force Academy proposed to the Secretary of the Air Force that the Cadet Disciplinary Board be replaced with a streamlined Military Review Committee hearing. The objective of the proposal was to streamline the process, ensure due process, and align Academy disenrollment procedures for discipline and aptitude more closely with Air Force discharge procedures. The Secretary approved the proposal as of January 1, 1995.

The number of conduct hearings held varied greatly from academy to academy. Because of differences in the ways each academy categorizes and handles conduct offenses, the rates of misconduct hearings and the dispositions of those cases are not comparable.

For academic years 1991 through 1994, the Military Academy had 30 cases in which cadets had been accused of serious misconduct and were investigated under the provisions of Regulations, USMA. About 17 percent of the cases were dropped before hearings. Of the 25 cadets who had formal hearings, 15 (60 percent) were found guilty. Ten (67 percent of those found guilty) were separated.

The Naval Academy had 147 serious (potential separation level) misconduct cases during academic years 1991-92 and 1992-93 and the first semester of 1993-94. Of those cases, 32 (about 22 percent) were dropped before a hearing. Of the 115 midshipmen who had hearings where final dispositions had been made, 84 (about 73 percent) were found guilty. Thirty-two midshipmen (about 38 percent of those found guilty) were separated.

The Air Force Academy had 139 serious (class D and UCMJ) misconduct cases during academic years 1991-94. Of those cases, 8 (about 6 percent) were dropped before a hearing and 7 were still pending a decision at the time of our review. Of the 124 cadets who had hearings where the final dispositions were known, 99 (about 80 percent) were convicted. Twenty-five cadets (about 25 percent of those found guilty) were separated.

The due process protections available to cadets and midshipmen who are charged with serious conduct offenses vary across the academies and are somewhat different from those provided in honor cases (see table 4.1). However, many of the due process concerns raised by defense attorneys with regard to honor hearings are seen by those attorneys as also applicable to administrative conduct hearings (see ch. 2). The minimum amount of notice required to be provided to a student charged with a serious conduct offense varies from 3 days at the Air Force Academy to 7 days at the Military Academy.
Air Force Academy officials told us that while there was no specific minimum notice for serious misconduct offenses, every effort was made to close the Air Force Cadet Wing Form 10 (the form used to report conduct offenses) as soon as possible. Academy officials also said that they notified an accused orally, not in writing, and that an accused could not get additional time to prepare for a hearing because the accused was fully aware of the charges pending against him/her. At the Naval Academy, we were told that while an accused has a minimum of 5 days to prepare for an investigative hearing, as a practical matter an accused tends to have more notice for more serious offenses. Generally, the Conduct Office has 11 working days to generate a formal charge; 18 working days for an investigative hearing; 23 working days for a Commandant's hearing; and 25 working days (5 weeks) for a Commandant's memorandum.

Conduct hearings at the academies are generally not open to the public. The Military Academy limits attendance to DOD personnel, cadets, and family. Other persons may be admitted to observe a proceeding, at the discretion of the Superintendent, if their attendance would not have an adverse effect on the fairness and dignity of the proceeding or the respondent's right of privacy. The Naval and Air Force academies also close their administrative conduct hearings to the public at large. The Air Force Academy permitted observers (usually the future board membership pool) to attend all or part of the hearing at the discretion of the board president, and the cadet chain of command was allowed to sit in during testimony. The Naval Academy does not allow the accused's family or friends to attend the hearing.

One major difference among the academies is the nature of the misconduct tribunal. At the Naval Academy, a single investigating officer collects the evidence, holds the hearing, and makes recommended findings. The Military Academy's regimental board consists of the Regimental Tactical Officer. As mentioned previously, the Air Force Academy had a Cadet Disciplinary Board, which consisted of four officers and three cadets. As of January 1, 1995, the Air Force Academy replaced that board with a two-step process. When a cadet is suspected of serious misconduct, an inquiry may be conducted by the Security Police, the Commander, or an appointed inquiry officer. At the conclusion of the inquiry, the Commander may opt for cadet punishment or may recommend disenrollment. If disenrollment is recommended, the case will be forwarded to the Military Review Committee for fact-finding and a recommendation of disposition.

At both the Military and Naval academies, a cadet/midshipman can challenge the investigating officer for lack of impartiality or failure to qualify as an investigating officer. This challenge will normally occur before the fact-finding portion of the investigation, but may be made during any portion of the investigation when the respondent discovers possible grounds for challenge. At the Air Force Academy, the board president and board members had certain procedures to follow regarding the circumstances under which a member would be considered not to be impartial. The accused could not directly challenge board members for bias, although the accused could present facts demonstrating that a board member was biased. The board president made the determination as to whether a board member would be excused for bias.

At the Military Academy, an accused may make an unsworn opening statement before the fact-finding portion of the investigation begins.
At the conclusion of the hearing, an accused can make an unsworn argument to the investigating officer on the merits of the allegation and about possible recommendations by the investigating officer. The Naval Academy allows accused midshipmen the right to present their own argument. The accused may receive assistance from his/her attorney on overall presentation strategy. An accused at the Air Force Academy did not present argument to the board members. However, an accused had the right to make an opening statement at the hearing. After the opening statement, witnesses were brought in and questioned by the board about their written testimony. An accused could make a closing statement to clarify any testimony or answers to questioning.

At both the Military and Naval academies, an accused cadet/midshipman may call witnesses, present evidence in his/her own behalf, and cross-examine all witnesses. However, at the Naval Academy, if an accused questions witnesses, the accused may be questioned. Also, an accused midshipman needs permission for character witnesses to testify on his/her behalf. During a Cadet Disciplinary Board hearing at the Air Force Academy, only the board members could cross-examine witnesses. An accused cadet could not question opposing witnesses directly, but could submit evidence, names of prospective witnesses, and questions to the board president. The board president had the discretion to call witnesses to testify. At all three academies, an accused is entitled to a copy of all documents and witness statements in the case file. An accused is also given the names and addresses of all witnesses expected to testify at the hearing.

Each academy informs students accused of serious conduct offenses, when dismissal is a possibility, that they have a right to legal counsel. For purposes of consultation, an accused may obtain civilian counsel at his/her own expense, consult with military counsel provided free of charge by the academy, or do both. The right to counsel, however, is limited to advice given outside of the hearing. An accused's counsel may be present as a spectator only at the Military Academy.

The conduct hearings at each academy are supposed to consider only the information that is presented at the hearings. Since these hearings are considered administrative, not judicial, there are no formal rules of evidence, and any information that is considered reasonably relevant to the issues in question will typically be allowed.

None of the academies provides a convicted cadet/midshipman with an explanation of the rationale for the decision and sanctions. A convicted cadet at the Military Academy receives a summarized record of the proceedings and findings, which is authenticated and certified by the investigating officer, a copy of the Staff Judge Advocate's legal review, and the Commandant's recommendation. At the Naval Academy, a convicted midshipman receives a copy of the investigative hearing report and, upon request, a copy of the audio tape of the hearing. However, the accused does not get a copy of the Staff Judge Advocate's recommendation that is forwarded to the Superintendent. At the Air Force Academy, a verbatim transcript was not made of the proceeding. Convicted cadets were given a summary of the hearing and minutes of the case. A cadet did not receive a copy of the recommendation of his/her Air Officer Commanding.

There is no process for a formal, independent appeal of administrative conduct decisions at the Military and Air Force academies.
At the Naval Academy, however, a convicted midshipman may request reconsideration of either a finding of guilt or the award of a particular punishment. Each academy does, however, conduct a legal review through its staff judge advocate. At the Military Academy, the Staff Judge Advocate reviews the record of proceedings to determine whether (1) legal requirements have been complied with, (2) any errors that may have been made had a material effect, (3) the findings of the investigating officer are supported by the requisite proof, and (4) the recommendations are supported by the findings. The Staff Judge Advocate may also make recommendations concerning disposition of the case. At the Naval Academy, the legal review is conducted by the Superintendent's Staff Judge Advocate, after the case has been reviewed by the Commandant. At the Air Force Academy, the Staff Judge Advocate reviewed the case to determine that legal requirements had been met after the Commandant had reviewed the case. A convicted Air Force Academy cadet who had been recommended for separation could elect to have a review by a hearing officer in accordance with Air Force Regulation 53-3, or the Commandant could refer the case to a hearing officer or board of officers. The Academy Board reviewed all cases when cadets were recommended for separation and voted to retain or disenroll the cadet. Cadets who were being considered for disenrollment could submit a written statement with supporting documents to the Academy Board.

At the Military Academy, a cadet may be required to state orally what he or she knows about an incident, subject to his or her rights against self-incrimination. A cadet whose conduct is subject to investigation and cadets who are witnesses may decline to answer questions if their statements would tend to incriminate them. For this purpose, self-incrimination involves a situation in which a cadet could be required to admit to a criminal offense. An article 31 rights warning (the right to remain silent) is required in the case of a suspected criminal offense. A cadet is not afforded the right to remain silent merely because he or she is suspected of committing a delinquency under some conduct regulation.

As soon as Naval Academy officials know they are dealing with a 6000-level offense, they inform the accused that he/she has the right to remain silent. An accused midshipman has the right to remain silent at the investigative hearing without any adverse inference being drawn from exercising that right. If, however, the accused makes a statement at the hearing concerning a particular offense, he or she is expected to answer any questions the investigating officer may have concerning that offense.

At the Air Force Academy, an accused cadet does not have the right to remain silent when confronted by a superior. When an officer or cadet in the chain of command requests a statement from a cadet, the cadet must provide a statement revealing all information about the incident, including names of cadets or other persons involved, unless the conduct violation(s) in question is to be punished under UCMJ. If during questioning or the investigation of an incident a cadet reveals information indicating a possible UCMJ violation, all questioning is to be stopped immediately and the cadet is to be informed of his/her legal rights under UCMJ Article 31 (the right to remain silent). Academy officials also stated that a cadet can terminate any interrogation and request legal counsel at any time.
At all three academies, incriminating statements are considered valid, even if the individual was denied or not advised of the right to remain silent, since conduct hearings are considered administrative proceedings and rules of evidence do not apply. Failure to grant the accused the right to remain silent will not necessarily result in any confession being excluded as evidence.

As noted earlier, the academies consider their honor and conduct systems to be administrative systems. As such, they are essentially similar to nonjudicial disciplinary proceedings for military personnel authorized under UCMJ Article 15. Military law provides for nonjudicial punishment as a means of imposing prompt punishment for minor violations and of correcting, educating, and reforming offenders in an efficient manner without subjecting them to the stigma that a court-martial would entail. A nonjudicial disciplinary proceeding is not a trial, and a determination of guilt does not constitute a court conviction.

Despite the similarities between the objectives of the academy administrative adjudicatory systems and DOD-wide and service objectives regarding nonjudicial disciplinary proceedings, there are several key inconsistencies between the rights given service personnel and the rights accorded academy students under the administrative conduct and honor systems. The inconsistencies, with academy students having less protection, involve the right to be represented by counsel; the right to remain silent; the right to an independent appeal; the maximum length of the punishment of "restriction"; and, in the case of the Military and Air Force academies, the standard of proof used to determine guilt.

One major difference between the academy administrative adjudicatory systems and DOD nonjudicial punishment policy involves the right to have counsel appear with the accused and present the case for the accused. The Manual for Courts-Martial states that before nonjudicial punishment may be imposed, the accused servicemember is entitled to appear personally before the administrative authority imposing the nonjudicial punishment. If the accused requests such a personal appearance, he/she is entitled to be accompanied by a spokesperson, who may be a lawyer. This spokesperson may speak for the accused, but may not necessarily question witnesses except as the nonjudicial punishment authority may allow as a matter of discretion. The presence of a lawyer as the personal representative does not make a nonjudicial hearing a formal adversary proceeding; it only gives the accused someone to advise him/her and to speak on his/her behalf. At the academies, the accused is not entitled to be represented by a spokesperson or lawyer at any administrative conduct or honor hearing.

A second difference concerns the right to remain silent. Rule 301 of the Manual for Courts-Martial makes UCMJ, Article 31 (the right to avoid self-incrimination) expressly applicable to nonjudicial punishment. Under the academy administrative conduct systems, students must answer questions that may incriminate them, except when they are being charged under UCMJ.

A third difference involves the right to an independent appeal. Under article 15, a servicemember who considers the punishment to be unjust or disproportionate to the offense may appeal to the next superior authority. When punishment has been imposed under delegation of a commander's authority to administer nonjudicial punishment, the appeal must be directed to someone other than the commander who delegated the authority.
Since the academy adjudicatory systems operate under a delegation of authority from the Superintendent, only a decision to separate a student, with the required review by the service secretary, would appear to meet this definition of appeal.

A fourth difference involves maximum punishments. UCMJ imposes limitations on article 15 punishments. One of those limitations involves the punishment of "restriction." The maximum restriction allowed by UCMJ for nonjudicial punishment is 60 days, and then only if the punishment is imposed by an officer with general court-martial jurisdiction or a flag rank officer. At each academy, we found that restriction periods of longer than 60 days have been imposed on students under the administrative conduct systems.

The last difference involves the standard of proof. For Naval Academy midshipmen, the standard of proof for administrative conduct hearings is the same at the Academy as it is for nonjudicial punishment in the fleet, "preponderance of the evidence." However, the standard of proof used in Military and Air Force Academy administrative conduct hearings (preponderance of the evidence) is lower than that used for nonjudicial punishment in the active Army and Air Force. The Army has been using the "beyond a reasonable doubt" standard for its nonjudicial punishment cases since 1973. Similarly, Air Force Instruction 51-202, paragraph 3.3, states that the commander must consider whether proof "beyond a reasonable doubt" would be obtainable before initiating action under article 15; if not, it states that action under article 15 is generally not warranted.

In its official comments, DOD stated that it saw no clear basis for concluding that protections provided under the administrative conduct systems must parallel nonjudicial disciplinary proceedings. DOD stated that a nonjudicial disciplinary proceeding is a quasi-judicial process established under the UCMJ and that the rights that accrue to an offender under the UCMJ are quite specific. Dispositions under the academy administrative honor and disciplinary systems, according to DOD, are not subject to the same criteria. However, a defense attorney stated that he questions whether the academies have the authority to substitute an administrative disciplinary system that provides less protection for offenders in lieu of a legislatively mandated disciplinary system that has the same objectives.

We asked questions on our survey pertaining to the conduct rules and disciplinary systems at the academies. Most academy students saw many of the rules and regulations imposed on them as trivial and unrealistic, and they believed that the academies should allow students more freedom. A majority of students at the academies perceived that the handling of conduct offenses, the application of rules and regulations, and the disciplinary actions imposed were not consistent. Students appeared split regarding whether strict enforcement and punishment are appropriate. Finally, the perceptions of the Class of 1994 Air Force Academy cadets changed very little from their freshman year, while those in that class at the Naval and Military academies increasingly came to believe that the rules were unreasonable and that discipline was administered inconsistently.

The students overwhelmingly indicated that the academies have overregulated them.
Most of the students at each academy indicated that (1) many of the academy's student regulations were trivial and unrealistically restrictive, (2) the academies should allow them more freedom, and (3) their peers did not view the conduct rules and regulations as reasonable. (See fig. 5.1.) The following write-in comments also addressed this overregulation issue.

"The problem with the Naval Academy and our sister service academies is that MIDN aren't given enough responsibility. The feeling here is that we are treated like children for too long. . . We have more restrictions on us than most enlisted folks." (Naval Academy midshipman)

"Too many stupid, useless, and inane regulations. Many of them serve no purpose. Many cause unneeded restrictions on lifestyles." (Military Academy cadet)

"Get rid of all the stupid rules . . . Give cadets more responsibility and authority . . . We might actually surprise you with our performance." (Air Force Academy cadet)

Some comments indicated that the rules were delaying or getting in the way of students being able to mature.

"Mids need more freedom from the restrictive rules and regulations so they can make mistakes and learn from them before entering active duty." (Naval Academy midshipman)

"My biggest question since I started here . . . How do midshipmen learn if everything is scheduled and done for them? They are not learning the basics of time management and how to handle their money." (Naval Academy midshipman)

There may also be some connection between the degree of regulation and the widespread unwillingness to report honor violations. As one midshipman wrote, "I think many of the restrictive and the overloaded schedule breed contempt for the system including, unfortunately, the Honor System." Similar observations about the effects of petty regulations have been made before; cadets have been described as having

"become increasingly irritated at the accretion of petty, 'Mickey Mouse' regulations that, from their perspective, served no useful purpose. The result was not only an increase in the violation of regulations but also creation of an atmosphere in which cadets who violated regulations frequently felt that they were doing nothing wrong. The absence of guilt and the parallel conviction that punishment was undeserved combined to sanction violations of the Honor Code (particularly lying) as a means to avoid getting caught."

Three-quarters or more of the students at each academy indicated that conduct offenses were handled differently across the academy. In addition, they perceived that the regulations were not uniformly applied and that students committing the same offense received different disciplinary actions. (See fig. 5.2.)

While about one-third or more of the students believed that strict enforcement was important, about one-third or more disagreed. Similarly, there was little agreement regarding whether disciplinary actions were appropriate to the offense, although from the wording of the question we were unable to determine whether those who believed the punishments were inappropriate saw them as being too harsh or too lenient. There was also considerable disagreement on whether serious conduct offenders should be expelled. (See fig. 5.3.)

At the Military and Naval academies, perceptions of the Class of 1994 regarding the conduct systems tended to change from their freshman year to their senior year, while there was little apparent change in perceptions at the Air Force Academy.
The responses of the Class of 1994 in their senior year at both the Military and Naval academies showed an increase in the proportion of students who viewed themselves as being overregulated with unreasonable rules and regulations and an increase in the proportion who perceived inconsistent and inappropriate disciplinary actions. At the Naval Academy, there was also an increase in the proportion who saw inconsistent handling of conduct offenses and lack of uniformity in the application of rules and regulations.
Pursuant to a congressional request, GAO reviewed the honor and conduct adjudicatory systems at the Department of Defense (DOD) service academies, focusing on: (1) how the systems at each academy compare; (2) the due process protections of these systems; and (3) the students' attitudes and perceptions toward these systems. GAO found that: (1) although the honor systems at the academies have many similarities, there are some prominent differences among them; (2) the honor codes at the Military and Air Force academies include non-toleration clauses that make it an honor offense to know about an honor offense and not report it, while at the Naval Academy failure to act on a suspected honor violation is a conduct offense; (3) differences also exist in the standard of proof that is used in honor hearings, "beyond a reasonable doubt" used at the Air Force Academy versus "a preponderance of the evidence" used at the other academies; (4) academy honor hearings provide students with the majority of the protections typically associated with procedural due process, with some exceptions and limitations; (5) the most prominent limitations exist on the right to representation by counsel and the right to remain silent and avoid self-incrimination; (6) all three academies impose a limitation on the right to counsel by prohibiting military or civilian lawyers from representing cadets and midshipmen in the hearing itself; (7) the right to remain silent is not granted until the individual is actually charged with an offense; (8) responses to a GAO questionnaire indicated that academy students generally saw their honor systems as fair; (9) in some cases, whether an act constitutes an honor violation is not completely clear because the intent of the accused must be inferred from the investigative and hearing processes; (10) there was considerable reluctance among students to report their fellow students for honor violations; (11) in general, the administrative conduct systems at the Military and Naval academies provide several due process protections, with some exceptions and limitations on others; (12) the Cadet Disciplinary Board proceedings at the Air Force Academy, on the other hand, provided fewer due process protections than proceedings at the other two academies; (13) as of January 1, 1995, the Air Force Academy eliminated the Cadet Disciplinary Board and implemented a two-step process aimed at improving timeliness and fairness in dealing with major conduct offenses; (14) while the conduct systems are characterized by academy officials as administrative, rather than judicial, they offer less due process protection than is mandated across DOD for other nonjudicial disciplinary proceedings; (15) a large majority of the students questioned the reasonableness of many of the minor rules and regulations in the conduct codes; and (16) many students perceive academy handling of conduct offenses, the application of rules and regulations, and the imposition of disciplinary actions as inconsistent.
The CH-53K helicopter mission is to provide combat assault transport of heavy weapons, equipment, and supplies from sea to support Marine Corps operations ashore. The CH-53K is a new-build design evolution of the existing CH-53E and is expected to maintain the same shipboard footprint, while providing significant lift, reliability, maintainability, and cost-of-ownership improvements. Its major improvements include upgraded engines, redesigned gearboxes, composite rotor blades and rotor system improvements, fly-by-wire flight controls, a fully integrated glass cockpit, improved cargo handling and capacity, and survivability and force protection enhancements. It is expected to be able to transport external loads totaling 27,000 pounds over a range of 110 nautical miles under high-hot conditions without refueling and to fulfill land- and sea-based heavy-lift requirements.

Sikorsky was awarded a sole-source contract to develop the CH-53K helicopter because, according to the program office, as the developer of the CH-53E, it is the only known qualified source with the ability to design, develop, and produce the required CH-53 variant. The program entered the system development and demonstration phase of the acquisition process in December 2005, and a $3 billion development contract was awarded to Sikorsky in April 2006. Beginning in 2006, the program experienced schedule delays that resulted in cost increases to the development contract. As a result of the schedule delays and cost growth, in 2009 the program office reported a cost and schedule deviation to its original cost and acquisition program baselines to OSD. However, these increases were not significant enough to incur what is commonly referred to as a Nunn-McCurdy breach.

In July 2010, the CH-53K program completed what it deemed a successful critical design review (CDR), signaling that it had a stable design and could begin building developmental test aircraft. The program began building the first of five developmental test aircraft in early 2011, plans to make a decision to enter low-rate initial production (LRIP) in 2015, and plans to achieve an initial operational capability (IOC) in 2018.

Primarily because of decisions to increase the number of aircraft, along with other issues, the CH-53K program has experienced approximately $6.8 billion in cost growth and a nearly 3-year delay from original schedule estimates for delivery of IOC. The program started development before determining how to achieve requirements within program constraints, which led to cost growth and schedule delays and resulted in the program delaying its preliminary design review to September 2008, nearly 3 years after development start. In addition, the program received permission to defer three performance capabilities and relax two technical metrics associated with operating and support costs—which we believe are sound acquisition decisions—and will deliver the initial capability to the warfighter in 2018, almost 3 years later than originally planned. In the end, delayed delivery will require the Marine Corps to rely longer on legacy aircraft that are more costly to operate and maintain, less reliable, and less capable of performing the same mission.

The CH-53K program's estimates of cost, schedule, and quantity have grown significantly since development started in December 2005. The Marine Corps now plans to buy a total of 200 CH-53K helicopters for an estimated $25.5 billion, a 36 percent increase over its original estimates.
The majority of this increase is due to added quantities. The program's schedule delays have increased the development cost estimate by over $1.7 billion, or more than 39 percent. In 2008, the Marine Corps directed the program to increase its total quantity estimate from 156 to 200 aircraft to support an increase in strength from 174,000 to 202,000 Marines. In February 2011, the Secretary of Defense testified that the number of Marine Corps troops may decrease by up to 20,000 Marines beginning in fiscal year 2015. The Marine Corps has assessed the required quantity of aircraft and determined that the requirement for 200 aircraft remains valid despite the proposed manpower decrease.

Primarily as a result of the aircraft quantity increase, the program's procurement cost estimate has also increased by over $5 billion, or 35 percent, from nearly $14.4 billion to over $19.4 billion. The program's average procurement unit cost has increased 4.8 percent. (Because the quantity grew about 28 percent, from 156 to 200 aircraft, most of the procurement cost growth reflects added aircraft rather than higher unit costs: roughly $19.4 billion for 200 aircraft works out to about $97 million per aircraft, compared with about $92 million per aircraft under the original estimate of nearly $14.4 billion for 156.) In addition, the program's schedule delays have delayed its ability to achieve IOC until 2018, nearly 3 years later than originally planned. Table 1 compares the program's original baseline estimates of cost, quantity, and major schedule events to current program estimates.

The program started development before determining how to achieve requirements within program constraints, which led to cost growth and schedule delays. The CH-53K program originally scheduled its preliminary design review for June 2007, a year and a half after the program began development, and later delayed it to September 2008, nearly 3 years after development start. We have reported that performing systems engineering reviews—including a system requirements review, system functional review, and preliminary design review—before a program is initiated and a business case is set is critical to ensuring that a program's requirements are defined and feasible and that the design can meet those requirements within cost, schedule, and other system constraints.

Problems with systems engineering began immediately within the program because the program and Sikorsky disagreed on what systems engineering tasks needed to be accomplished. As a result, the bulk of the program's systems engineering problems related to derived requirements. According to an OSD official, the contractor did not account for total design workload, technical reviews, and development efforts. For example, the program experienced problems defining software specifications for its Avionics Management System. While Marine Corps officials commented that requirements are often difficult to define early in the engineering process and changes are expected during design maturation, they noted that in this case the use of a firm fixed-price contract with the subcontractor made it difficult to facilitate changes. As a result, completing this task took longer than the program had estimated and the program's CDR was delayed.

In another example, the program has a requirement that the CH-53K be transportable by C-5 aircraft. As with the CH-53E, because of its size, the CH-53K's rotor and main gearbox will be removed from the aircraft's body in order to fit within the height requirements of a C-5. The program office interpreted this as requiring that each CH-53K be shipped in its entirety on a single C-5 aircraft, including the removed rotor and gearbox. However, the contractor interpreted the requirement differently and proposed shipping all rotors and main gearboxes in another C-5, separate from the CH-53K body.
Program officials did not accept this interpretation of the requirement and required the contractor to propose a solution in which each CH-53K aircraft would be shipped and arrive in its entirety in a single C-5 aircraft. Marine Corps officials commented that even though this requirement was interpreted differently, it was identified early in the systems engineering process and addressed.

The program office and the contractor underestimated the time it would take to hire their workforces, and delays in awarding subcontracts made it difficult for the program to complete design tasks and maintain its schedule. According to an OSD official, while the program officially began development in December 2005, the development contract was not awarded until 4 months later—in April 2006—delaying development start. According to program officials, budget-driven hiring restrictions for government personnel, which included ceilings on the number of government personnel who could be assigned to the program management office, affected the program's ability to hire its workforce at the time the program was initiated. Similarly, program officials told us that the contractor underestimated the amount of time required to locate, recruit, train, and assign qualified personnel to the program.

The contractor was also late in awarding contracts to its major subcontractors. To mitigate the risk of production cost growth, the contractor established long-term production agreements with its subcontractors. According to program officials, in these agreements subcontractors committed in advance to pricing arrangements for the production of parts and spares. While the contractor used this strategy to reduce program risk, it resulted in delays: the major subcontracts were awarded later than needed to maintain the program's initially planned schedule.

In 2010, the CH-53K program received approval from the Joint Requirements Oversight Council (JROC) to defer three performance capabilities that make up a portion of the Net-Ready key performance parameter, and from the Marine Corps to relax two maintenance-based technical performance metrics—both of which we believe are sound acquisition decisions. The Department of Defense's (DOD) decision to defer three performance capabilities was based on consultation among JROC, Headquarters U.S. Marine Corps, Chief of Naval Operations staff, and the program office in 2008, which prompted the CH-53K program office to review the program's requirements and identify potential areas in which to decrease costs. As part of that review, the program office identified several areas where costs could be deferred without decreasing capability, including three communications-related performance capabilities—Link-16, Variable Message Format, and Mode V software—that constituted part of the Net-Ready key performance parameter. Program officials estimated that this will result in over $100 million in cost deferral. Program officials explained that these software capabilities were not removed from the program's road map, but rather have been deferred until after IOC. Originally, the program's Operational Requirements Document called for all three capabilities to be fully integrated in fiscal year 2015. However, one of the capabilities must now be fully integrated no later than 6 months after IOC, which is currently scheduled to occur in 2018, and the other two capabilities must be fully integrated within 2 years of IOC.
Program officials stated that deferment of these capabilities will not affect aircraft interoperability.

Two technical performance metrics were changed because, according to program officials, meeting the original maintenance-based technical performance requirements for Mean Time To Repair and Mean Corrective Maintenance Time for Operational Mission Failures was not cost effective. For example, the CH-53K's rotor blades are designed to have a two-piece design featuring a removable tip. However, the curing time to adhere the blade tip to the blade was driving up the time it would take to remove and replace the blade tip. The contractor proposed meeting the original requirement by moving to a one-piece blade; however, this would increase the program's operating and support costs by approximately $99 per flight hour and increase the logistical footprint of the helicopter. As a result, the program sought and received approval to relax the performance metric associated with replacing the blade tip instead of investing the financial resources necessary to obtain the original metrics or moving to a one-piece blade.

Because of a nearly 3-year delay in initial delivery of the CH-53K, program officials estimated that it will cost approximately $927 million more to continue to maintain the CH-53E legacy system. Initial delivery of the CH-53K to the warfighter is currently scheduled for 2018, a delay of almost 3 years that will require the Marine Corps to rely on legacy aircraft that are less reliable, more costly to operate and maintain, and less capable of performing the same mission. This delay, coupled with an increased demand for the CH-53E in foreign theaters, led the Marine Corps to pull all available assets from retirement either for reentry into service or for use as spare parts.

Continued reliance on the CH-53E will be costly, as it is one of the most expensive helicopters to maintain in the Marine Corps's fleet. For example, the drive train of the CH-53E costs approximately $3,000 per flight hour to maintain. In contrast, the program estimates that the drive train for the CH-53K—its largest dynamic system—will cost only $1,000 per flight hour to maintain. In addition, the CH-53K is expected to have improved reliability and maintainability over the CH-53E legacy system. For example, the CH-53K's engine has 60 percent fewer parts than that of the CH-53E, which the program office believes will result in a more reliable engine that is easier and less costly to maintain. In addition, the CH-53K incorporates an aluminum gearbox casing, which will decrease the need for replacement resulting from corrosion.

Delayed delivery of the CH-53K will also affect the ability of the Marine Corps to carry out future missions that cannot be performed by the CH-53E. For example, the CH-53E can carry 15,000 pounds internally compared to 30,000 pounds for the CH-53K. While the CH-53K is expected to carry up to 27,000 pounds externally for 110 nautical miles at 91.5°F at an altitude of 3,000 feet—a Navy operational requirement for high-hot conditions—the CH-53E can only carry just over 8,000 pounds under the same conditions. The increased lift capability of the CH-53K during these conditions may enable it to carry the current and incoming inventory of up-armored vehicles, which are much heavier than their less-armored predecessors.
For example, the up-armoring of wheeled military vehicles, such as the High Mobility Multi-purpose Wheeled Vehicle, and the introduction of the Joint Light Tactical Vehicle have resulted in a military inventory with weights that are beyond the weight limits of the CH-53E. According to program officials, without the addition of the CH-53K, the Marine Corps will soon no longer be able to carry and deliver the military’s new inventory of wheeled vehicles in high-hot conditions. Figure 1 compares the capabilities and characteristics of the CH-53E and CH-53K. The combination of the increase in the quantity of heavy-lift helicopters required to support Marine troop levels and the delayed delivery of the CH-53K to the warfighter has created a gap of nearly 50 heavy-lift helicopters (nearly 25 percent of the requirement) over the next 7 years and represents an operational risk to the warfighter. The Marine Corps stated that it is accepting significant risk with this heavy-lift shortfall and will continue to operate under the gap until the CH-53K becomes available. Figure 2, which shows the required aircraft quantities, the current CH-53 series helicopter force structure, and planned CH-53K production, illustrates the operational risk. The CH-53K program has made progress addressing the difficulties it faced early in system development. The program held CDR in July 2010, demonstrating that it has the potential to move forward successfully. The program has also adopted mitigation strategies to address future program risk. The program’s new strategy, as outlined in the President’s fiscal year 2012 budget, lengthens the development schedule, increases development funding, and delays the production decision by 1 year. However, while the program’s new acquisition strategy increases development time to mitigate risk, some testing and production activities remain concurrent, which could result in costly retrofits if problems are discovered during testing. The CH-53K program has taken several steps to address some of the shortfalls that the program experienced early in development. For example, the program has addressed its cost growth by revising its cost estimate to align with the current schedule. The program’s 2011 budget request fully funded the development program to its revised estimate. The program addressed its early staffing issues by increasing staffing levels beginning in January 2009 and maintaining those levels through completion of CDR. In addition, the program delayed technical reviews until it was prepared to move forward, thereby becoming more of an event-driven than a schedule-driven program. An event-driven approach makes it more likely that products will meet established cost, schedule, and performance baselines. For instance, the program delayed CDR—a vehicle for making the determination that a product’s design is stable and capable of meeting its performance requirements—until all subsystem design reviews were held and more than 90 percent of engineering designs had been released. In July 2010, the program completed system integration—a period when individual components of a system are brought together—culminating in the program’s CDR.
With completion of CDR, the program has demonstrated that the CH-53K design is stable—an indication that it is appropriate to proceed into fabrication, demonstration, and testing, and that the program can be expected to meet stated performance requirements within cost and schedule. At the time CDR was held, the program had released 93 percent of its engineering drawings, exceeding the best practice standard for the completion of system integration. According to best practices, a high percentage of design drawings—at least 90 percent—should be completed and released to manufacturing at CDR. Additionally, the program office stated that all 29 major subsystem design reviews were held prior to the start of CDR, and that coded software delivery was ahead of schedule. In the end, the Technical Review Board, the approving authority for CDR, determined that the program was ready to transition to system demonstration—a period when the system as a whole demonstrates its reliability as well as its ability to work in the intended environment—and identified seven action items, none of which were determined by the program office to be critical. The program has also adopted several mitigation strategies to address future program risk. The program has established weight improvement plans to address risks associated with any potential weight increases and has been able to locate areas where weight reductions can be made. For example, the program worked with the subcontractor responsible for designing and manufacturing the floor of the CH-53K to find areas to reduce weight. The program has also created several working groups to reduce risk to the overall capabilities of the CH-53K. For example, the Capabilities Integrated Product Team, which meets on a monthly basis, was developed to focus on risk relating to the program’s requirements. This team comprises officials from the program office; Headquarters U.S. Marine Corps; Marine Corps Combat Development Command; Chief of Naval Operations staff; the Navy’s Commander, Operational Test and Evaluation Force, staff; the operational testing squadron; and the developmental testing squadron. Its members work with the program office to identify, clarify, and resolve mission-related issues and program requirements. In addition, the program holds integrating design reviews every 6 months, freezing the working design in order to hold a system-level review and manage design risk. The CH-53K program’s schedule contains overlap, or concurrency, between testing and production. The stated rationale for concurrency is to field systems more quickly, to fulfill an urgent need, to avoid technology obsolescence, to maintain an efficient industrial development and production workforce, or a combination of these. While some concurrency may be beneficial in efficiently transitioning from development to production, there is also risk in concurrency. Any changes in design and manufacturing that require modifications to delivered aircraft or to tooling and manufacturing processes would result in increased costs and delays in getting capabilities to the warfighter. In the past, we have reported a number of examples of the adverse consequences of concurrent testing and delivery of systems, showing how concurrency can place significant investment at risk and increase the chances that costly design changes will surface during later testing. The CH-53K program’s original schedule contained concurrency between testing and aircraft production.
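In schedule terms, concurrency is simply the overlap between the developmental test window and the production window. The short Python sketch below makes that notion concrete; the date ranges are invented for illustration only and are not the program's actual milestones.

    from datetime import date

    def overlap_months(a_start, a_end, b_start, b_end):
        """Approximate overlap, in months, between two date ranges."""
        start = max(a_start, b_start)
        end = min(a_end, b_end)
        if start >= end:
            return 0.0
        return (end - start).days / 30.44  # average days per month

    # Hypothetical windows for illustration only -- not program-of-record dates.
    dev_test = (date(2014, 1, 1), date(2017, 6, 30))   # developmental testing
    lrip = (date(2015, 1, 1), date(2017, 12, 31))      # LRIP Lots 1 and 2

    print(f"Test/production concurrency: about "
          f"{overlap_months(*dev_test, *lrip):.0f} months")

Under these assumed dates, roughly 30 months of production would occur while developmental testing was still under way—the period during which test discoveries could force retrofits of already-built aircraft.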
In 2009, reflecting the early difficulties experienced in development, the CH-53K program revised its cost and schedule estimates. This revised schedule would have reduced the program’s level of concurrency. For example, while the original program schedule called for developmental testing to be ongoing during the production of all three lots of LRIP, the schedule resulting from the 2009 adjustments called for developmental testing to be ongoing during only the first two lots of LRIP. However, the program had concerns that this schedule’s allowance of approximately 2 years between final delivery of developmental test aircraft and the beginning of LRIP would create a production gap that could be costly. As a result, the program office was considering accelerating procurement funds in an effort to begin production 1 year earlier than planned and minimize breaks in production. This option was foreclosed, however, by a funding cut that the program sustained in the process of formulating the President’s fiscal year 2012 budget. In February 2011, the President’s fiscal year 2012 budget was released and outlined changes to the program’s budget and schedule. According to a program official, the program’s requested budget was reduced by approximately $30.5 million in fiscal year 2012 (and a total of $94.6 million between fiscal year 2010 and fiscal year 2015)—funds to be applied to other DOD priorities. The President’s budget reports that while the CH-53K program was fully funded to the OSD Cost Assessment and Program Evaluation Office estimate in the President’s fiscal year 2011 budget, the funding adjustments made to the program in the President’s fiscal year 2012 budget would result in a net increase of $69 million to the development cost estimate and a schedule delay of approximately 7 months. The new schedule results in later delivery of developmental test aircraft and delays some testing. As a result, according to program officials, the production gap issue has been addressed. Another result, though, is that the program’s new schedule maintains a level of concurrency similar to that of the original schedule. Program officials have conceded that concurrency exists within their program, but state that this concurrency will reduce the operational risk of further delaying IOC. In commenting on the risks of concurrency, Marine Corps officials noted that the time allotted prior to the start of production and the small quantity of LRIP planned reduce the risks of costly retrofits resulting from issues identified during developmental test. Figure 3 compares the CH-53K program’s original and new schedules. [Figure 3 data: under both schedules, low-rate initial production consists of LRIP Lot 1 (6 aircraft), LRIP Lot 2 (9), and LRIP Lot 3 (14); the original schedule showed full rate production (FRP) Lots 4-9 totaling 127 aircraft, while the new schedule shows FRP of 171 aircraft, consistent with the quantity increase from 156 to 200.] As the CH-53K program moves forward, it is important that further cost growth and schedule delays are mitigated. The CH-53K program’s new acquisition strategy addresses previous programmatic issues that led to early development cost growth and schedule delays. DOD provided technical comments on the information in this report, which we incorporated as appropriate, but declined to provide additional comments. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretary of the Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix II. To determine how the CH-53K’s estimates of cost, schedule, and quantity have changed since the program began development, we received briefings from program and contractor officials and reviewed budget documents, annual Selected Acquisition Reports, monthly status reports, performance indicators, and other data. We compared reported progress with the program of record and previous years’ data, identified changes in cost and schedule, and obtained officials’ reasons for these changes. We interviewed officials from the CH-53K program and the Department of Defense (DOD) to obtain their views on progress, ongoing concerns, and actions taken to address them. To identify the CH-53K’s current acquisition strategy and determine how this strategy will meet current program targets as well as the warfighter’s needs, we reviewed the program’s acquisition schedule and other program documents, such as Selected Acquisition Reports and test plans. We analyzed the retirement schedule of the legacy CH-53E fleet and discussed the impact of these retirements on the Marine Corps’s heavy-lift requirement with appropriate officials. To determine how the program plans to meet its new targets while still meeting the needs of the warfighter, we also reviewed the program’s revised acquisition plans as documented by the program office. In performing our work, we obtained documents, data, and other information and met with CH-53K program officials at Patuxent River, Maryland, and the prime contractor, Sikorsky Aircraft Corporation, at Stratford, Connecticut. We met with officials from Headquarters Marine Corps, the Office of the Chief of Naval Operations, and the Office of the Secretary of Defense’s Cost Assessment and Program Evaluation Office at the Pentagon, Arlington, Virginia. We interviewed officials from the Office of Director of Defense Research and Engineering and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Office of Developmental Testing and Evaluation, in Arlington, Virginia. We also met with officials from the Defense Contract Management Agency who were responsible for the CH-53K program at Stratford, Connecticut. We drew on prior GAO work related to acquisition best practices and reviewed analyses and assessments done by DOD. To assess the reliability of DOD’s cost, schedule, and performance data for the CH-53K program, we talked with knowledgeable agency officials about the processes and practices used to generate the data. We determined that the data we used were sufficiently reliable for the purpose of this report. We conducted this performance audit from February 2010 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, the following staff members made key contributions to this report: Bruce Thomas, Assistant Director; Noah Bleicher; Marvin Bonner; Laura Greifner; Laura Jezewski; and Robert Miller.
The United States Marine Corps is facing a critical shortage of heavy-lift aircraft. In addition, current weapon systems are heavier than their predecessors, further challenging the Marine Corps's current CH-53E heavy-lift helicopters. To address the emerging heavy-lift requirements, the Marine Corps initiated the CH-53K Heavy Lift Replacement program, which has experienced significant cost increases and schedule delays since entering development in 2005. This report (1) determines how the CH-53K's estimates of cost, schedule, and quantity have changed since the program began development and the impact of these changes and (2) determines how the CH-53K's current acquisition strategy will meet current program targets as well as the warfighter's needs. To address these objectives, GAO analyzed the program's budget, schedules, acquisition reports, and other documents and interviewed officials from the program office, the prime contractor's office, the Marine Corps, the Defense Contract Management Agency, and the Office of the Secretary of Defense. The CH-53K helicopter's mission is to provide combat assault transport of heavy weapons, equipment, and supplies from sea to support Marine Corps operations ashore. Since the program began development in December 2005, its total cost estimate has grown by almost $6.8 billion, from nearly $18.8 billion to over $25.5 billion, as a result of a Marine Corps-directed quantity increase from 156 to 200 aircraft and schedule delays. The majority of the program's total cost growth is due to added quantities. Development cost growth and schedule delays resulted from beginning development before determining how to achieve requirements within program constraints, from miscommunication between the program office and the prime contractor about systems engineering tasks, and from late staffing by both the program office and the contractor. The program has also deferred three performance capabilities and relaxed two maintenance-based technical performance metrics in an effort to defer costs. Delivery of the CH-53K to the warfighter is currently scheduled for 2018--a delay of almost 3 years. The CH-53K program has made progress addressing the difficulties it faced early in system development. It held a successful critical design review in July 2010 and has adopted mitigation strategies to address future program risk. The program's new strategy, as outlined in the President's fiscal year 2012 budget, lengthens the development schedule, increases development funding, and delays the production decision. However, adjustments made to the budget submitted to Congress reduce the program's fiscal year 2012 development funding by $30.5 million (and by a total of $94.6 million between fiscal years 2010 and 2015). According to information contained in the budget, this reduction would result in additional schedule delays to the program of approximately 7 months and a net increase of $69 million to the total development cost estimate. The CH-53K program's new acquisition strategy addresses previous programmatic issues that led to early development cost growth and schedule delays.
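As a rough cross-check of the figures reported above, the arithmetic below (a minimal Python sketch) shows that the 44 added aircraft, priced at the original per-unit rate, account for roughly $5 billion of the nearly $6.8 billion in growth—consistent with the statement that the majority of the cost growth is due to added quantities. The calculation ignores inflation, learning curves, and the development/procurement mix, and the annual flight-hour figure used for the drive-train comparison is an assumption, not a reported value.

    # Reported figures: total estimate grew from ~$18.8B to ~$25.5B while
    # quantities grew from 156 to 200 aircraft.
    orig_cost_b, new_cost_b = 18.8, 25.5
    orig_qty, new_qty = 156, 200

    growth_b = new_cost_b - orig_cost_b                      # ~6.7
    per_unit_m = orig_cost_b * 1000 / orig_qty               # ~120.5 ($M each)
    added_qty_b = (new_qty - orig_qty) * per_unit_m / 1000   # ~5.3

    print(f"Total growth: ~${growth_b:.1f}B; "
          f"added quantities alone: ~${added_qty_b:.1f}B")

    # Drive-train maintenance: $3,000/flight hour (CH-53E) vs. $1,000 (CH-53K).
    # Annual flight hours per aircraft are an assumed value for illustration.
    assumed_hours = 300
    print(f"Assumed drive-train saving: ~${(3000 - 1000) * assumed_hours:,} "
          f"per aircraft per year")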
Over the past year, VA has clearly benefited from the commitment of the secretary and other top leaders to addressing critical weaknesses in the department’s management of information technology. As a result of their leadership, VA has made important strides in raising corporate awareness of the department’s needs and in articulating and acting upon a vision for achieving improvements in key areas of IT performance. Despite this progress, however, many aspects of VA’s IT environment remain troublesome, and our message today reflects concerns that we have long viewed as significant impediments to the department’s effective use of IT to achieve optimal agency performance. As such, VA has more work to accomplish before it can point to real improvement in overall program performance and be assured that it has a stable, reliable, and modernized systems environment to effectively support critical agency decisionmaking and operations. In an area of growing importance, VA has taken key steps in laying the groundwork for an integrated, departmentwide enterprise architecture—a blueprint for evolving its information systems and developing new systems that optimize their mission value. Crucial executive support has been established and the department has put in place a strategy to define products and processes that are critical to its development. VA is also currently recruiting a chief architect to assist in implementing and managing the enterprise architecture. Significant work, nonetheless, is still required before the department will have a functioning enterprise architecture in place for acquiring and utilizing information systems across VA in a cost-effective and efficient manner. VA’s success in developing, implementing, and using a complete and enforceable enterprise architecture hinges upon continued attention to putting in place a sound program management structure—including a permanent chief architect and an established program office—to facilitate, manage, and advance this effort and to be held accountable for its success. In addition, VA must continue to take steps to identify and collect crucial information describing essential business functions, information flows, strategic plans, and requirements, and produce a well-thought-out sequencing plan that considers management and organizational changes and business goals and operations. Success also hinges on having proactive management focused on ensuring that investment management and systems development and acquisition are closely linked with the enterprise architecture processes. This integration must be done in a manner that best suits the agency’s particular organization, culture, and internal management practices. Information security management is another area in which VA has taken important steps to strengthen its department-level program, including mandating information security performance standards and, thus, greater management accountability for senior executives. It has also updated security policies, procedures, and standards to guide the implementation of critical security measures. However, VA continues to report pervasive and serious information security weaknesses. Thus far, its actions toward establishing a comprehensive computer security management program have not been sufficient to ensure that the department can protect its computer systems, networks, and sensitive veterans health care and benefits data from unnecessary exposure to vulnerabilities and risks. 
Moreover, VA’s current organizational structure does not ensure that the cyber security officer can effectively oversee and enforce compliance with security policies and procedures that are being implemented throughout the department. Beyond these two key areas of IT management concern, VA and its administrations also have continued to pursue several critical information systems investments that have consumed substantial time and resources, with mixed success. For example, after about 16 years and at least $335 million spent on modernization, the Veterans Benefits Administration (VBA) is still far from fielding a modernized system to replace its aging benefits delivery network, which is needed to more effectively support its compensation and pension and other vital benefits payment processes. VBA has not adequately addressed several longstanding concerns related to project management, requirements development, and testing—all of which raise uncertainty about whether the ongoing veterans service network (VETSNET) project will deliver a cost-effective solution with measurable and specific program-related benefits. Conversely, the Veterans Health Administration’s (VHA) managers and clinicians have made good progress in expanding their use of the decision support system (DSS) to facilitate clinical and financial decisionmaking. The use of DSS data for the fiscal year 2002 resource allocation process and a requirement that veterans integrated service network directors better account for their use of this system have both raised awareness of and promoted its utility among VHA facilities. Moreover, VHA has begun steps to further improve the accuracy and timeliness of DSS data. As VHA-wide usage of DSS progresses, sustained top management attention will be crucial to ensuring the continued success of this system. Lastly, VA has achieved limited progress in its joint efforts with the Department of Defense and Indian Health Service to create an interface for sharing data in their health information systems, as part of the government computer-based patient record initiative. Strategies for implementing the project continue to be revised, its scope has been substantially narrowed, and it continues to operate without clear lines of authority or comprehensive, coordinated plans. Consequently, the future success of this project remains uncertain, raising questions as to whether it will ever fully achieve its original objective of allowing health care professionals to share clinical information via a comprehensive, lifelong medical record. One of VA’s most essential yet challenging undertakings has been developing and implementing an enterprise architecture to guide the department’s IT efforts. An enterprise architecture—a blueprint for systematically and completely defining an organization’s current (baseline) operational and technology environment and a roadmap toward the desired (target) state—is an essential tool for effectively and efficiently engineering business processes, implementing supporting systems, and helping those systems evolve. Office of Management and Budget (OMB) guidelines require VA and other federal agencies to develop and implement enterprise architectures to provide a framework for evolving or maintaining existing and planned IT.
Guidance issued last year by the Federal CIO Council in collaboration with us further emphasizes the importance of enterprise architectures in evolving information systems, developing new systems, and inserting new technologies that optimize an organization’s mission value. As this subcommittee is well aware, VA has been attempting to develop an enterprise architecture for several years, but without much overall success. Our prior reports and testimony have documented how VA’s previous attempts fell short of their intended purpose and did not reflect an approach that would result in an integrated, departmentwide blueprint. For example, VA’s earlier strategy had called for each of its administrations—VBA, VHA, and the National Cemetery Administration—to develop its own logical architecture, which likely would not have resulted in the department’s having an integrated architecture, but rather in at least three separate, unrelated architectures. In addition, VA’s common business lines had not been adequately involved in prior attempts to develop an architecture. In July 1998 and August 2000, respectively, we recommended that VA take actions to develop a detailed implementation plan with milestones for completing an integrated, departmentwide architecture, and that it include VA business owners in its architecture development. After assuming office last year, VA’s secretary vowed to take action to address the inadequacies in the department’s approach. Over the past year, VA has made progress in taking specific actions to lay the groundwork for its enterprise architecture. Its most recent set of activities closely adheres to the Federal CIO Council’s suggested guidance on managing the enterprise architecture program. By effectively implementing an enterprise architecture, VA stands to realize a number of important and tangible benefits. For example, an enterprise architecture can capture facts about the department’s mission, functions, and business foundation in an understandable manner to promote better planning and decisionmaking; improve communication among the department’s business organizations and IT organizations through a standardized vocabulary; and provide architectural views that help communicate the complexity of VA’s large systems and facilitate management of its extensive, complex environments. Overall, effective implementation of an enterprise architecture can facilitate VA’s IT management by serving to inform, guide, and constrain the decisions being made for the department, thereby decreasing the risk of buying and building systems that are duplicative, incompatible, and unnecessarily costly to maintain and interface. As depicted in figure 1, developing, implementing, and maintaining an enterprise architecture is a dynamic, iterative process of changing the enterprise over time by incorporating new business processes, new technology, and new capabilities. Depending on the size of the agency’s operations and the complexity of its environment, enterprise architecture development and implementation require sustained attention to process management and agency action over an extended period of time. Moreover, once implemented, the enterprise architecture requires regular upkeep and maintenance to ensure that it is kept current and accurate. Periodic reassessments are necessary to ensure that the enterprise architecture remains aligned with the department’s strategic mission and priorities, changing business practices, funding profiles, and technology innovation.
A prerequisite to development of the enterprise architecture is sustained sponsorship and strong commitment, achieved through buy-in of the agency head, leadership of the CIO, and early designation of a chief architect. Further, the establishment of an architectural team is necessary to define an agency-specific architectural approach and process. The cycle for completing an enterprise architecture highlights the need for constant monitoring and oversight of architectural activities and progress, and for architecture development teams to work closely with agency business line executives to produce a description of the agency’s operations, a vision of the future, and an investment and technology strategy for accomplishing defined business goals. The architecture is maintained through continuous modification to reflect the agency’s current baseline and target business practices, organizational goals, vision, technology, and infrastructure. In initiating its enterprise architecture process, VA has applied key principles of the Federal CIO Council’s guidance and has put in place some core elements of the council’s enterprise architecture framework. For example, in the area of executive commitment, the department has obtained crucial buy-in and support from the secretary, department-level CIO, and other senior executives and business teams; this is essential to raising awareness of and leveraging participation in developing the architecture. As evidence of his commitment, last April the secretary established a team made up of VA senior management business line and information technology professionals to develop an enterprise architecture strategy. The team met on weekends over the course of about 60 days and, in August 2001, issued an executive enterprise architecture strategy that articulates the department’s policy and principles governing the development, implementation, and maintenance of VA’s enterprise architecture. VA is in the process of establishing committees to manage, control, and monitor activities and progress in fully developing and implementing its enterprise architecture. For example, VA’s information technology board has begun functioning as the department’s enterprise architecture executive steering committee, with responsibility for directing, overseeing, and approving core elements and actions of the enterprise architecture program. As part of VA’s actions to develop and advance its enterprise architecture, it has also chartered an enterprise architecture council, which—when activated—is expected to assist in developing project priorities and performing management reviews and evaluations of IT project proposals. In addition, VA is in the process of establishing an enterprise architecture program management office and, over the last 8 months, has been recruiting a permanent chief architect to provide overall leadership and guidance for the enterprise architecture program. These management entities are essential for ensuring that the department’s IT investments are aligned with the enterprise architecture and optimize the interdependencies and interrelationships among business operations and the underlying IT that supports them. Further, as part of its enterprise architecture strategy, VA has chosen a highly recognized enterprise architecture framework that will be used to organize the structure of the architecture.
To facilitate its selection of a framework, VA consulted with experts from the private sector and borrowed lessons learned from officials involved in architecture development at other federal agencies. VA has begun defining its current architecture, an important step for ensuring that future progress can be measured against such a baseline, and is also developing its future (target) telecommunications architecture. In addition, to assist in the management of new IT initiatives, VA is considering using a system that it has designed to link the management of its enterprise architecture program to the department’s capital planning and project management. It is also considering using a Web-based tool that it has designed to collect data on business rules, requirements, and processes that will be integrated into the enterprise architecture management process. While VA has taken several important steps forward, the department has many more critical work steps ahead in implementing and managing its enterprise architecture. Using the Federal CIO Council’s enterprise architecture guide as a basis for analysis, table 1 illustrates some key steps that have been accomplished, along with examples of the many critical actions VA must still address to implement and sustain its enterprise architecture program. Accomplishing these remaining steps will require continued and substantial time, effort, and commitment. Among the key activities requiring immediate attention is establishment of a program management office headed by a permanent chief architect to manage the development and maintenance of the enterprise architecture. VA has begun establishing such an office and is currently recruiting a chief architect. However, until the department has an office that is fully staffed with experienced architects and hires a chief architect with the requisite core competencies, it will continue to lack the management and oversight necessary to ensure the success of its enterprise architecture program. Further, until the department has completed an implementation plan that delineates how it will develop, use, and maintain the enterprise architecture, it will lack definitive guidance for effectively managing the enterprise architecture program. Substantial work also lies ahead in developing VA’s baseline and target architectures. A crucial first step in building the enterprise architecture is identifying and collecting existing products that describe the agency as it exists today and as it is intended to look and operate in the future. While VA has developed a baseline application inventory to describe its “as is” state, it has not yet finished validating the inventory or completed detailed application profiles, including essential information such as business functions, information flows, and external interface descriptions. Similarly, to define its vision of future business operations and supporting technology, VA must still collect crucial information for its target architecture, including information on its proposed business processes, strategic plans, and requirements. Beyond these planning and development activities, VA will also have to ensure the successful transition and implementation of its enterprise architecture. Evolving the agency from its baseline to the target architecture will require concurrent, interdependent activities and incremental development.
As such, VA will need to develop and maintain a sequencing plan to provide a step-by-step approach for moving from the baseline to the target architecture. Development of this sequencing plan should consider a variety of factors, including sustaining operations during the transition, anticipated management and organizational changes, and business goals and operational priorities. Ultimately, VA’s success in using the architecture will depend on active management and receptive project personnel, along with effective integration of the enterprise architecture process with other enterprise life cycle processes. A key aspect of VA’s enterprise architecture program is the integration of security practices into the enterprise architecture. The CIO Council has articulated guidelines for doing so. For example, the architecture policy should include security practices and the architecture team should include security experts. In its enterprise architecture strategy document, VA has committed to including security in all elements of its enterprise architecture. Further, VA’s executive-level security officer served as a member of its architecture team. As VA moves forward in developing, implementing, and using its enterprise architecture, we would expect it to include information security details relating to the design, operations, encryption, vulnerability, access, and use of authentication processes. A commitment to building information security into all elements of its enterprise architecture program is essential to helping VA meet the challenges that it faces in protecting its information systems and sensitive data. As VA moves forward with its enterprise architecture management program, it should ensure that remaining critical process steps outlined in the federal CIO guidance are sufficiently addressed and completed within reasonable timeframes. With the enhanced management capabilities provided by an enterprise architecture framework, VA should be able to (1) better focus on the strategic use of emerging technologies to manage its information, (2) achieve economies of scale by providing mechanisms for sharing services across the department, and (3) expedite the integration of legacy, migration, and new systems. Information security continues to be among the top challenges that the department must contend with. As you know, in carrying out its mission, VA relies on a vast array of computer systems and telecommunications networks to support its operations and store the sensitive information that it collects related to veterans’ health care and benefits. VA’s networks are highly interconnected, its systems support many users, and the department is increasingly moving to more interactive, Web-based services to better meet the needs of veterans. Effectively securing these computer systems and networks is critical to the department’s ability to safeguard its assets, maintain the confidentiality of sensitive veterans’ health and disability benefits information, and ensure the reliability of its financial data. Mr. Chairman, when we last testified, VA had just established a department-level information security management program and hired an executive-level official to head it. VA had also finalized an information security management plan to provide a framework for addressing longstanding departmentwide computer security weaknesses.
However, as our testimony noted, the department had not implemented key components of a comprehensive, integrated security management program that are essential to managing risks to business operations that rely on its automated and highly interconnected systems. This condition existed despite our previous recommendation that VA effectively implement and oversee its computer security management program through assessing risks, implementing policies and controls, promoting awareness, and evaluating the effectiveness of information system controls at its facilities. (See U.S. General Accounting Office, VA Information Systems: Computer Security Weaknesses Persist at the Veterans Health Administration, GAO/AIMD-00-232 (Washington, D.C.: September 8, 2000).) As with its enterprise architecture, the Secretary expressed his intent to implement measures that would remedy existing deficiencies in the department’s security program. Nevertheless, VA’s review under government information security reform legislation revealed that the department had not implemented effective information security controls for many of its systems and major applications. (The government information security reform provisions of the fiscal year 2001 Defense Authorization Act (P.L. 106-398) require annual agency program reviews and annual independent evaluations for both non-national security and national security information systems.) Last October, VA’s inspector general also reported that it had found significant problems related to the department’s control and oversight of access to its systems, including that VA had (1) not adequately limited the access of authorized users or effectively managed user identifications and passwords, (2) not established effective controls to prevent individuals from gaining unauthorized access to its systems, (3) not provided adequate physical security for its computer facilities, and (4) not updated and tested disaster recovery plans to ensure continuity of operations in the event of a disruption in service. Many of these access and other general control weaknesses mirror deficiencies that we have reported since 1998 and that VA’s inspector general continues to report as a material weakness in the department’s internal controls. Based largely on weaknesses of this type, last fall the House Government Reform Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations gave VA a failing grade in computer security. VA has since begun assessing its security program against critical elements of information systems control that are defined in our information system controls audit methodology. Further, the department has adopted the National Institute of Standards and Technology’s federal information technology security assessment framework to use in determining the current status of these controls and measuring the progress of information security program improvements. The cyber security officer also recently revised the department’s security management plan to update security policies, procedures, and technical standards. The updated plan outlines actions for developing risk-based security assessments, improving the monitoring and testing of systems controls, and implementing departmentwide virus-detection software and intrusion-detection systems. The plan places increased emphasis on centralizing key security functions that previously were decentralized or nonexistent, including virus detection, systems certification and accreditation, network management, configuration management, and incident and audit analysis.
Yet even with this positive direction, VA’s actions do not fully address remaining problems and are inadequate to cover the breadth of matters essential to a comprehensive security management program. Our 1998 report on effective security management practices at several leading public and private organizations, and a 1999 companion report on risk-based security approaches, identified key principles that can be used to establish a management framework for more effective information security programs. This framework is depicted in figure 2. The leading organizations we examined applied these principles to ensure that information security addressed risks on an ongoing basis. Further, these principles have been cited by the Federal CIO Council as useful guidelines for agencies and incorporated into the council’s information security assessment framework, which is intended for agency self-assessments. Using our information security risk management framework as criteria, table 2 summarizes both the actions that VA has taken and those still needed to ensure that it has a comprehensive computer security management program. As shown, while VA has completed a number of important steps, its efforts in each of the five key areas of effective computer security program management—central security management, security policies and procedures, risk-based assessments, security awareness, and monitoring and evaluation—have not yet included key actions that are essential for successful and effective program implementation. As the table illustrates, VA’s security management program continues to lack essential elements required to protect the department’s computer systems and networks from unnecessary exposure to vulnerabilities and risks. For example, while VA has begun to develop an inventory of known security weaknesses, it continues to be without a comprehensive, centrally managed process that would enable it to identify, track, and analyze all computer security weaknesses. Further, the updated security management plan does not articulate the critical actions that VA will need to take to correct specific control weaknesses or the time frames for completing key actions. While the plan calls for monitoring VA’s computer control environment to ensure compliance, it does not provide a framework to guide the monitoring activities by, for example, identifying the specific security areas to be reviewed, the scope of compliance work to be performed, the frequency of reviews, reporting requirements, or the resolution of reported issues. VA also lacks a mechanism for collecting and tracking performance data, ensuring management action as needed and, when appropriate, providing independent validation of program deliverables. Without these essential elements, VA will have only limited assurance that its financial information and sensitive medical records are adequately protected from unauthorized disclosure, misuse, or destruction. Accordingly, as VA continues to improve its information security management, it should move expeditiously to address the gaps we highlight in table 2. In commenting on the department’s current security posture, VA’s cyber security officer stated that efforts are planned or underway to address the actions not yet completed. He added that by August 31, 2002, the department expects to have a plan for completing all of the necessary corrective actions.
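The comprehensive, centrally managed weakness-tracking process that table 2 shows VA still lacks can be illustrated with a small sketch. The record layout and field names below are hypothetical—they are not VA's design—but they show the kind of data such a process would collect and roll up: the weakness, the facility, its severity, and its remediation status against a due date.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SecurityWeakness:
        # Hypothetical fields for a centrally managed weakness inventory.
        facility: str
        category: str      # e.g., "access control", "disaster recovery"
        severity: str      # e.g., "high", "medium", "low"
        identified: date
        due: date
        resolved: bool = False

    def overdue(inventory, as_of):
        """Unresolved weaknesses past their remediation due date."""
        return [w for w in inventory if not w.resolved and w.due < as_of]

    inventory = [
        SecurityWeakness("Medical Center A", "access control", "high",
                         date(2001, 10, 1), date(2002, 1, 1)),
        SecurityWeakness("Regional Office B", "disaster recovery", "medium",
                         date(2001, 11, 15), date(2002, 6, 1)),
    ]

    for w in overdue(inventory, as_of=date(2002, 3, 1)):
        print(f"OVERDUE: {w.facility} - {w.category} ({w.severity})")

A department-level roll-up of this kind would give the cyber security officer the performance data and compliance visibility that the current plan does not yet provide.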
While VA is clearly placing greater emphasis on its information security, its cyber security officer will be challenged to manage the security function on a departmentwide basis. As the department is currently organized, more than 600 information security officers in VA’s three administrations and its many medical facilities throughout the country are responsible for ensuring that appropriate security measures are in place. These information security officers report to their facility’s director or the chief information officer for their administration. However, there is neither direct nor indirect reporting to VA’s cyber security officer, thus raising questions about this official’s ability to enforce compliance with security policies and procedures and ensure accountability for actions taken throughout the department. Further, because VA’s information security budget relies on funding by its component administrations, the cyber security officer lacks control and accountability over a significant portion of the financial resources that the security program depends on to sustain its operations. (For example, to help support its fiscal year 2002 security program budget request of about $55 million, VA expects to receive about $22 million in funding from VHA and $12 million from the department’s other administrations and offices.) Successfully managing information security under this organizational structure, therefore, will in large part depend on the extent to which VA’s business managers assume responsibility for implementing the appropriate policies and controls to mitigate risks, and work collaboratively and cooperatively with the cyber security officer. Consequently, it will be essential for VA to hold its senior managers accountable for information security at their respective facilities and administrations. VA has taken a critical step toward achieving this by establishing security performance standards for its senior executives. These standards must be effectively applied and enforced, however, to ensure a successful outcome. The VETSNET compensation and pension replacement effort grew out of an initiative that VBA undertook in 1986 to replace its outdated benefits delivery network (BDN) and modernize its compensation and pension, education, and vocational rehabilitation benefits payment systems. VBA had expected these modernized systems to provide a rich source for answering questions about veterans’ benefits and enable faster processing of benefits. In 1996, after experiencing numerous false starts and spending approximately $300 million on the overall modernization, VBA revised its strategy and began focusing on modernizing the compensation and pension (C&P) payment system. At that time, VBA estimated that the C&P replacement project would cost $8 million and be completed in May 1998. Since its inception, however, VBA has been plagued with problems in carrying out the C&P replacement initiative. As detailed in the attachment, our various publications since 1996 have highlighted consistent and longstanding concerns in several areas, including project management, requirements development, and testing. Our testimony last April noted that VBA had made some progress in developing and testing software products that would become part of the system.
Nevertheless, we also noted that VBA had not addressed several important issues that were key to its successful implementation, including the need to develop an integrated project plan and schedule incorporating all of the critical areas of this system development effort. As our prior work has pointed out, a significant factor contributing to VBA’s continuing problems in developing and implementing the system has been the level of its capability to develop and maintain high-quality software on any major project within existing cost and schedule constraints—a condition that we identified during our 1996 assessment of the department’s software development capability. Moreover, VBA has not increased the number of payments using these new software products beyond the 10 original claims that it had pilot tested in February 2001. In addition, it continues to lack an integrated project plan and schedule that incorporate all of the critical areas of this system development activity. Further, VBA still has not obtained essential support from the field office staff that will be required to use the new software, and requirements for the new software have not yet been validated. These deficiencies are significant, given that the software application that VBA developed to assist veterans service representatives in rating benefits claims (Rating Board Automation 2000) did not meet users’ needs and resulted in less timely claims processing. At this time, VBA also is without a project manager to oversee the project. Progress made early in 2000 toward creating a project control board to manage the C&P replacement was curtailed when the project manager departed last April. Until VBA provides appropriate management and oversight for all aspects of the project’s development and implementation, it will not be positioned to ensure that this project will deliver a cost-effective solution with measurable and specific program-related benefits. Further, the schedule for implementing the replacement system continues to undergo change, resulting in additional delays. Last April, VBA had planned to deploy VETSNET in all of its 58 regional offices in July 2002. However, VBA officials have since modified the deployment time frame twice, with the latest proposal being to deploy each of the five applications separately over 2 years, beginning in June 2003. VBA management has not yet approved this latest strategy. Last year, the secretary expressed concerns about the VETSNET project and called for an independent audit of the C&P replacement system to facilitate his decision on whether to continue the initiative. Accordingly, a contractor was hired in May 2001 to assess (1) whether the system architecture would be capable of supporting VBA’s projected future workload and (2) whether the system being developed would meet future functional, performance, and security needs. The contractor reported last September that the system architecture would be able to process VBA’s projected future workload. However, the contractor neither assessed nor reported on whether the system will meet future functional business needs, and the scope of its review did not generate sufficient information to fully evaluate and make an informed decision on whether the project should proceed.
The review focused primarily on the system’s ability to perform efficiently under a heavy workload, and did not include user acceptance testing or the functional testing that is needed to ensure that the system can fully satisfy user requirements and that deployed software can be used without significant errors. Further, the review did not fully address the security requirements for the new system. VA’s department-level CIO agreed that the scope of the contractor’s review had been limited to a technical review of whether VETSNET could handle the anticipated workload. He also acknowledged the need for functional testing and an integrated project plan. Similar concerns about VBA’s strategy for the C&P replacement project were also documented in an October 2001 report issued by the VA claims processing task force. In its report, the task force emphasized that limited user and functional testing posed a major problem for VBA in developing and implementing its systems. The task force highlighted material deficiencies in VBA’s strategic planning and its implementation and deployment of new and enhanced information technology products and initiatives, as had been pointed out in an earlier report. Further, the task force questioned whether VETSNET represented a viable long-term solution, in part because it does not provide support for a redesigned and integrated claims process across VA’s administrations and offices. In commenting on these reports’ findings, VBA’s CIO stated that, by the end of March 2002, her office anticipated completing a remediation plan that would address the most critical concerns identified in the contractor’s review. She stated that the office is in the process of developing a statement of work to obtain contractor support to develop additional functional testing capability. The statement of work is scheduled for completion in June 2002. In addition, the CIO is negotiating with relevant VBA business groups to secure subject matter experts to validate business requirements and assist with the functional testing. If not promptly addressed, the problems and delays that have been noted in implementing the VETSNET project could have critical cost implications for the department and create service delivery inefficiencies for the veteran community. In particular, without a replacement system, VA must continue to rely on the aging BDN, parts of which were developed in the 1960s, to deliver its benefit payments. Although the BDN was enhanced to address year 2000 conversion issues, VBA has since made only limited investments in maintaining it because of its anticipated replacement. Without additional maintenance, it is uncertain whether the BDN will be able to continue accurately processing the many benefits payments that VBA must make. In its report, the claims processing task force warned that the system’s operations and support were approaching a critical stage, with the potential for performance to degrade and eventually cease. The task force recommended that the BDN be sustained and upgraded to ensure that payments to veterans would remain prompt and uninterrupted until VBA is able to field a replacement system. VBA officials have stated that they are working on a plan to address this issue. This plan is expected to include purchasing an additional mainframe computer to help extend the system’s operation until 2007—the date by which new systems are planned to be operational for all three benefits payment business lines. As you can see, Mr.
Chairman, despite many years of work, VBA still has a number of fundamental tasks to accomplish before it can successfully complete development and implementation of the VETSNET project. Before proceeding with this project, VBA must assess and validate users’ requirements for the new system to ensure that business needs are met. It also needs to complete testing of the system’s functional business capability, as well as end-to-end testing to ensure payments are made accurately. Finally, it must establish an integrated project plan to guide its transition from the old to the new system. Until VBA performs a complete analysis of the initiative, as the secretary has indicated he would do, it is questionable whether additional resources should be expended on continued systems development activities. Unlike VBA’s work on VETSNET, VHA continues to make progress in expanding overall use of its decision support system (DSS). As you know, DSS is an executive information system designed to provide VHA managers and clinicians with data on patterns of patient care and patient health outcomes, as well as the capability to analyze resource utilization and the cost of providing health care services. VHA completed its implementation of DSS in October 1998. However, in September 2000, we testified that DSS had not been fully utilized since its implementation, and noted that DSS was not being used for all the purposes intended. Last April, we testified that VHA had shown moderate progress in increasing usage of DSS among its veterans integrated service networks (VISN) and medical centers, and encouraged VA to continue providing top management support to ensure that the system is fully utilized and that financial and clinical benefits are realized. Our testimony noted several efforts that VHA had undertaken to encourage greater use of DSS, including using DSS data to support the fiscal year 2002 resource allocation process and as a consideration in preparing VISN directors’ year-end performance appraisals, requiring VISN directors to provide examples of their reports and processes that rely on DSS data, and ensuring that medical centers’ processing of DSS data is current (no more than 60 days old). VHA’s initiatives to encourage greater use of DSS have yielded results. The use of DSS data in the fiscal year 2002 allocation process has clearly raised VHA’s awareness about the importance of this information. VHA’s most recent DSS processing report, dated January 31, 2002, revealed that all 22 VISNs had completed processing fiscal year 2001 DSS data and that seven VISNs had begun processing fiscal year 2002 data. Further, every VISN has provided both clinical and financial examples of DSS usage, and this information is now being considered in the quarterly reviews of the VISN directors' performance. As a result, VHA’s managers have grown more knowledgeable about and have begun to make more informed decisions regarding the cost of care being provided by their facilities. VHA continues to explore other initiatives to improve the accuracy and completeness of DSS data. 
In response to a report issued by VA’s inspector general in March 1999 regarding the failure of some medical facilities to follow the DSS basic structure for capturing workload data and associated costs, VHA has taken several actions, including implementing a VHA decision support system standardization directive that requires annual standardization audits and the reporting of consecutive repeat occurrences of noncompliance to the assistant deputy under secretary for health; developing an audit tool for use in determining a facility’s compliance with the DSS basic model for capturing workload data and associated costs; and performing a standardization audit in September 2001 to assess the extent to which each facility’s DSS departments and products complied with national standards.

Further, in response to managers’ concerns that DSS data are not timely and easy to access, the DSS program office initiated several actions. These include establishing a working group last July to identify best practices and recommend actions for improving processing efficiency and the timeliness and availability of DSS data. To date, the working group has provided all DSS sites with an updated monthly guide detailing each step of the process, and has distributed a pharmacy rejects database and a step-by-step guide for processing these rejects. These products should help increase the efficiency of the monthly processing and facilitate more accurate and timely data. In addition, the program office has authorized two sites to pilot test an application aimed at providing the end user or manager with a user-friendly front end to display DSS information and allow patient inquiry.

In addition, several VISNs have independently begun exploring options for providing easier access to DSS data. For example, one is examining the feasibility of establishing a data warehouse where data extracted from DSS can be transformed into a format that will facilitate queries and reports that are simple to create and quick to run. Another has begun building a data repository for use in creating an application to compile and deliver data requested by managers or clinicians.

Even with these accomplishments, however, top management involvement and continued support will be critical to ensuring that VHA continues to make progress in improving the operational efficiency and effectiveness of DSS, and that it realizes the full clinical and financial benefits of this system. In March 2001, oversight for the DSS program was transferred from VHA’s chief information officer to its chief financial officer. Since that time, VHA has also assigned three different acting directors to lead the program. However, VHA has not yet selected a permanent director to provide consistent management and oversight. In addition, of the 56 personnel positions allotted to the DSS program office, 19 had not been filled at the end of January 2002. Without a permanent director to lead the DSS program or full staffing to support the system’s operation, VHA runs the risk that continued increases in usage of DSS, along with its associated benefits, could be imperiled.

Mr. Chairman, you also asked us to update you on VA’s progress, in conjunction with the Department of Defense (DOD) and the Indian Health Service (IHS), in achieving the ability to share patient health care data as part of the government computer-based patient record (GCPR) project.
Having readily accessible data to facilitate services to our nation’s military personnel and others has proved particularly significant in light of recent terrorist actions and the associated responses that have been required. The GCPR project developed out of VA and DOD discussions about ways to share data in their health information systems and from efforts to create electronic records for active duty personnel and veterans. As you know, the patients served by VA’s and DOD’s systems tend to be highly mobile, and consequently, their health records may be at multiple federal and nonfederal medical facilities, both in and outside of the United States. In November 1997, the president called for the two departments to develop a “comprehensive, life-long medical record for each service member,” and in August 1998—8 months after the GCPR project was officially established—issued a directive requiring VA and DOD to develop a “computer-based patient record system that will accurately and efficiently exchange information.” IHS later became involved because of its expertise in population-based research and its longstanding relationship with VA in caring for the Indian veteran population.

As originally envisioned, GCPR was not intended to be a separate computerized health information system, nor was it meant to replace VA’s, DOD’s, and IHS’s existing systems. Rather, it was intended to allow physicians and other authorized users at these agencies’ health facilities to access data from any of the other agencies’ health facilities by serving as an electronic interface among their health information systems. The interface was expected to compile requested patient information in a temporary “virtual” record that could be displayed on a user’s computer screen.

In April 2001, we reported that expanding time frames and cost estimates, as well as inadequate accountability and poor planning, tracking, and oversight, had raised doubts about GCPR’s ability to provide the benefits expected. In particular, we noted that the project’s time frames had significantly expanded and that its costs had continued to increase. In addition, basic principles of sound IT project planning, development, and oversight had not been followed, creating barriers to progress. For example, clear goals and objectives had not been set; detailed plans for developing, testing, and implementing the new software had not been established; and critical decisions regarding goals, costs, and time frames were not binding on all parties. Further, data exchange and privacy and security issues critical to the project’s success remained to be addressed.

As a result of these concerns, we recommended that the three agencies (1) designate a lead entity with final decisionmaking authority and establish a clear line of authority for the GCPR project and (2) create comprehensive and coordinated plans that include an agreed-upon mission and clear goals, objectives, and performance measures, to ensure that the agencies can share comprehensive, meaningful, accurate, and secure patient health care data. In commenting on the report, VA, DOD, and IHS all concurred with our findings and recommendations. Nonetheless, progress on the GCPR initiative continues to be disappointing. The scope of the project increasingly has been narrowed from its original objectives, and it continues to proceed without a comprehensive strategy. For example, in responding to our report, VA, DOD, and IHS provided information on a new, near-term strategy for GCPR.
However, this revised strategy is considerably less encompassing than the project was originally intended to be. Specifically, rather than serve as an interface to allow data sharing across the three agencies’ disparate systems, as originally envisioned, a first phase of the revised strategy calls only for a one-way transfer of data from DOD’s current health care information system to a separate database that VA hospitals can access. While even this degree of data sharing is a positive development, VA’s clinicians will be able only to read, not perform any calculations on, the data received. VA and DOD officials had initially planned to implement this near-term capability in November 2001, but recently stated that they now expect to do so by July 2002. Further, the officials stated that they plan to change the name of the project to the Federal Health Information Exchange.

Subsequent phases of the effort that were to further expand GCPR’s capabilities have also been revised. A second phase that would have enabled information exchange among all three agencies—VA, DOD, and IHS—is now expected to enable only a bilateral, read-only exchange of data between VA and IHS. Further, according to VA officials, plans for a third phase, which was to expand GCPR’s capabilities to public and private national health information standards groups, are no longer being considered for the project. Instead, the third phase is now expected to focus only on expanding the data exchange between VA and IHS and allowing limited data calculations and some translation of terminology between the two agencies. Under the revised strategy, there are no plans for DOD to receive data from VA.

In addition, concerns expressed in our April 2001 report still need to be addressed. For example, the GCPR project continues to operate without clear lines of authority or a lead entity responsible for final decisionmaking. Last August, the VHA CIO informed us that a draft memorandum of agreement, designating VHA as the lead entity, was being considered within VA, DOD, and IHS. However, this memorandum had not been approved or implemented at the time we concluded our review. The project also continues to move forward without comprehensive and coordinated plans, including an agreed-upon mission and clear goals, objectives, and performance measures. Without clearly defined lines of authority and a comprehensive and coordinated strategy, even the revised GCPR initiative is destined to continue on an uncertain course—one that is unlikely to deliver substantial results.

In summary, VA has made good progress toward addressing a number of important information technology concerns, but it still has much work to do. Its current leadership is to be commended for the dedication that it has demonstrated regarding VA’s information technology problems. However, in totality, the steps taken to date have not been sufficient to overcome the wide range of deficiencies that threaten VA’s operational effectiveness. Many of VA’s problems are longstanding and pervasive, and can be attributed to fundamental weaknesses in management accountability—some of which can only be overcome through serious restructuring of current reporting relationships and lines of authority.
Until VA makes a concerted effort to ensure that all necessary processes and controls exist to guide the management of its information technology program, it will continue to fall short of its goals of enhancing operational efficiency and, ultimately, improving service delivery to our nation’s veterans. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For information about this testimony, please contact me at (202) 512-6257 or by e-mail at mcclured@gao.gov. Individuals making key contributions to this testimony included Nabajyoti Barkakati, Amanda C. Gill, David W. Irvin, Tonia L. Johnson, Valerie C. Melvin, Barbara S. Oliver, J. Michael Resser, Rosanna Villa, and Charles M. Vrabel.
The Department of Veterans Affairs (VA) has laid the groundwork for an integrated, departmentwide enterprise architecture--a blueprint for evolving its information systems and developing new systems to optimize their mission value. Crucial executive support is in place and the department has a strategy to define products and processes critical to its development. VA is now recruiting a chief architect to help implement and manage the enterprise architecture. VA has tried to strengthen its information security management program by mandating information security performance standards and greater management accountability for senior executives. It has also updated security policies, procedures, and standards to implement critical security measures. Despite these efforts, VA continues to report pervasive and serious information security weaknesses.

The Veterans Benefits Administration is still far from launching a modernized system to replace its aging benefits delivery network. The Veterans Health Administration (VHA) has made good progress in expanding the use of its decision support system (DSS) for clinical and financial decision making. The use of DSS data for the fiscal year 2002 resource allocation process, and a requirement that veterans integrated service network directors better account for their use of this system, have raised awareness of, and promoted its use, among VHA facilities.

VA has made little progress in sharing data with the Department of Defense and Indian Health Service as part of a computer-based patient record initiative. Implementation strategies continue to be revised, the scope of the initiative has been substantially narrowed, and it continues to operate without clear lines of authority or comprehensive, coordinated plans.
The South Florida ecosystem covers about 18,000 square miles in 16 counties. It extends from the Kissimmee Chain of Lakes south of Orlando to Lake Okeechobee, and continues south past Florida Bay to the reefs southwest of the Florida Keys. The ecosystem is in jeopardy today because of past efforts that diverted water from the Everglades to control flooding and to supply water for urban and agricultural development. The Central and Southern Florida project, a large-scale water control project begun in the late 1940s, constructed more than 1,700 miles of canals and levees and over 200 water control structures that drain an average of 1.7 billion gallons of water per day into the Atlantic Ocean and the Gulf of Mexico. This construction resulted in insufficient water for the natural system and for the growing population, along with degraded water quality. Today, the Everglades has been reduced to half its original size, and the ecosystem continues to deteriorate because of the alteration of the water flow, the impacts of agricultural and industrial activities, and increasing urbanization.

In response to growing signs of ecosystem deterioration, federal agencies established the South Florida Ecosystem Restoration Task Force in 1993 to coordinate ongoing federal restoration activities. The Water Resources Development Act of 1996 formalized the Task Force, expanded its membership to include state, local, and tribal representatives, and charged it with coordinating and facilitating efforts to restore the ecosystem. The Task Force, which is chaired by the Secretary of the Department of the Interior, consists of 14 members representing 7 federal agencies, 2 American Indian tribes, and 5 state or local governments. To accomplish the restoration, the Task Force established the following three goals:

Get the water right. The purpose of this goal is to deliver the right amount of water, of the right quality, to the right places, at the right times. However, restoring a more natural water flow to the ecosystem while providing adequate water supplies and controlling floods will require efforts to expand the ecosystem’s freshwater supply and improve the delivery of water to natural areas. Natural areas of the ecosystem are made up of federal and state lands, and coastal waters, estuaries, bays, and islands.

Restore, preserve, and protect natural habitats and species. To restore lost and altered habitats and recover the endangered or threatened species native to these habitats, the federal and state governments will have to acquire lands, reconnect natural habitats that have become disconnected through growth and development, and halt the spread of invasive species.

Foster compatibility of the built and natural systems. To achieve the long-term sustainability of the ecosystem, the restoration effort has the goal of maintaining the quality of life in urban areas while ensuring that (1) development practices limit habitat fragmentation and support conservation and (2) traditional industries, such as agriculture, fishing, and manufacturing, continue to be supported and do not damage the ecosystem.

The centerpiece for achieving the goal to get the water right is the Comprehensive Everglades Restoration Plan (CERP), approved by the Congress in the Water Resources Development Act of 2000 (WRDA 2000). CERP is one of the most ambitious restoration efforts the federal government has ever undertaken.
It currently encompasses 60 individual projects that will be designed and implemented over approximately 40 years. These projects are intended to increase the water available for the natural areas by capturing much of the water that is currently being diverted, storing the water in many different reservoirs and storage wells, and releasing it when it is needed. The cost of implementing CERP will be shared equally between the federal government and the state of Florida; the work will be carried out primarily by the U.S. Army Corps of Engineers (the Corps) and the South Florida Water Management District (SFWMD), the state authority that manages water resources for South Florida. After the Corps and SFWMD complete the initial planning and design for individual CERP projects, they must submit the proposed projects to the Congress to obtain authorization and funding for construction.

In addition to the CERP projects, another 162 projects are also part of the overall restoration effort. Twenty-eight of these projects, when completed, will serve as the foundation for many of the CERP projects and are intended to restore a more natural water flow to Everglades National Park and improve water quality in the ecosystem. Nearly all of these “CERP-related” projects were already being designed or implemented by federal and state agencies, such as the Department of the Interior and SFWMD, in 2000 when the Congress approved CERP. The remaining 134 projects include a variety of efforts that will, among other things, expand wildlife refuges, eradicate invasive species, and restore wildlife habitat, and are being implemented by a number of federal, state, and tribal agencies, such as the U.S. Fish and Wildlife Service, the Florida Department of Environmental Protection (FDEP), and the Seminole Tribe of Florida. Because these projects were not authorized as part of CERP and do not serve as CERP’s foundation, we refer to them as “non-CERP” projects.

Success in completing the restoration effort and achieving the expected benefits for the ecosystem as quickly as possible and in the most cost-effective manner depends on the order, or sequencing, in which many of the 222 projects will be designed and completed. Appropriate sequencing is also important to ensure that interdependencies among restoration projects are not ignored. For example, projects that will construct water storage facilities and stormwater treatment areas need to be completed before undertaking projects that remove levees and restore a more natural water flow to the ecosystem.

Recognizing the threats that Everglades National Park was facing, in 1993 UNESCO’s World Heritage Committee (WHC) included the Park on its List of World Heritage in Danger. This list includes cultural or natural properties that are facing serious and specific threats, such as those caused by large-scale public or private projects or rapid urbanization; the outbreak or the threat of an armed conflict; calamities and cataclysms; and changes in water levels, floods, and tidal waves. The Park’s inclusion on the list resulted from five specific threats: (1) urban encroachment; (2) agricultural fertilizer pollution; (3) mercury contamination of fish and wildlife; (4) lowered water levels due to flood control measures; and (5) damage from Hurricane Andrew, which struck the south Florida peninsula in 1992 with winds exceeding 164 miles per hour. In 2006, the WHC adopted a set of benchmarks that, when met, would lead to the Park’s removal from the list.
According to Park and WHC documents, nine projects that are part of the overall restoration effort will contribute to the achievement of these benchmarks.

Forty-three of the 222 projects that constitute the South Florida ecosystem restoration effort have been completed, while the remaining projects are being implemented, are in design or planning, or have not yet started. Table 1 shows the status of the 222 restoration projects.

Completed Restoration Projects — Although 43 of the 222 projects have been completed since the beginning of the restoration effort, this total is far short of the 91 projects that the agencies reported would be completed by 2006. Nine projects were completed before 2000, when the strategy to restore the ecosystem was set. These projects are expected to provide benefits primarily in the area of habitat acquisition and improvement. Thirty-four projects were completed between 2000 and 2006. The primary purposes of these projects range from the construction of stormwater treatment areas, to the acquisition or improvement of land for habitat, to the drafting of water supply plans.

Ongoing Restoration Projects — Of the 107 projects currently being implemented, 7 are CERP projects, 10 are CERP-related projects, and 90 are non-CERP projects. Five of the seven CERP projects are being built by the state in advance of the Corps’ completion of the necessary project implementation reports and submission of them to the Congress for authorization and appropriations. Nonetheless, some of the CERP projects currently in implementation are significantly behind schedule. For example, four of the seven CERP projects in implementation were originally scheduled for completion between November 2002 and September 2006, but instead will be completed up to 6 years behind their original schedule because it has taken the Corps longer than originally anticipated to design and obtain approval for these projects. Overall, 19 of the 107 projects currently being implemented are expected to be completed by 2010. Most of the remaining 88 projects are non-CERP habitat acquisition and improvement projects that have no firm end date because the land will be acquired from willing sellers as it becomes available.

Projects Not Yet Implemented — Of the 72 restoration projects not yet implemented—in design, in planning, or not yet started—53 are CERP projects that are expected to be completed over the next 30 years and will provide important benefits such as improved water flow and additional water for restoration as well as for other water-related needs. In contrast, the other 19 projects include 3 CERP-related and 16 non-CERP projects, which are expected to be completed by or before 2013. Consequently, the full environmental benefits that the CERP projects were intended to provide for the South Florida ecosystem restoration will not be realized for several decades. Several of the CERP projects in design, in planning, or not yet begun were originally planned for completion between December 2001 and December 2005, but instead will be completed from 2 to 6 years behind their original schedule. According to agency officials, CERP project delays have occurred for the following reasons:

It took longer than expected to develop the appropriate policy, guidance, and regulations that WRDA 2000 requires for the CERP effort.

Some delays were caused by the need to modify the conceptual design of some projects to comply with the requirements of WRDA 2000’s savings clause.
According to this clause, CERP projects cannot transfer or eliminate existing sources of water unless an alternate source of comparable quantity and quality is provided, and they cannot reduce existing levels of flood protection.

Progress was limited by the availability of less federal funding than expected and by a lack of congressional authorization for some of the projects.

The extensive modeling that accompanies the design and implementation of each project, in addition to the “cumbersome” project review process, may also have contributed to delays, as may the stakeholder comment, dispute resolution, and consensus building that occur at each stage of a project.

Delays have occurred in completing the CERP-related Modified Water Deliveries to Everglades National Park (Mod Waters) project, which is a major building block for CERP. These delays, in turn, have delayed CERP implementation.

Given the continuing delays in implementing critical CERP projects, the state has begun expediting the design and construction of some of these projects with its own resources. The state’s effort, known as Acceler8, includes most of the CERP projects that were among WRDA 2000’s 10 initially authorized projects, whose costs were to be shared by the federal government and the state. According to Florida officials, by advancing the design and construction of these projects with its own funds, the state hopes to more quickly realize restoration benefits for both the natural and human environments and to jump-start the overall CERP effort once the Congress begins to authorize individual projects. The Acceler8 projects include seven that are affiliated with CERP and an eighth that expands existing stormwater treatment areas. The state expects to spend more than $1.5 billion to design and construct these projects by 2011.

Most of the restoration projects that would help Everglades National Park achieve the WHC’s benchmarks for removing the Park from its list of world heritage sites in danger have not been completed. According to Park and WHC documents, nine restoration projects were key to meeting these benchmarks. Table 2 lists the nine projects, the type of project, implementation status, and expected completion date. As table 2 shows, only one of the nine projects has been completed; four projects are ongoing and will not be completed until at least 2012; and four projects are still in planning and design and are not expected to be completed until sometime between 2015 and 2035.

In February 2007, the United States prepared a status report for the WHC on the progress made in achieving the benchmarks that the committee had established for the Park in 2006. Based on its review of this progress report, at a benchmarks meeting on April 2-3, 2007, the WHC’s draft decision was to retain Everglades National Park on the list of world heritage sites in danger and to recommend that the United States continue its commitment to the restoration and conservation of the Park and provide the required financial resources for the full implementation of the activities associated with CERP. WHC’s draft decision also requested that the United States provide an updated report by February 1, 2008, on the progress made toward implementation of the corrective measures. However, at the WHC session held between June 23 and July 2, 2007, the WHC decided to remove the Park from the list of world heritage sites in danger and commended the United States for the progress made in implementing corrective measures.
In its final decision, the WHC encouraged the United States to continue its commitment to the restoration and provide the required financial resources for the full implementation of the activities associated with CERP. It is unclear from the WHC final decision document whether any additional or new information was provided to the committee that led to its final decision.

No overall sequencing criteria guide the implementation of the 222 projects that comprise the South Florida ecosystem restoration effort. For the 60 CERP projects, there are clearly defined criteria to be considered in determining the scheduling and sequencing of projects. However, the Corps has not fully applied these criteria when making CERP project sequencing decisions because it lacked key data, such as updated environmental benefits data and interim goals. As a result, the Corps relied primarily on technical interdependencies and the availability of funding as the criteria for making sequencing decisions. The Corps has recently started to revisit priorities for CERP projects and to alter project schedules that were established in 2005 (this process is referred to as CERP-reset). However, because the Corps continues to lack certain key data for making sequencing decisions, the revised plan, when completed, will also not fully adhere to the criteria.

Although CERP-related projects provide the foundation for many CERP projects, there are no established criteria for determining their implementation schedules, and their estimated start and completion dates depend largely on whether and when the implementing agency will have sufficient funding for the project. For example, the construction of the Mod Waters project has been delayed several times since 1997 because, among other things, Interior did not receive enough funding to complete the construction of this project. This project is expected to restore natural hydrologic conditions across 190,000 acres of habitat in Everglades National Park and assist in the recovery of threatened and endangered plants and wildlife. The completion date for the Mod Waters project has slipped again; it is now not expected to be completed until 2011. Because completion of this project is critical to the implementation of other CERP projects such as the Water Conservation Area 3 Decompartmentalization and Sheetflow Enhancement (Decomp) project—a project that many agency officials consider key to restoring the natural system—these delays will have a ripple effect on the completion date of the Decomp project as well.

Similarly, for non-CERP projects, agencies reported that they do not have any sequencing criteria; instead, they decide on the scheduling and timing of these projects primarily if and when funding becomes available. For example, Florida has a land acquisition program to acquire lands for conservation and habitat preservation throughout the state, including for some non-CERP projects that are part of the South Florida ecosystem restoration effort. State officials have identified lands and added them to a list of priority projects proposed for acquisition throughout the state. However, whether these lands will be acquired for non-CERP projects depends on whether funding is available in the annual budget, whether there are willing sellers, and whether the land is affordable within the available funding.
Because the correct sequencing of CERP projects is essential to the overall success of the restoration effort, we recommended that the Corps obtain the data that it needs to ensure that all required sequencing criteria are considered and then comprehensively reassess its sequencing decisions to ensure that CERP projects have been appropriately sequenced to maximize the achievement of restoration goals. The agency agreed with our recommendation.

From fiscal year 1999 through fiscal year 2006, federal and state agencies participating in the restoration of the South Florida ecosystem provided $7.1 billion for the effort. Of this total, federal agencies provided $2.3 billion and Florida provided $4.8 billion. Two agencies—the Corps and Interior—provided over 80 percent of the federal contribution. As figure 1 shows, federal and state agencies allocated the largest portion of the $7.1 billion to non-CERP projects for fiscal years 1999 through 2006. While federal agencies and Florida provided about $2.3 billion during fiscal years 1999 through 2006 for CERP projects, this amount was about $1.2 billion less than they had estimated needing for these projects over this period, because the federal contribution was $1.4 billion less than expected. This shortfall occurred primarily because CERP projects did not receive the congressional authorization and appropriations that the agencies had expected. In contrast, Florida provided a total of $2 billion over the period, exceeding its expected contribution to CERP by $250 million and therefore making up some of the federal funding shortfall.

Additionally, between July 31, 2000, and June 30, 2006, the total estimated cost for the South Florida ecosystem restoration grew from $15.4 billion to $19.7 billion, or by 28 percent. A significant part of this increase can be attributed to CERP projects, whose costs increased from $8.8 billion to $10.1 billion; this increase represents nearly 31 percent of the increase in the total estimated cost for the restoration. Agency officials reported that costs have increased for the restoration effort primarily because of inflation, increased land and construction costs, and changes in the scope of work. Furthermore, the costs of restoring the South Florida ecosystem are likely to continue to increase for the following reasons:

Estimated costs for some of the projects are not known or fully known because the projects are still in the design and planning stage. For example, the total cost for one project that we examined—the Site 1 Impoundment project—grew by almost $36 million, from about $46 million to about $81 million, after the design phase was completed. If other CERP projects, for which initial planning and design have not yet been completed, experience similar increases in project costs, then the estimated total costs of not only CERP but the overall restoration effort will grow significantly.

The full cost of acquiring land for the restoration effort is not known. Land costs for 56 non-CERP land projects, expected to total 862,796 acres, have not yet been reported. According to state officials, Florida land prices are escalating rapidly, owing primarily to development pressures. Consequently, future project costs are likely to rise with higher land costs. Similarly, while land acquisition costs for CERP projects are included as part of the total estimated project costs, thus far the state has acquired only 54 percent of the land needed for CERP projects, at a cost of $1.4 billion.
An additional 178,000 acres have yet to be acquired; the cost of these purchases is not yet known and is therefore not fully reflected in the cost of CERP and overall restoration costs.

The cost of using new technologies for the restoration effort is unknown. The Congress authorized pilot projects in 1999 and 2000 to determine the feasibility of applying certain new technologies for storing water, managing seepage, and reusing treated wastewater. While the pilot projects have been authorized, the cost to construct or implement projects based on the results of the pilot projects is not yet known.

In conclusion, Mr. Chairman, our review of the South Florida ecosystem restoration effort shows that some progress has been made in moving the restoration forward. However, the achievement of the overall goals of the restoration, and ultimately improvements in the ecological condition of Everglades National Park, depends on the effective implementation of key projects that have not progressed as quickly as was expected. Moreover, the shortfall in federal funding has contributed to some of these delays. At the same time, the costs of the restoration continue to increase and, we believe, could rise significantly higher than the current estimate of almost $20 billion. In light of these concerns, we believe that restoring the South Florida ecosystem and Everglades National Park will continue to be a significant challenge for the foreseeable future.

This concludes our prepared statement. We would be happy to respond to any questions you may have. If you have any questions about this statement, please contact Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Other contributors to this statement include Sherry McDonald (Assistant Director) and Kevin Bray.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The South Florida ecosystem covers about 18,000 square miles and is home to the Everglades, one of the world's unique environmental resources. Historic efforts to redirect the flow of water through the ecosystem have jeopardized its health and reduced the Everglades to about half of its original size. In 1993, the United Nations Educational, Scientific, and Cultural Organization's World Heritage Committee (WHC) added Everglades National Park (Park) to its List of World Heritage in Danger sites. In 2000, a strategy to restore the ecosystem was set; the effort was expected to take at least 40 years and cost $15.4 billion. It comprises 222 projects, including 60 key projects known as the Comprehensive Everglades Restoration Plan (CERP), to be undertaken by a multiagency partnership. This testimony is based on GAO's May 2007 report, South Florida Ecosystem: Restoration Is Moving Forward, but Is Facing Significant Delays, Implementation Challenges, and Rising Costs, and a review of WHC decision documents relating to the Park's listing. This statement addresses (1) the status of projects implemented, (2) the status of projects key to improving the health of the Park, (3) project sequencing factors, and (4) funding provided for the effort and the extent to which costs have increased.

Of the restoration effort's 222 projects, 43 have been completed, 107 are being implemented, and 72 are in design, in planning, or not yet started. The completed and ongoing projects will provide improved water quality and water flow within the ecosystem and additional habitat for wildlife. According to restoration officials, significant progress has been made in acquiring land, constructing water quality projects, and restoring a natural water flow to the Kissimmee River--the headwater of the ecosystem. Many of the policies, strategies, and agreements required to guide the restoration in the future are also now in place. However, the 60 CERP projects, which are the most critical to the restoration's overall success, are among those that are currently being designed, planned, or have not yet started. Some of these projects are behind schedule by up to 6 years. Florida recently began expediting the design and construction of eight key projects, with the hope that they would immediately benefit the environment, enhance flood control, and increase water supply, thus providing further momentum to the restoration.

In 2006, the WHC adopted several key benchmarks that, if met, would facilitate removal of Everglades National Park from its List of World Heritage in Danger sites. As noted by WHC, achievement of these benchmarks was linked to the implementation of nine key restoration projects. However, only one of these projects has been completed; four are currently being implemented, and four are currently being designed. Moreover, the benefits of these projects will not be available for many years because most of the projects are scheduled for completion between 2011 and 2035.

There are no overarching sequencing criteria that restoration officials use when making implementation decisions for all 222 projects that make up the restoration effort. Instead, decisions for 162 projects are driven largely by the availability of funds. There are regulatory criteria to ensure that the goals and purposes of the 60 CERP projects are achieved in a cost-effective manner.
However, the 2005 sequencing plan developed for these projects is not consistent with the criteria because some of the data needed to fully apply these criteria were not available. Therefore, there is little assurance that the plan will be effective. GAO recommended that the agencies obtain the needed data and then comprehensively reassess the sequencing of the CERP projects.

From fiscal years 1999 through 2006, the federal government contributed $2.3 billion and Florida contributed $4.8 billion, for a total of about $7.1 billion for the restoration. However, federal funding was about $1.4 billion short of the funds originally projected for this period. In addition, the total estimated costs for the restoration have increased by 28 percent--from $15.4 billion in 2000 to $19.7 billion in 2006--because of project scope changes, increased construction costs, and higher land costs. More importantly, these cost estimates do not represent the true costs of the overall restoration effort because they do not include all cost components for a number of projects.
In recent years, we, Congress, the 9/11 Commission, and others have recommended that federal agencies with homeland security responsibilities utilize a risk management approach to help ensure that finite resources are dedicated to assets or activities considered to have the highest security priority. The purpose of risk management is not to eliminate all risks, as that is an impossible task. Rather, given limited resources, risk management is a structured means of making informed trade-offs and choices about how to use available resources effectively and monitoring the effect of those choices. Thus, risk management is a continuous process that includes the assessment of threats, vulnerabilities, and consequences to determine what actions should be taken to reduce or eliminate one or more of these elements of risk. To provide guidance to agency decision makers, we developed a risk management framework, which is intended to be a starting point for applying risk-informed principles. Our risk management framework entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. Additional information on risk management, including our risk management framework, can be found in appendix I.

DHS is required by statute to utilize risk management principles with respect to various DHS functions. With regard to the Coast Guard, federal statutes call for the Coast Guard to use risk management in specific aspects of its homeland security efforts. The Maritime Transportation Security Act of 2002 (MTSA), for example, calls for the Coast Guard and other port security stakeholders, through implementing regulations, to carry out certain risk-based tasks, including assessing risks and developing security plans for ports, facilities, and vessels. In addition, the Coast Guard Authorization Act of 2010 requires, for example, the Coast Guard to (1) develop and utilize a national standard and formula for prioritizing and addressing assessed security risks at U.S. ports and facilities, such as MSRAM; (2) require Area Maritime Security Committees to use this standard to regularly evaluate each port’s assessed risk and prioritize how to mitigate the most significant risks; and (3) make MSRAM available, in an unclassified version, on a limited basis to regulated vessels and facilities to conduct risk assessments of their own facilities and vessels.

From 2001 to 2006, the Coast Guard assessed maritime security risk using the Port Security Risk Assessment Tool (PSRAT), which was quickly developed and fielded after the terrorist attacks of September 11, 2001. PSRAT served as a rudimentary risk calculator that ranked maritime critical infrastructure and key resources (MCIKR) with respect to the consequences of a terrorist attack and evaluated vessels and facilities that posed a high risk of a transportation security incident. While PSRAT provided a relative risk of targets within a port region, it could not compare and prioritize relative risks of various infrastructures across ports, among other limitations. Recognizing the shortcomings of PSRAT that had been identified by the Coast Guard and us, in 2005 the Coast Guard developed and implemented MSRAM to provide a more robust and defensible terrorism risk analysis process.
MSRAM is a risk-based decision support tool designed to help the Coast Guard assess and manage maritime security risks throughout the Coast Guard’s area of responsibility. Coast Guard units throughout the country use this tool to assess security risks to over 28,000 key maritime infrastructure assets—also known as targets—such as chemical facilities, passenger terminals, and bridges, as well as vessels such as cruise ships, ferries, and vessels carrying hazardous cargoes. Unlike PSRAT, MSRAM is designed to capture the security risks facing different types of targets, allowing comparison between different targets and geographic areas at the local, regional, and national levels.

MSRAM’s risk assessment methodology assesses the risk of a terrorist attack based on different scenarios; that is, it pairs potential targets with different attack modes and assesses each resulting target/attack mode combination (see table 1). MSRAM automatically determines which attack modes are required to be assessed for each target type, though local MSRAM analysts have the ability to evaluate additional optional attack modes against any target. For each target/attack mode combination, MSRAM can provide different risk results, such as the inherent risk of a target and the amount of risk mitigated by Coast Guard security efforts. MSRAM calculates risk using the following risk equation: Risk = Threat x Vulnerability x Consequence. Numerical values representing the Coast Guard’s assessment of threat (or relative likelihood of attack), vulnerability should an attack occur, and consequences of a successful attack are combined to yield a risk score for each maritime target. The model calculates risk using threat judgments provided by the Coast Guard Intelligence Coordination Center (ICC), and vulnerability and consequence judgments provided by MSRAM users at the sector level—typically Coast Guard port security specialists—which are reviewed at the district, area, and headquarters levels. The risk equation variables are as follows:

Threat represents the relative likelihood of an attempted attack on a target. The ICC provides threat probabilities to MSRAM, based on judgments regarding the specific intent, capability, and geographic preference of terrorist organizations to deliver an attack on a specific type of maritime target class—for example, a boat bomb attack on a ferry terminal. To make these judgments, ICC officials use intelligence reports generated throughout the broader intelligence community to make qualitative determinations about certain terrorist organizations and the threat they pose to the maritime domain. At the sector level, Coast Guard MSRAM users do not input threat probabilities and are required to use the threat probabilities provided by the ICC. This approach is intended to ensure that threat information is consistently applied across ports.

Vulnerability represents the probability of a successful attack given an attempt. MSRAM users at the sector level assess the vulnerability of targets within their respective areas of responsibility. Table 2 shows the factors included in the MSRAM vulnerability assessment.

Consequence represents the projected overall impact of a successful attack on a given target or asset. Similar to vulnerability assessments, MSRAM users at the sector level assess the consequences of a successful attack on targets within their respective areas of responsibility. Table 3 shows the factors included in the MSRAM consequence assessment.
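To make the mechanics of this equation concrete, the following is a minimal sketch in Python. All target names, attack modes, and numerical scores are invented for illustration; the sketch reflects only the basic multiplicative relationship described above, not the actual MSRAM implementation or its scoring scales.

# Minimal sketch of the Risk = Threat x Vulnerability x Consequence
# equation described above. All names and values are hypothetical;
# this is not the actual MSRAM implementation or its scoring scales.

from dataclasses import dataclass

@dataclass
class Scenario:
    target: str           # e.g., a ferry terminal
    attack_mode: str      # e.g., a boat bomb
    threat: float         # relative likelihood of an attempted attack (ICC judgment)
    vulnerability: float  # probability of success given an attempt (sector judgment)
    consequence: float    # projected impact of a successful attack (sector judgment)

    @property
    def risk(self) -> float:
        # Threat, vulnerability, and consequence are treated as
        # independent variables and simply multiplied.
        return self.threat * self.vulnerability * self.consequence

scenarios = [
    Scenario("Ferry terminal", "boat bomb", 0.4, 0.6, 500.0),
    Scenario("Chemical facility", "truck bomb", 0.2, 0.5, 2000.0),
    Scenario("Bridge", "boat bomb", 0.1, 0.3, 800.0),
]

# Sorting by risk score allows targets and attack modes to be compared
# at the local, regional, or national level.
for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"{s.target:18} {s.attack_mode:10} risk = {s.risk:7.1f}")

Because each scenario reduces to a single score, scores can be ranked across sectors; the annual review process described below is what keeps the sector-supplied vulnerability and consequence inputs comparable.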
In addition to the consequence factors listed in table 3, sector MSRAM users also assess the response capabilities of the Coast Guard, port stakeholders, and other governmental agencies and their ability to mitigate the death/injury, primary economic, and environmental consequences of a successful attack. Because a broad array of target types operates in the maritime domain, and successful attacks on them could result in different types of impacts, MSRAM uses an approach for drawing equivalencies among the different types of impacts. This approach is based on a common unit of measure, called a consequence point. One consequence point represents $1 million of equivalent loss to the American public.

To support MSRAM development and risk analysis at the headquarters level, the Coast Guard has provided MSRAM-dedicated staff and resources. According to the Coast Guard, resources for MSRAM or port security risk analysis do not come from a specific budget line item. From fiscal year 2006 to fiscal year 2011, the Coast Guard reported assigning from two to five staff (full-time equivalents) and from $0.6 million to $1.0 million annually to support MSRAM at headquarters. There are no MSRAM-dedicated staff at the area, district, and sector levels; rather, MSRAM assessment and analysis is generally conducted by port security specialists, who have other responsibilities. The port security specialist typically has responsibility for numerous activities, including the Port Security Grant Program, Area Maritime Security Committees, and the Area Maritime Security Training Exercise Program, among others.

The NIPP is DHS’s primary guidance document for conducting risk assessments and includes core criteria that identify the characteristics and information needed to produce quality risk assessment results. The NIPP’s basic analytical principles state that risk assessments should be complete, reproducible, documented, and defensible, as defined in table 4.

MSRAM generally aligns with DHS’s criteria for a complete and reproducible risk assessment, but some challenges remain, such as the limited time Coast Guard personnel have to complete assessments. MSRAM also generally aligns with the NIPP criteria for a documented and defensible risk assessment, but the Coast Guard could improve its documentation of the model’s assumptions and other sources of uncertainty, such as the subjective judgments made by Coast Guard analysts about vulnerabilities and consequences, and of how these assumptions and other sources of uncertainty affect MSRAM’s results. In addition to providing decision makers with an understanding of how to interpret any uncertainty in MSRAM’s risk estimates, greater transparency and documentation could facilitate periodic peer reviews of the model—a best practice in risk management.

MSRAM generally aligns with NIPP criteria for a complete risk assessment. In accordance with these criteria, MSRAM assesses risk using three main variables—consequence, vulnerability, and threat. MSRAM’s risk assessment methodology also follows the NIPP criteria for the factors that should be assessed in each of the three risk variables. Specifically, for threat, MSRAM generally follows the NIPP criteria by identifying attack methods that may be employed and by considering the adversary’s intent and capability to attack a target.
MSRAM generally follows the vulnerability assessment criteria by estimating the likelihood of an adversary’s success for each attack scenario and describing the protective measures in place, and it generally follows the consequence assessment criteria by estimating economic loss in dollars, estimating fatalities, and describing psychological impacts, among other things.

MSRAM’s risk assessment methodology also generally aligns with the NIPP criteria for a reproducible risk assessment. To be reproducible, the methodology must produce comparable, repeatable results and minimize the number and impact of subjective judgments, among other things. Although Coast Guard officials acknowledge that MSRAM risk data are inherently subjective, the MSRAM model and data collection processes include features designed to produce comparable, repeatable results across sectors. For instance, the Coast Guard prepopulates threat data into MSRAM from the Coast Guard’s ICC. This allows for nationally vetted threat scores that do not rely on multiple subjective local judgments. DHS, in its 2010 Transportation Systems Sector-Specific Plan, stated that MSRAM produces comparable, repeatable results.

The Coast Guard has taken numerous actions that contribute to MSRAM being a complete and reproducible risk assessment model. To improve the quality and accuracy of MSRAM data and reduce the amount of subjectivity in the MSRAM process, the Coast Guard conducts an annual review and validation of MSRAM data produced at each sector; provides MSRAM users with tools, calculators, and benchmarks to assist in calculating consequence and vulnerability; and provides training to sectors on how to enter data into MSRAM. Specific actions are detailed below.

Annual validation and review. The Coast Guard uses a multilevel annual validation and review process, which helps to ensure that MSRAM risk data are comparable and repeatable across sectors. According to a 2010 review of MSRAM, conducting a thorough review process across sectors is especially important if the data are to be used for national-level decision making. This process includes sector, district, area, and headquarters officials and aims to normalize MSRAM data by establishing national averages of risk scores for attack modes and targets and by identifying outliers. The annual MSRAM validation and review process begins with sectors completing vulnerability and consequence assessments for targets within their areas of responsibility. Once the sector Captain of the Port validates the assessments, the risk assessment data are sent to district and area officials for review. Following these reviews, Coast Guard headquarters officials combine each sector’s data into a national classified dataset and perform a statistical analysis of the data. The statistical analysis involves calculating national averages for vulnerability, consequence, and response capabilities risk scores. When determining whether a sector’s risk score for a specific target is questionable or is an outlier, reviewers consider the results of the statistical analysis as well as supporting comments or rationale provided by sector officials. According to the Coast Guard, for each outlier identified during the national review process, sector officials reconsider the data point in question and either change the inputs to reflect national averages or provide additional justification for why the risk score for the target in question should be outside of the national average.
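The outlier screening in this national review can be illustrated with a short, hypothetical sketch. The use of a mean and standard deviation (z-score) test, the 2.0 threshold, and all sector scores below are assumptions made for illustration; this statement does not specify the actual statistic the Coast Guard applies.

# Hypothetical sketch of the national-level statistical review described
# above: compute a national average for one target class and flag sector
# scores that deviate widely. The z-score test and the 2.0 threshold are
# illustrative assumptions, not the Coast Guard's actual method.

from statistics import mean, stdev

# Vulnerability scores submitted by sectors for the same target class
# (all values invented for illustration).
sector_scores = {
    "Sector A": 0.42,
    "Sector B": 0.38,
    "Sector C": 0.45,
    "Sector D": 0.40,
    "Sector E": 0.41,
    "Sector F": 0.39,
    "Sector G": 0.44,
    "Sector H": 0.85,  # candidate outlier
}

avg = mean(sector_scores.values())
sd = stdev(sector_scores.values())

for sector, score in sector_scores.items():
    z = (score - avg) / sd
    if abs(z) > 2.0:
        # A flagged score goes back to the sector, which either revises
        # the input toward the national average or documents a
        # justification for why the target should fall outside it.
        print(f"{sector}: {score:.2f} flagged as an outlier (z = {z:+.2f})")
    else:
        print(f"{sector}: {score:.2f} within the expected range")

As the process described above makes clear, a flagged score is a prompt for discussion and justification rather than an automatic correction.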
Headquarters officials explained that they generally accept justification for data outliers and that a goal of the review process is to spur discussions related to maritime risk rather than forcing compliance with national data averages. For example, officials from one sector told us that a small port in their sector is critical for their state’s energy imports, and accordingly, the port infrastructure is high risk on a national scale. The officials said that Coast Guard headquarters officials have questioned the relatively high risk rankings of the port’s infrastructure because they are statistical outliers, but have deferred to the expertise of the sector regarding the risk scores.

Tools and calculators. Recognizing that sector port security specialists who assess risk using MSRAM generally do not have expertise in all aspects of assessing vulnerability and consequence, the Coast Guard has regularly added new tools and calculators to MSRAM to improve the quality, accuracy, and consistency of vulnerability and consequence assessments. For example, MSRAM now includes a blast calculator that allows users to more easily determine the death and injury consequences of an explosion close to population centers. Officials from 29 sectors (82 percent of sectors) cited a variety of challenges with assessing vulnerability and consequence values in MSRAM, but officials from 10 sectors said that it was becoming easier to do over time, and officials from 14 sectors said that the tools and calculators in MSRAM have helped.

Benchmarks and recommended ranges. To limit inconsistencies caused by different judgments by individual MSRAM users and to minimize user subjectivity, the Coast Guard built into MSRAM a suggested range of scores for each risk factor—including vulnerability, consequence, and response capabilities—as well as averages, or benchmarks, of scores for each factor. The benchmarks are based on Coast Guard and expert evaluation of target classes and attack modes. The benchmarks and recommended ranges are reviewed and updated each year following the annual data revalidation cycle.

Training. The Coast Guard has also provided annual training for MSRAM users, including beginning, intermediate, and advanced courses intended to standardize the data entry process across the Coast Guard. Officials from 34 sectors (97 percent) reported finding the training moderately to very useful in terms of enhancing their ability to assess, understand, and communicate the risks facing their sectors. In 2011, Coast Guard headquarters also started providing live web-based training sessions on various MSRAM issues, such as resolving national review comments, to help sector staff gain familiarity with MSRAM’s features on an as-needed basis. In addition to MSRAM training provided by headquarters, one Coast Guard district official we spoke with had developed and provided localized training to sector-level port security specialists on assessing the vulnerability of chemical facilities. The district official told us that Coast Guard headquarters was interested in this local model for delivering training and was planning to pilot a similar training program in a different district.

MSRAM generally aligns with DHS’s criteria for a complete and reproducible risk assessment, but challenges remain with the MSRAM methodology and risk assessment process. The Coast Guard has acknowledged these challenges and limitations and has actions underway to address them and make MSRAM more complete and reproducible.
Coast Guard officials noted that some of these challenges are not unique to MSRAM and are faced by others in the homeland security risk assessment community. Specific challenges are detailed below. Data subjectivity. While the Coast Guard has taken actions to minimize the subjectivity of MSRAM data, officials acknowledged that assessing threat, vulnerability, and consequence is inherently subjective. To assess threat, the Coast Guard's ICC quantifies judgments related to the intent and capability of terrorist organizations to attack domestic maritime infrastructure. However, there are limited national historic data for domestic maritime attacks, and thus intelligence officials must make a number of subjective judgments and draw inferences from international maritime attacks. Further, GAO has previously reported on the inherently difficult nature of assessing the capability and intent of terrorist groups. Vulnerability and consequence assessments in MSRAM are also inherently subjective. For example, officials from 20 sectors we interviewed said that even with training, tools, and calculators, assessing consequences can be challenging and often involves subjectivity and uncertainty. Officials noted that assessing economic impacts—both primary and secondary—was particularly challenging because it required some level of expertise in economics—such as supply chains and industry recoverability—which port security specialists said is often beyond their skills and training. The input for secondary economic impacts can have a substantial effect on how MSRAM's output ranks a target relative to other potential targets. Undervaluing secondary economic impacts could result in a lower relative risk ranking that underestimates the security risk to a target, or inversely, overvaluing secondary economic impacts could result in overestimating the security risk to a target. Recognizing the challenges with assessing secondary economic impacts, Coast Guard officials said they are working with the DHS Office of Risk Management and Analysis to study ways to more accurately assess secondary economic impacts. Additionally, during the course of our review the Coast Guard implemented a tool called IMPLAN that has the potential to inform judgments of secondary economic impacts by showing what the impact could be for different terrorist scenarios. Limited time to complete assessments. Officials from 19 sectors (54 percent) told us that the lack of time to complete their annually required vulnerability and consequence assessments is a key challenge, and many believed that the quality of their sector's data suffered as a result. Each year, sectors are required to update and validate their risk assessments for targets in their areas of responsibility, which can involve site visits to port facilities and discussions with facility security officers to obtain information on vulnerability and consequences. Officials from a Gulf Coast sector noted that obtaining this information from facilities can be challenging because of the number of facilities in the sector and the time involved in meeting with each facility. Officials from an inland river sector also noted that gathering data from certain facilities—such as information on a chemical plant's security enhancements or the expected loss of life from a terrorist attack—is challenging because facilities may not want to share proprietary information that could be damaging in the hands of a competitor.
As a result, it often takes additional visits, phone calls, e-mails, and time to obtain this information. Officials from a northeastern sector said that having the people and time to update MSRAM data is their key challenge and that completing the update is a heavy lift because the update is required at the same time as several other requirements, such as reviewing investment justifications for the Port Security Grant Program. Coast Guard sector officials and one district official we spoke with reported raising concerns to headquarters about the time it takes to complete MSRAM assessments. Headquarters staff also said they were looking into additional ways to make the assessment process easier for sectors, such as providing job aids and examining the possibility of completing the data update at different times in the year. Limitations in modeling methodology—adaptive terrorist behavior. There are inherent limitations in the overall methodology the Coast Guard uses to model risk. For instance, MSRAM threat information does not account for adaptive terrorist behavior, which is defined by the National Research Council as an adversary adapting to the perceived defenses around targets and redirecting attacks to achieve its goals. Adaptive terrorist behavior could be modeled by making threat a function of vulnerability and consequence, rather than using the MSRAM formula, which treats threat, vulnerability, and consequence as independent variables (a simplified illustration of this distinction appears in the sketch following this discussion). Not accounting for adaptive terrorist behavior is a critique of MSRAM raised by terrorism risk assessment experts. For example, officials from the DHS Office of Risk Management and Analysis have stressed the need to account for adaptive terrorist behavior in risk models. In addition, DHS's 2011 Risk Management Fundamentals guidance states that analysts should be careful when calculating risk by multiplying threats, vulnerabilities, and consequences (as MSRAM does), especially for terrorism, because of interdependencies between the three variables. Coast Guard officials agreed with the importance of accounting for adaptive terrorist behavior and with the risks of treating threat, vulnerability, and consequence as independent variables. The officials explained that although they did not design MSRAM to account for adaptive terrorist behavior, they are working to develop the Dynamic Risk Management Model, which will potentially address this issue. Limitations in modeling methodology—network effects. MSRAM assesses the risk to individual targets and does not capture network effects, that is, the ripple effects that a successful attack on one target could have on networked systems throughout the port or local economy. (For more information on network effects, see Gerald G. Brown, W. Matthew Carlyle, Javier Salmerón, and Kevin Wood, Operations Research Department, Naval Postgraduate School, Analyzing the Vulnerability of Critical Infrastructure to Attack and Planning Defenses (Monterey, Calif.: 2005).) The Coast Guard has begun initiatives to identify and document networked systems of targets that, if successfully attacked, would have large ripple effects throughout the port or local economy. Coast Guard officials agreed that assessing network effects is a challenge, and they are examining ways to meet this challenge. However, the Coast Guard's work in this area is still in its infancy, and there is uncertainty regarding the way in which the agency will move forward in measuring network effects.
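The structural difference between the two formulations can be sketched in a few lines of Python. This is a simplified illustration, not MSRAM or the Dynamic Risk Management Model: the 0-to-1 scales, the rule that reallocates threat in proportion to a target's expected payoff, and both hypothetical targets are assumptions.

# Illustrative sketch only; scores, scales, and the adaptive-threat
# rule are assumptions chosen to show the structural difference.

def static_risk(threat, vulnerability, consequence):
    """MSRAM-style formulation: threat, vulnerability, and consequence
    are treated as independent inputs and multiplied."""
    return threat * vulnerability * consequence

def adaptive_risk(targets):
    """Adaptive formulation: the attacker is assumed to direct effort
    toward targets in proportion to their expected payoff
    (vulnerability x consequence), so threat becomes a function of
    vulnerability and consequence."""
    payoffs = {name: v * c for name, (v, c) in targets.items()}
    total = sum(payoffs.values())
    return {name: round((payoffs[name] / total) * payoffs[name], 3)
            for name in targets}

# Two hypothetical targets: (vulnerability, consequence) on 0-to-1 scales.
targets = {"ferry terminal": (0.8, 0.5), "chemical facility": (0.3, 0.9)}

# Static view: the same fixed threat score applies to both targets.
for name, (v, c) in targets.items():
    print(name, "static risk:", round(static_risk(0.5, v, c), 3))

# Adaptive view: hardening the ferry terminal (lowering its
# vulnerability) shifts the threat share toward the chemical facility.
print(adaptive_risk(targets))
print(adaptive_risk({"ferry terminal": (0.2, 0.5),
                     "chemical facility": (0.3, 0.9)}))

Under the static formulation, hardening the ferry terminal leaves the chemical facility's risk unchanged; under the adaptive formulation, its risk rises as the attacker's attention shifts, which is the interdependency the DHS guidance cautions about.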
MSRAM is generally documented and defensible, but the Coast Guard could improve its documentation of the model's assumptions and other sources of uncertainty, such as subjective judgments made by Coast Guard analysts about threats, vulnerabilities, and consequences, and how these assumptions and other sources of uncertainty affect MSRAM's results. The NIPP states that for a risk assessment methodology to be documented, any assumptions and subjective judgments need to be transparent to the individuals who are expected to use the results. For a risk assessment methodology to be defensible, uncertainty associated with consequence estimates and the level of confidence in the vulnerability and threat estimates should also be communicated to users of the results. There are multiple assumptions and other sources of uncertainty in MSRAM. For example, assumptions used in MSRAM include the particular dollar value for a statistical life and the assumed dollar amount of environmental damage resulting from oil or hazardous material spilled as the result of a terrorist attack. MSRAM also relies on multiple subjective judgments made by Coast Guard analysts, which means that there is a range of possible values for the risk calculated from the model. For example, to assess risk in MSRAM, Coast Guard analysts make judgments regarding such factors as the likelihood of success in interdicting an attack and the number of casualties expected to result from an attack. These subjective judgments are sources of uncertainty with implications that, according to the NIPP and risk management best practices, should be documented and communicated to decision makers. MSRAM's primary sources of documentation provide information on how data are used to generate a risk estimate and information on some assumptions, and the Coast Guard has made efforts to document and reduce the number of assumptions made by the field-level user in order to increase the consistency of MSRAM's data. For example, the MSRAM training and software manual states that MSRAM users are expected to specify the assumptions they make in evaluating various attack modes and provides assumptions for users to consider when scoring attack scenarios, such as specifying the type and amount of biological agent used in a biological attack scenario and assuming that attackers are armed and suicidal in a boat bomb attack scenario. While these documentation efforts are positive steps to reduce MSRAM data subjectivity and increase data consistency, we found that the Coast Guard has not documented all the sources of uncertainty associated with threat, vulnerability, and consequence assessments and what implications this uncertainty has for interpreting the results, such as an identification of the highest-risk targets in a port. As a result, decision makers do not know how robust the risk rankings of targets are and the degree to which a list of high-risk targets could change given the uncertainty in the risk model's inputs and parameters. Moreover, overlapping ranges of possible risk values caused by uncertainty could have implications for strategic decisions or resource allocation, such as allocating grant funding or targeting patrols. Such overlapping ranges also underscore the importance of professional judgment in decision making, because risk models do not produce precise outcomes that should be followed without a degree of judgment and expertise. According to the NIPP, the best way to communicate uncertainty will depend on the factors that make the outcome uncertain, as well as the amount and type of information that is available. The NIPP states that in any given terrorist attack scenario there is often a range of outcomes that could occur, such as a range of dollar amounts for environmental damage or a range of values for a statistical life.
For some incidents, the range of outcomes is small and a single estimate may provide sufficient data to inform decisions. However, if the range of outcomes is large, the scenario may require additional specificity about conditions to obtain appropriate estimates of the outcomes. Often, this means providing a range of possible outcomes rather than a single point estimate. Coast Guard officials agreed with the importance of documenting and communicating the sources and implications of uncertainty for MSRAM's risk estimates, and noted that they planned to develop this documentation as part of an internal MSRAM verification, validation, and accreditation (VV&A) process that they expect to complete in the fall of 2011. According to the Coast Guard, accreditation is an official determination that a model or simulation is acceptable to use for a specific purpose. While this accreditation process is expected to document the scope and limitations of MSRAM's capabilities and determine whether these capabilities are appropriate for MSRAM's current use, the Coast Guard's draft accreditation plan does not discuss how the Coast Guard plans to assess and document uncertainty in its model or communicate those results to decision makers. According to the National Research Council of the National Academies, in its Review of the Department of Homeland Security's Approach to Risk Analysis, peer review is a means by which modeling deficiencies can be addressed, and such reviews should address the structure of the model, the types and certainty of the data, and how the model is intended to be used. Peer reviews can also identify areas for improvement and can facilitate sharing best practices. As we have previously reported, external peer reviews cannot ensure the success of a model, but they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process. MSRAM has been reviewed twice—in 2010 by risk experts affiliated with the Naval Postgraduate School and, to a lesser extent, in 2009 by CREATE at the University of Southern California. The authors of the Naval Postgraduate School report stated that their review was intended to validate and verify the equations used in MSRAM, evaluate MSRAM's quality control procedures, and review the use of MSRAM outputs to manage risk. The authors of the CREATE report stated that their review focused on suggestions for improvement rather than a comprehensive evaluation, and they suggested that the Coast Guard continue to seek feedback and reviews from the risk and decision analysis community, as well as from practitioners of other disciplines. Coast Guard officials told us that they have generally benefited from reviews of MSRAM and have worked to implement many of the resulting recommendations. Officials noted they intend to pursue external reviews of MSRAM as part of the ongoing VV&A process, but they have not identified who would conduct the reviews or when the reviews would occur. As the Coast Guard's risk assessment model continues to evolve, the Coast Guard could benefit from periodic external peer review to ensure that the structure and outputs of the model are appropriate for its given uses and to identify possible areas for improvement. MSRAM is a security risk analysis and risk management tool, and the Coast Guard intends for it to be used to inform risk management decisions at all levels of command. As such, in a May 2011 guidance document, the Coast Guard set expectations for how MSRAM should be used at the national and sector levels.
At the national level, the Coast Guard expects its offices to use MSRAM to support strategic plans, policy, and guidance; to integrate MSRAM into maritime security programs; and to ensure that sectors have adequate personnel ready to perform MSRAM duties, among other goals. One such program is the Maritime Security Response Operations (MSRO) program, whose operational activities include conducting boat escorts, implementing positive control measures—that is, stationing armed Coast Guard personnel in key locations aboard a vessel to ensure that the operator maintains control—and providing a security presence through various actions. By identifying the nation's highest-risk maritime targets, MSRAM helps establish the national maritime critical infrastructure and key resources (MCIKR) list, which sectors use to complete their annually required number of MCIKR visits. According to Coast Guard officials, MSRAM has aided in reducing the MCIKR list from 740 assets to 324 assets and allowed the Coast Guard to further prioritize within that more focused list of 324, since MSRAM analysis demonstrated that a small number of assets make up the majority of the nation's risk. MSRAM has also been used as a tool to inform resource allocation and performance measurement, which is consistent with the Coast Guard's goals for MSRAM. For instance, risk-informed methods and processes or models, such as MSRAM, are used in the Coast Guard's annual Standard Operational Planning Process, which establishes a standardized process to apportion major assets, such as boats, aircraft, and deployable specialized forces. Coast Guard officials said that MSRAM data support the PWCS mission in this process by demonstrating how risk is distributed geographically. In addition, the Coast Guard used MSRAM to support a funding request for boats, personnel, and associated support costs to assist with Coast Guard efforts to reduce the risk of certain dangerous cargoes by escorting ships passing through coastal ports carrying cargoes such as liquefied natural gas. MSRAM also supports resource allocation through the Port Security Grant Program by informing the risk formula used by DHS to allocate grant funding. MSRAM data are also used in the Coast Guard's model for measuring its performance in the PWCS mission, which is discussed in depth later in this report. MSRAM has also supported strategic documents and efforts throughout DHS. Specifically, the Coast Guard reported that MSRAM data are an essential building block for a number of key strategic documents, such as the National Maritime Strategic Risk Assessment, the National Maritime Terrorism Threat Assessment, and the Combating Marine Terrorism Strategic and Performance Plan, among others. In addition, the Coast Guard uses MSRAM, among other inputs, to provide DHS with maritime risk information for the Transportation Sector Security Risk Assessment tool. DHS also reported that the Coast Guard has shared MSRAM-based identification of critical assets beyond the transportation system with 13 of the 18 DHS critical infrastructure and key resource sectors. For example, MSRAM has been used to assess the risk of some chemical facilities and power plants. MSRAM has been used to inform a variety of efforts at the sector level, such as strategic planning, communication with port stakeholders, and operational and tactical decision making, but its use for operational and tactical risk management efforts has been limited by a lack of staff time, the complexity of the MSRAM tool, and competing mission demands, among other factors.
The Coast Guard expects its 35 sectors, with support from its nine districts, to integrate MSRAM data into strategic, operational, and tactical plans, operations, and programs as necessary and required, among other actions. Based on results from our interviews with officials from all 35 Coast Guard sectors, officials from 26 sectors (74 percent) reported finding MSRAM moderately to very useful for informing strategic planning, which includes developing portions of local Area Maritime Security Plans and planning security exercises. (Area Maritime Security Plans have been established pursuant to the Maritime Transportation Security Act of 2002. Content requirements for the plans were established by 33 C.F.R. § 103.505 and expanded by the Security and Accountability For Every Port (SAFE Port) Act of 2006 to include a Salvage Response Plan. The plans are intended to sponsor and support engagement with port community stakeholders to develop, test, and when necessary, implement joint efforts for responding to and mitigating the effects of a maritime transportation security incident.) For example, officials from one sector reported using MSRAM to find the highest-risk areas in which to conduct exercises. Further, lessons learned from the exercises are incorporated into strategic plans, which officials said leads to planning process improvements and overall better plans. However, officials from a southeastern sector pointed out that MSRAM is a snapshot view of port risk and therefore long-term strategic plans require additional information from many sources. For communicating risk to port stakeholders, officials from many sectors said that MSRAM was moderately to very useful. For instance, officials from a southeastern sector said that MSRAM is used to communicate and justify additional security procedures. Further, during annual compliance inspections, MSRAM data are discussed with facility security officers and compared to security data that the facility security officers have calculated. In addition, officials from a Gulf Coast sector reported that MSRAM provides a convenient, objective way to communicate risk to port security stakeholders, and stakeholders appreciate that risk information from MSRAM is computer driven and based on a rigorous process. For informing sector operational and tactical decision making, such as planning MSRO activities, developing local critical infrastructure lists, and planning for special events, officials from 18 sectors (51 percent) reported that MSRAM moderately or greatly provided them with the information needed to make risk-informed decisions regarding port security. Regarding planning MSRO activities, one eastern sector reported that MSRAM was very helpful for identifying priority targets for MSRO patrols and escorts. Regarding developing local critical infrastructure lists, officials from an eastern sector said that since the sector has no assets on the national MCIKR list, they were able to use MSRAM to generate a local list to help determine patrols and other security efforts. Regarding special event planning, officials from 16 sectors (45 percent) told us they used MSRAM to determine where to allocate resources for special events, such as the Fourth of July, dignitary visits, or political conventions. For example, officials from an inland river sector said that they used MSRAM to identify possible attack scenarios and to help identify what security resources they should request to provide security for a special event. See figure 1 for photographs of various Coast Guard security-related activities that can be informed by MSRAM.
In addition to using MSRAM to inform maritime security decisions, officials from almost every sector noted that they also assess and manage risk using other tools or methods, such as the High Interest Vessel matrix, outreach to port partners, working relationships with Area Maritime Security Committees, or professional judgment. Although officials from most sectors found that MSRAM provided useful risk information for sector-level decision making, officials from 32 sectors (91 percent) reported that their overall use of MSRAM data in managing risk was hindered by a lack of staff time for data analysis, the complexity of the MSRAM tool, or competing mission demands, among other things. These challenges are discussed below. Limited staff time for analyzing and using MSRAM. Officials from 21 sectors (60 percent) told us that limited staff time posed a challenge to incorporating MSRAM into strategic, operational, and tactical planning efforts. For example, officials from a northeastern sector said that a lack of available staff time was one of the most significant limitations to utilizing MSRAM. These officials stated that they would like to have dedicated MSRAM personnel to develop the tool and make it useful on a daily basis. They added that even though MSRAM had many capabilities, they were unable to use it to its full capability because their port security specialist—the primary user of MSRAM—was busy with other programs, such as the Port Security Grant Program. Each of the port security specialists from the three districts we interviewed—which encompass 15 sectors over the West Coast, East Coast, Gulf Coast, and Mississippi River area—echoed the challenges with the level of sector resources for MSRAM. For example, one district official stated that although Coast Guard headquarters has dedicated MSRAM staff, there are no full-time MSRAM analysts at the sector level. He added that each sector would need a dedicated person for MSRAM and risk analysis to bring MSRAM analysis into operational and tactical decision making. Complexity of the MSRAM tool. Officials from 14 sectors (40 percent) reported that MSRAM use has been limited because data outputs require a substantial degree of analysis to use in decision making, or because the MSRAM tool itself is not easy to use. Some of the challenges raised by sectors that contribute to the complexity of the tool and interpreting its outputs included keeping abreast of yearly changes to the MSRAM tool and bridging knowledge gaps that occur when staff familiar with MSRAM rotate or leave the sector. In its MSRAM core document, the Coast Guard recognized that the frequent rotation of active duty personnel presents a risk to both the consistency of the MSRAM risk scoring efforts and the application of risk results. Competing mission demands and resource constraints. Officials from 14 sectors (40 percent) reported that competing mission demands or resource constraints limited the use of MSRAM. Specifically, officials from 11 sectors reported that MSRAM’s usefulness was limited by the fact that it only considers risk in the PWCS mission, which is 1 of the Coast Guard’s 11 statutorily required missions. For example, a Great Lakes sector told us that while MSRAM identifies the risks in the sector, the sector is limited in its ability to move assets to address those security risks because the assets are also fulfilling other Coast Guard mission requirements, such as search and rescue. 
Additionally, officials from 6 sectors said that limited resources, such as boats or personnel, constrained their sectors' ability to address the risks identified by MSRAM. For example, officials from 2 inland river sectors said that MSRAM identifies their security risks and demonstrates where they should patrol and plan for special events, but that they do not have the resources to carry out the plans. Further, officials from 1 of the inland river sectors added that their response boats are often busy escorting the Army Corps of Engineers or engaged in flood relief efforts. This leaves the work of security patrols to the local harbor patrol, which the officials said does not have the same capabilities, in terms of boats and weapons, as the Coast Guard. Other challenges. Sector officials also identified other challenges with using MSRAM for informing decision making. Specifically, officials from 16 sectors (45 percent) said that MSRAM would be more useful if it were linked to other Coast Guard data systems, such as the Coast Guard's inspections database, or if MSRAM were integrated into the sector command center. For example, officials from an East Coast sector told us that they would like to see MSRAM linked to other databases in the sector command center, such as the Coast Guard's vessel tracking system. Similarly, officials from a West Coast sector said that integrating MSRAM into the Coast Guard's inspections database would keep MSRAM continually updated and reflective of inspection results. Further, the command center has to consider other mission response needs, such as pollution incidents or search and rescue, among others, and if MSRAM were integrated into the sector command center, it could be used more in day-to-day operations. In addition, officials from 5 sectors noted that MSRAM does not capture dynamic risk, which limits its ability to inform daily decisions at the sector level. For instance, officials from a Gulf Coast sector said that they did not use MSRAM on a daily basis to allocate resources because daily fluctuations in vessel and barge risk are their greatest concern and this risk is not currently captured in MSRAM. The sectors that raised these issues believed that linking MSRAM to other data systems, integrating MSRAM into the command center, and having MSRAM account for dynamic risks could make its data more accurate, robust, and useful for decision making. Coast Guard headquarters officials told us that they were aware of the challenges field-level MSRAM users were facing and have taken some steps to address them, but providing additional training could help integrate MSRAM throughout sector decision making. The Coast Guard's current actions to address MSRAM user challenges include assessing the feasibility of adding additional risk analyst staff, increasing the data's usability, developing decision-supporting modules, and providing training. These actions are described below. Examining the feasibility of dedicated risk analysts. Presently, there is no dedicated risk analyst or MSRAM analyst position at the sector level, but headquarters officials told us in June 2011 that they are examining the feasibility of assigning additional port security specialists to the field and submitted a resource proposal for the additional staff. According to a senior Coast Guard budget official, given competing priorities and a constrained resource environment, it is unclear when or if this resource proposal will be funded.
Deploying MSRAM to sector command centers. To help make MSRAM more dynamic and increase its usability, the Coast Guard is piloting an Enterprise Geographic Information System (EGIS) display for sector command centers, which layers facility and vessel locations onto a satellite-based map and visually displays changing risk as vessels move into and out of ports. Officials from 7 sectors that participated in or were familiar with the initial EGIS test group reported that the functionality was very useful and had the potential to substantially increase MSRAM’s use for sector risk management efforts. In addition, headquarters officials told us in June 2011 that efforts were under way to integrate MSRAM into the Coast Guard’s inspections database, which would allow MSRAM to be continually updated and reflective of year-round facility and vessel inspection results. Developing risk management modules. To assist with incorporating risk assessment information into decision making, in the fall of 2008, the Coast Guard began developing risk management modules within MSRAM that are able to provide specific types of analyses, such as comparing alternative security strategies. We asked officials from all 35 sectors their views on four modules—the Alternatives Evaluation Module, the Simplified Reporting Interface, the Daily Risk Profile, and the Risk Management Module. Sectors had mixed views on the utility of these modules. Specifically, officials from 14 sectors (40 percent) found the Alternatives Evaluation module very useful and cited such uses as evaluating Port Security Grant Program proposals and planning security for special events, and officials from 15 sectors (42 percent) found the Simplified Reporting Interface very useful for communicating risk information to port partners. However, with respect to the other two modules—the Daily Risk Profile and Risk Management Module—officials from 2 sectors (5 percent) found the Daily Risk Module very useful and officials from 3 sectors (8 percent) found the Risk Management Module very useful. For both modules, officials from 18 sectors (51 percent) reported that either they had not seen them or they were aware of the modules but did not have the time or training, among other reasons, to use them. Many of the modules are new and headquarters and some sector officials reported that they expected the modules would be more useful in the future as sectors gained familiarity with them through additional exposure and the annual MSRAM training. Providing training. While the Coast Guard offers annual MSRAM training, officials from 25 sectors (71 percent) identified areas of the training for improvement, which the Coast Guard could do more to address. Specifically, officials from these sectors said that increasing the number of people who take MSRAM training, providing MSRAM training to command-level staff or senior management, and offering training on how to conduct risk analysis to inform decision making, among other things, would help integrate MSRAM throughout sector decision- making processes. Since MSRAM is a collateral duty, MSRAM training is not part of any Coast Guard personnel’s required training curriculum. However, Coast Guard guidance from May 2011 states that area, district, and sector commanders are responsible for ensuring that adequate numbers of appropriate personnel are trained in MSRAM. Only one sector did not, at the time of our interview, have at least one staff person trained in MSRAM. 
Officials from a Gulf Coast sector said that the training provided on the MSRAM tool itself is good, but the training does not teach the skills needed to make decisions in the field. Officials from a Great Lakes sector suggested that the Coast Guard develop an advanced course on how to use MSRAM to inform operational decisions. Officials from a southeastern sector added that the Coast Guard provides guidance on how to assess risks using MSRAM, but needs to provide more training on how to communicate MSRAM results and how those results can be used. In addition, a sector commanding officer who participated in one of our interviews told us that he was provided minimal training on MSRAM and wanted to understand more about how it can be used to support command-level decisions. MSRAM has the capability of informing operational, tactical, and resource allocation decisions at all levels of a sector, but the Coast Guard has generally provided MSRAM training to a limited number of sector staff with specific MSRAM risk assessment responsibilities, such as port security specialists, rather than sector staff who may have command or management responsibilities where MSRAM may apply. Coast Guard headquarters officials said that this was because of limited resources to provide training for numerous sector personnel and variations in how MSRAM responsibilities are managed at different sectors. Standards for Internal Control in the Federal Government states that effective management of an organization’s workforce is essential to achieving results. Further, only when the right personnel for the job are on board and are provided the right training and tools, among other things, is operational success possible. To this end, management should ensure that training is aimed at developing and retaining employee skill levels to meet changing organizational needs. Coast Guard headquarters officials agree that providing MSRAM training to additional sector staff, particularly those with command and management responsibilities, would be valuable. Such training on how MSRAM can be used at all levels of command for risk-informed decision making—including how MSRAM can assist with the selection of different types of security measures to address areas of risk and the evaluation of their impacts—could further the Coast Guard’s efforts to implement its risk management framework and meet its goal to institutionalize MSRAM as the risk management tool for maritime security. The Coast Guard developed a performance measure and supporting model to measure and report its overall performance in reducing maritime security risk. This measure identifies the percentage reduction of maritime security risk, subject to Coast Guard influence, resulting from various Coast Guard actions. The Coast Guard considers this performance measure its key outcome measure for its PWCS mission. According to DHS’s Risk Management Fundamentals and the NIPP, it is crucial that a process of performance measurement be established to evaluate whether actions taken ultimately achieve the intended performance objective, such as reducing risk. This is important not only in evaluating program performance but also in holding the organization accountable for progress. We have also previously reported on the importance of developing outcome-based performance goals and measures as part of results management efforts. 
From fiscal years 2006 to 2010, the Coast Guard annually reported reducing between 15 and 31 percent of the maritime security risk for which it is responsible, in each year either meeting or exceeding its target. For fiscal years 2011 and 2012, the Coast Guard's planned performance targets are to reduce more than 44 percent of the maritime security risk for which it is responsible. To measure how its actions have reduced risk, the Coast Guard developed a model that uses a two-step approach. The first step is to estimate the total amount of terrorism risk that exists in the maritime domain, in the absence of any Coast Guard activities. This is referred to as raw risk, and this information comes primarily from MSRAM. The second step relies on an elicitation process whereby Coast Guard subject matter experts estimate how various security activities and operations, maritime domain awareness programs, and regulatory structures—referred to by the Coast Guard as regimes—that the Coast Guard has implemented have reduced risk to U.S. ports and waterways. This step involves Coast Guard subject matter experts assessing the probability of these Coast Guard efforts failing to prevent a successful terrorist attack for 16 potential maritime terrorist attack scenarios. (According to DHS's Risk Management Fundamentals, elicitations involve using structured questions to gather information from individuals with in-depth knowledge of specific areas or fields.) Information also comes from DHS's Risk Analysis Process for Informed Decision Making (RAPID) project, which is designed to provide strategic planning guidance and support resource allocation decisions at the DHS level. A simplified sketch of this two-step calculation follows this discussion. Unlike some other Coast Guard missions, such as search and rescue, there is not a rich historical data set of maritime terrorism incidents that the Coast Guard can use to measure its actual performance. In other words, in the absence of an actual domestic maritime terrorism event, the Coast Guard uses internal subject matter experts to estimate risk reduction as a proxy measure of performance—an attempt to measure performance against a terrorism incident that did not occur. The Coast Guard's efforts to develop an outcome measure to quantify the impact its actions have had on risk are a positive step. However, the use of the measure has been limited, and even with recent improvements, the Coast Guard faces challenges using this measure to inform decision making. Performance goals and measures are intended to provide Congress and agency management with information to systematically assess a program's strengths, weaknesses, and performance. Thus, measures should provide information for management decision making. Coast Guard officials explained that the primary purpose of the risk reduction measure has been for external performance reporting, and to a more limited extent for informing strategic decision making and for conducting internal analysis of performance to identify areas for improvement. Specifically, officials said the measure has been used to compare risk across maritime terrorism scenarios and compare those results to other studies and analysis on maritime terrorism scenarios, which provided information on whether PWCS activities were appropriately balanced to address those risks. However, Coast Guard officials stated that over time, internal and external reviews identified limitations in the risk reduction measure, such as not allowing for comparisons of performance across sectors.
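The two-step structure described above can be made concrete with a small numerical sketch. All scenario names, raw risk values, failure probabilities, and the simple aggregation rule below are hypothetical; the Coast Guard's actual model uses 16 scenarios and formally elicited expert judgments.

# Illustrative sketch of the two-step risk reduction calculation.
# All numbers and scenario names are hypothetical.

# Step 1: raw risk per attack scenario (in the actual model, estimated
# from MSRAM in the absence of any Coast Guard activity). Units are
# notional risk points.
raw_risk = {"boat bomb": 400.0, "sabotage": 250.0, "hijacking": 150.0}

# Step 2: elicited probability that Coast Guard regimes, awareness
# programs, and operations fail to prevent each attack.
p_failure = {"boat bomb": 0.60, "sabotage": 0.75, "hijacking": 0.55}

residual = {s: raw_risk[s] * p_failure[s] for s in raw_risk}
total_raw = sum(raw_risk.values())
total_residual = sum(residual.values())
reduction_pct = 100 * (1 - total_residual / total_raw)
print(f"Estimated risk reduced by Coast Guard actions: {reduction_pct:.1f}%")

# Because the failure probabilities are subjective, a range is more
# informative than a point estimate (a point discussed later in this
# report). Here the elicited probabilities are shifted by +/-0.10:
low = 100 * (1 - sum(raw_risk[s] * min(p_failure[s] + 0.10, 1.0)
                     for s in raw_risk) / total_raw)
high = 100 * (1 - sum(raw_risk[s] * max(p_failure[s] - 0.10, 0.0)
                      for s in raw_risk) / total_raw)
print(f"Plausible range under that uncertainty: {low:.1f}% to {high:.1f}%")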
Recognizing these limitations, in 2010, the Coast Guard made improvements to the risk reduction model intended to enhance its utility for management decision making and to provide a more accurate measure of risk reduction. For example, the updated model includes information on the locations of Coast Guard assets and potential targets, which can be used to calculate the probability that Coast Guard assets will be able to intercept attacks (a simplified illustration of one such calculation follows this discussion). The Coast Guard also improved the elicitation techniques by which subject matter experts provided their estimates of Coast Guard risk reduction performance, and expanded the size and diversity of the subject matter experts involved in the elicitation process. (According to the Coast Guard, a total of 26 subject matter experts, mostly from headquarters, participated in 2009; in 2010, a total of 46 subject matter experts participated, coming from headquarters, areas, districts, sectors, and operational units.) According to Coast Guard officials, these improvements have made the measure and supporting model more useful for informing strategic decisions by allowing, for example, the ability to calculate risk reduction at the sector, district, area, and national levels and the risk reduction value of each element of the Coast Guard's strategy. In other words, the updated model is able to show the risk reduction value of Coast Guard operational assets, such as small boats or helicopters, compared with regime activities, such as regulation enforcement. This information can help inform resource allocation decisions because it could identify which actions provide the greatest risk reduction, according to these officials. The Coast Guard plans to use the updated model to measure its performance in reducing risk for the 2011 fiscal year. However, challenges remain in using the measure to inform decision making. For example, given the inherent uncertainties in estimating risk reduction, it is unclear if a measure of risk reduction would provide meaningful performance information for tracking progress against goals and performance over time. According to our performance measurement criteria, to be able to assess progress toward the achievement of performance goals, the measures used must be reliable and valid. Reliability refers to the precision with which performance is measured, while validity is the extent to which the measure adequately represents actual performance. Therefore, the usefulness of agency performance information depends to a large degree on the reliability of performance data. We have also reported that decision makers must have assurance that the program data being used to measure performance are sufficiently reliable and valid if the data are to inform decision making. Although the Coast Guard has taken steps to improve the quality of the supporting model to provide a more accurate measure, estimating risk reduction is inherently uncertain and this measure is based on largely subjective judgments of Coast Guard personnel, and therefore the risk reduction results reported by the Coast Guard are not based on measurable or observable activities. As a result, it is difficult to independently verify or assess the validity or appropriateness of the judgments or to determine if this is an accurate measure of Coast Guard performance in the PWCS mission. However, Coast Guard officials told us that they believe these reported results provide a useful proxy measure of Coast Guard performance, and noted that this is one of several metrics the Coast Guard uses to assess performance in the PWCS mission.
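As noted above, the updated model uses the locations of Coast Guard assets and potential targets to calculate the probability of intercepting attacks. The following sketch shows one simple way such a calculation could work: comparing a response asset's time to reach a target with the threat's time to reach it. The geometry, speeds, and linear scoring rule are assumptions, not the Coast Guard's actual method.

# Hedged illustration of a location-based interception calculation.
# Positions, speeds, and the scoring rule are assumptions.
import math

def intercept_probability(asset_pos, target_pos, threat_pos,
                          asset_speed_kts, threat_speed_kts):
    """Return a rough interception probability based on whether a
    response asset can reach the target before the threat does,
    scaled down as the time margin shrinks."""
    def dist(a, b):  # straight-line distance in nautical miles
        return math.hypot(a[0] - b[0], a[1] - b[1])

    t_asset = dist(asset_pos, target_pos) / asset_speed_kts   # hours
    t_threat = dist(threat_pos, target_pos) / threat_speed_kts
    if t_asset >= t_threat:
        return 0.0  # the asset cannot arrive before the threat
    return min(1.0, (t_threat - t_asset) / t_threat)

# Hypothetical layout on a nautical-mile grid: a patrol boat 5 nm from
# a waterfront target, a threat vessel approaching from 8 nm out.
p = intercept_probability(asset_pos=(0, 5), target_pos=(0, 0),
                          threat_pos=(8, 0),
                          asset_speed_kts=30, threat_speed_kts=20)
print(f"Estimated intercept probability: {p:.2f}")  # 0.58 in this layout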
According to DHS's Risk Management Fundamentals, it is also important to be transparent about assumptions and key sources of uncertainty, so that decision makers are informed of the limitations of the risk information provided by the model. In its 2009 review of the risk reduction model, CREATE at the University of Southern California stated that it seemed likely that the model ignored important uncertainties and implied incorrectly high precision of risk estimates. Furthermore, OMB's Updated Principles for Risk Analysis notes that because of the inherent uncertainties associated with estimates of risk, presentation of a single risk estimate may be misleading and provide a false sense of precision. OMB suggests that when a quantitative characterization of risk is provided, a range of plausible risk estimates should also be provided. From fiscal years 2006 to 2010, the Coast Guard reported the risk reduction measure as a specific risk reduction number rather than as a range of plausible risk reduction estimates. The Coast Guard official responsible for this measure told us this was because the previous risk reduction model was not capable of producing a range of plausible risk reduction estimates. The official noted that while the new risk reduction model—which will be used to report results for fiscal year 2011—is capable of producing a range of estimated risk reduction, the Coast Guard will continue to report the risk reduction measure as a single number because the DHS data system for performance reporting does not accept ranges—only numerical values. However, the official added that there is value in reporting a range of risk reduction, and officials are considering a transition to a range of estimated reduction for the PWCS mission in future years. One alternative could be to report the percentage of risk reduced as a single number, but with an explanatory note indicating the range of plausible risk reduction estimates. Using a risk reduction measure that more accurately reflects performance effectiveness can give Coast Guard leaders and Congress a better sense of progress toward goals, which can support efforts to identify areas for improvement. DHS officials have also raised some questions about the risk reduction measure. Recently, DHS determined that the Coast Guard's risk reduction measure was not appropriate for inclusion as a DHS strategic performance measure and designated it as a management measure. According to DHS, a strategic measure is designed to communicate achievement of strategic goals and objectives and be readily understandable to the public, and a management measure is designed to gauge program results, tie to resource requests, and support achievement of strategic goals. According to a senior DHS official, in 2010, DHS leadership reviewed all existing department measures and made decisions about which measures they believed were clearly tied to the DHS Quadrennial Homeland Security Review missions and were easily understandable by the public. This official noted that based on this review, DHS leadership did not feel the risk reduction measure and its methodology would be easily understandable by the public and therefore did not designate the measure as a strategic measure. As a result, the risk reduction measure will not be included in DHS's annual performance plan, formally published with the Annual Performance Report, because this report only includes the smaller set of strategic measures.
However, this official noted that the risk reduction measure is important as one piece of information to manage risk and is considered to be part of the full suite of DHS performance measures, and will continue to be published in the Coast Guard’s strategic context that is submitted with DHS’s Annual Performance Report. The Coast Guard has invested substantial effort incorporating risk management principles into its security priorities and investments, and continues to proactively strengthen its assessment, management, and evaluation practices. As a result, the Coast Guard’s risk assessments and risk model are generally sound and in alignment with DHS standards. However, there are some additional actions that the Coast Guard could take to further its risk management approach by facilitating a wider use of risk information and making the results more valuable to the users. For example, since risk management is a tool for informing policymakers’ decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty, the Coast Guard could better document and communicate the uncertainty or confidence levels of its risk assessment results, including any implications that the uncertainty may have for decision makers. This added information would allow Coast Guard decision makers to prioritize strategies, tactics, and long-term investments with greater insight about the range of likely results and associated trade-offs with each decision. Additional information would also allow external reviewers of the risk model to reach the most appropriate conclusions or provide the most useful improvement recommendations through periodic reviews. The Coast Guard could also enhance the risk-informed prioritization of its field-level strategies, operations, and tactics by ensuring that risk management training is expanded to multiple levels of Coast Guard decision makers at the sector level, including command-level personnel. Expanding training on how MSRAM could be used at all levels of command for risk-informed decision making—including how MSRAM can assist with the selection of different types of security measures and the evaluation of their impacts—would further the Coast Guard’s efforts to implement its risk management framework and meet its goal of institutionalizing MSRAM as the risk management tool for maritime security. Finally, accurately representing performance results is important and the Coast Guard could more accurately convey its risk reduction performance measure by reporting risk reduction results as a range rather than a point estimate. Presenting risk reduction as a single number without a corresponding range of uncertainty could hamper Coast Guard efforts to identify areas for improvement. Taking these steps would make the Coast Guard’s risk management approach even stronger. To help the Coast Guard strengthen MSRAM and better align it with NIPP risk management guidance, as well as facilitate the increased use of MSRAM across the agency, we recommend that the Commandant of the Coast Guard take the following three actions: (1) Provide more thorough documentation related to key assumptions and sources of uncertainty within MSRAM and inform users of any implications for interpreting the results from the model. (2) Make MSRAM available to appropriate parties for additional external peer review. 
(3) Provide additional training for sector command staff and others involved in sector management and operations on how MSRAM can be used as a risk management tool to inform sector-level decision making. To improve the accuracy of the risk reduction measure for internal and external decision making, we recommend that the Commandant of the Coast Guard take action to report the results of the risk reduction measure as a range rather than a point estimate. We provided a draft of this report to DHS and the Coast Guard on October 17, 2011, for review and comment. DHS provided written comments, which are reprinted in appendix II. DHS and the Coast Guard concurred with the findings and recommendations in the report, and stated that the Coast Guard is taking actions to implement our recommendations. The Coast Guard concurred with our first recommendation that it provide more thorough documentation related to key assumptions and sources of uncertainty within MSRAM. Specifically, the Coast Guard stated that the documentation of uncertainty is part of the ongoing MSRAM VV&A process, and that the Coast Guard will continue to work with the DHS Office of Risk Management and Analysis in developing a feasible and deployable model that will benefit field-level security operations. These actions should improve the Coast Guard's ability to document and inform MSRAM users of any implications for interpreting results from the model, thereby addressing the intent of our recommendation. Regarding the second recommendation that the Coast Guard make MSRAM available to appropriate parties for additional external peer review, the Coast Guard concurred. The Coast Guard stated that external peer review is part of the ongoing MSRAM VV&A process, and that additional external peer review will be part of an independent verification and validation of MSRAM expected to be completed in the fall of 2012. Such actions should address the intent of the recommendation. Regarding the third recommendation that the Coast Guard provide additional training for sector command staff and others involved in sector management on how MSRAM can be used as a risk management tool, the Coast Guard concurred. Specifically, the Coast Guard stated that MSRAM is part of the Coast Guard's contingency planning course, and that the Coast Guard will explore other opportunities to provide risk training to sector command staff, including online and webinar training opportunities. Such actions, once implemented, should address the intent of the recommendation. Finally, the Coast Guard also concurred with the fourth recommendation to take action to report the results of the risk reduction measure as a range rather than a point estimate. The Coast Guard stated that it is currently limited by the DHS data reporting system with regard to the format of presenting performance targets and results, but noted that it is currently working with DHS to determine options for reporting risk as a range. Such action, when fully implemented, should address the intent of the recommendation. DHS and the Coast Guard also provided us with technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-9610 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix III. To provide guidance to agency decision makers, we developed a risk management framework, which is intended to be a starting point for applying risk-informed principles. Our risk management framework, shown in figure 2, entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. Setting strategic goals, objectives, and constraints is a key first step in applying risk management principles and helps to ensure that management decisions are focused on achieving a purpose. Risk assessment, an important element of a risk-informed approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. Risk assessment is a qualitative determination, a quantitative determination, or both, of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application involves assessing three key components—threat, vulnerability, and consequence. A threat assessment is the identification and evaluation of adverse events that can harm or damage an asset. A vulnerability assessment identifies weaknesses in physical structures, personal protection systems, processes, or other areas that may be exploited. A consequence assessment is the process of identifying or evaluating the potential or actual effects of an event, incident, or occurrence. Information from these three assessments contributes to an overall risk assessment that characterizes risks, which can provide input for evaluating alternatives and prioritizing security initiatives. The risk assessment element in the overall risk management cycle informs each of the remaining steps of the cycle. Alternatives evaluation addresses the evaluation of risk reduction methods by considering countermeasures or countermeasure systems and the costs and benefits associated with them. Management selection addresses such issues as determining where resources and investments will be made, the sources and types of resources needed, and where those resources would be targeted. The next phase in the framework involves the implementation of the selected countermeasures. Following implementation, monitoring is essential to help ensure that the entire risk management process remains current and relevant and reflects changes in the effectiveness of the alternative actions and the risk environment in which it operates. Program evaluation is an important tool for assessing the efficiency and effectiveness of the program. As part of monitoring, consultation with external subject area experts can provide a current perspective and an independent review in the formulation and evaluation of the program.
The National Infrastructure Protection Plan (NIPP), originally issued by the Department of Homeland Security (DHS) in 2006 and updated in 2009, includes a risk analysis and management framework, which, for the most part, mirrors our risk management framework. This framework includes six steps—set goals and objectives; identify assets, systems, and networks; assess risks; prioritize; implement programs; and measure effectiveness. The NIPP is DHS's base plan that guides how DHS and other relevant stakeholders should use risk management principles to prioritize protection activities. In 2009, DHS updated the NIPP to, among other things, increase its emphasis on risk management, including an expanded discussion of risk management methodologies and discussion of a common risk assessment approach that provided core criteria for these analyses. Beyond the NIPP, DHS has issued additional risk management guidance and directives. For example, in January 2009 DHS published its Integrated Risk Management Framework, which, among other things, calls for DHS to use risk assessments to inform decision making. In April 2011, DHS issued its Risk Management Fundamentals, which establishes specific doctrine and guidance for risk management across DHS. In addition to the contact named above, Dawn Hoff, Assistant Director, and Adam Hoffman, Analyst-in-Charge, managed this assignment. Chuck Bausell, Charlotte Gamble, and Grant Sutton made significant contributions to this report. Colleen McEnearney provided assistance with interviews and data analysis. Michele Fejfar assisted with design, methodology, and data analysis. Jessica Orr provided assistance with report development, and Geoff Hamilton provided legal assistance. Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011. Maritime Security: Progress Made but Further Actions Needed to Secure the Maritime Energy Supply. GAO-11-883T. Washington, D.C.: August 24, 2011. Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. Maritime Security: Varied Actions Taken to Enhance Cruise Ship Security, but Some Concerns Remain. GAO-10-400. Washington, D.C.: April 9, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. Transportation Security: Comprehensive Risk Assessments and Stronger Internal Controls Needed to Help Inform TSA Resource Allocation. GAO-09-492. Washington, D.C.: March 27, 2009. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Managing for Results: Challenges Agencies Face in Producing Credible Performance Information. GAO/GGD-00-52. Washington, D.C.: February 4, 2000. The Results Act: An Evaluator's Guide to Assessing Agency Annual Performance Plans.
GAO/GGD-10.1.20. Washington, D.C.: April 1998.
Since the terrorist attacks of September 11, 2001, the nation's ports and waterways have been viewed as potential targets of attack. The Department of Homeland Security (DHS) has called for using risk-informed approaches to prioritize its investments and for developing plans and allocating resources that balance security and the flow of commerce. The U.S. Coast Guard--a DHS component and the lead federal agency responsible for maritime security--has used its Maritime Security Risk Analysis Model (MSRAM) as its primary approach for assessing and managing security risks. GAO was asked to examine (1) the extent to which the Coast Guard's risk assessment approach aligns with DHS risk assessment criteria, (2) the extent to which the Coast Guard has used MSRAM to inform maritime security risk decisions, and (3) how the Coast Guard has measured the impact of its maritime security programs on risk in U.S. ports and waterways. GAO analyzed MSRAM's risk assessment methodology and interviewed Coast Guard officials about risk assessment and MSRAM's use across the agency. MSRAM generally aligns with DHS risk assessment criteria, but additional documentation on key aspects of the model could benefit users of its results. MSRAM generally meets DHS criteria for being complete, reproducible, documented, and defensible. Further, the Coast Guard has taken actions to improve the quality of MSRAM data and to make the data more complete and reproducible, including providing training and tools for staff entering data into the model. However, the Coast Guard has not documented and communicated the implications that MSRAM's key assumptions and other sources of uncertainty have for MSRAM's risk results. For example, to assess risk in MSRAM, Coast Guard analysts make judgments regarding such factors as the probability of an attack and the economic and environmental consequences of an attack. These multiple judgments are inherently subjective and constitute sources of uncertainty whose implications should be documented and communicated to decision makers. Without this documentation, decision makers and external MSRAM reviewers may not have a complete understanding of the uses and limitations of MSRAM data. In addition, greater transparency and documentation of uncertainty and assumptions in MSRAM's risk estimates could facilitate periodic peer reviews of the model--a best practice in risk management. MSRAM is the Coast Guard's primary tool for managing maritime security risk, but resource and training challenges hinder use of the tool by Coast Guard field operational units, known as sectors. At the national level, MSRAM supports Coast Guard strategic planning efforts, which is consistent with the agency's intent for the model. At the sector level, MSRAM has informed a variety of decisions, but its use has been limited by a lack of staff time, the tool's complexity, and competing mission demands, among other things. The Coast Guard has taken actions to address these challenges, but providing additional training on how MSRAM can be used at all levels of sector decision making could further the Coast Guard's risk management efforts. MSRAM is capable of informing operational, tactical, and resource allocation decisions, but the Coast Guard has generally provided MSRAM training only to a small number of sector staff, who may not have insight into all levels of sector decision making.
The Coast Guard developed an outcome measure to report its performance in reducing maritime risk but has faced challenges using this measure to inform decisions. Outcome measures describe the intended result of carrying out a program or activity. The measure is based in part on Coast Guard subject matter experts' estimates of the percentage reduction, resulting from Coast Guard actions, of the maritime security risk subject to Coast Guard influence. The Coast Guard has improved the measure to make it more valid and reliable and believes it is a useful proxy measure of performance, noting that developing outcome measures is challenging because of limited historical data on maritime terrorist attacks. However, given the uncertainties in estimating risk reduction, it is unclear whether the measure provides meaningful performance information with which to track progress over time. In addition, the Coast Guard reports the risk reduction measure as a specific estimate rather than as a range of plausible estimates, which is inconsistent with risk analysis criteria. Reporting and using outcome measures that more accurately reflect mission effectiveness can give Coast Guard leaders and Congress a better sense of progress toward goals. GAO recommends that the Coast Guard provide more thorough documentation of MSRAM's assumptions and other sources of uncertainty, make MSRAM available for peer review, implement additional MSRAM training, and report the results of its risk reduction performance measure in a manner consistent with risk analysis criteria. The Coast Guard agreed with these recommendations.
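To illustrate the reporting practice at issue, the sketch below derives a risk reduction estimate as a range rather than a single point from simulated expert judgments. The triangular distribution and its parameters are hypothetical stand-ins for experts' low, most likely, and high values, not the Coast Guard's actual data or method.

```python
# A minimal sketch of reporting a risk reduction estimate as a range rather
# than a point value. The triangular distribution parameters are hypothetical
# stand-ins for subject matter experts' low/most-likely/high judgments.

import random

random.seed(42)

def simulate_risk_reduction(n: int = 10_000) -> list[float]:
    # Each draw is one plausible value of the share of maritime security
    # risk reduced by agency actions (low=0.15, high=0.45, mode=0.30).
    return [random.triangular(0.15, 0.45, 0.30) for _ in range(n)]

draws = sorted(simulate_risk_reduction())
low, point, high = draws[500], draws[5_000], draws[9_500]
print(f"Estimated risk reduction: {point:.0%} "
      f"(90% plausible range: {low:.0%} to {high:.0%})")
```

Reporting the 5th-to-95th percentile band alongside the median, as here, preserves the point estimate while conveying how much the underlying judgments could move it.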
While high-speed passenger rail has been in operation in Europe and Asia for several decades, it is in its relative infancy in the United States. The Passenger Rail Investment and Improvement Act of 2008 (PRIIA) called for development of high-speed rail corridors in the United States and led to establishment of the HSIPR program. FRA administers the HSIPR program as a discretionary grant program to states and others. The program was appropriated $8 billion from the American Recovery and Reinvestment Act (Recovery Act) in 2009 and an additional $2.5 billion from the fiscal year 2010 DOT Appropriations Act. According to FRA, as of October 2012, about $9.9 billion had been obligated for 150 projects. The California high-speed rail project is the largest recipient of HSIPR funds, with approximately $3.5 billion (about 35 percent of program funds obligated). We have previously reported on high-speed rail and the HSIPR program. For example, in March 2009 we reported on the challenges associated with developing and financing high-speed rail projects. These included securing the up-front investments for such projects and sustaining public and political support and stakeholder consensus. We concluded that whether any high-speed rail proposals are eventually built hinges on addressing the funding, public support, and other challenges facing these projects. In June 2010, we reported that states would be the primary recipients of Recovery Act funds for high-speed rail, but that many states did not have rail plans that would, among other things, establish strategies and priorities for rail investments in a particular state. California's high-speed rail project is poised to be the first rail line in the United States designed to operate at speeds greater than 150 miles per hour. The planned 520-mile line will operate between San Francisco and Los Angeles at speeds up to 220 miles per hour (see fig. 1). At an estimated cost of $68.4 billion, it is also one of the largest transportation infrastructure projects in the nation's history. The project's planning began in 1996, when the Authority was created, but accelerated after initial funding was approved in 2008 with the passage of Proposition 1A, which authorized $9.95 billion in state bond funding for construction of the high-speed rail system and improvements to connections (see fig. 2). Construction is expected to occur in phases, beginning with the 130-mile first construction segment from just north of Fresno, California, to just north of Bakersfield, California. In July 2012, the California legislature appropriated $4.7 billion in state bond funds. The process of acquiring property for the right-of-way and construction is expected to begin soon. Requests for proposals for construction contractors and for right-of-way acquisition services were issued in March and September 2012, respectively. According to the Authority, a design-build contract for the first construction segment is expected to be awarded in June 2013, with construction potentially commencing no earlier than summer 2013. The project underwent substantial revision earlier this year, after the November 2011 draft business plan drew criticism for its high cost, among other things. Most significantly, the Authority scaled back its plans to build dedicated high-speed rail lines over the system's entire length.
Instead, the April 2012 revised business plan adopted a "blended" system in which high-speed rail service would be provided over a mix of dedicated high-speed lines and existing and upgraded local rail infrastructure (primarily at the bookends of the system, on the San Francisco peninsula and in the Los Angeles basin). This change was made, in part, to respond to criticism that the $98.5 billion cost of the full-build system contained in the November 2011 draft business plan was too high. The revised cost in the April 2012 plan was $68.4 billion. In addition, the ridership and revenue forecasts in the April 2012 revised business plan reflected a wider uncertainty range than the forecasts presented in the November 2011 plan. For example, in the November 2011 draft business plan, the Authority estimated 2030 ridership to be between 14.4 million and 21.3 million passengers and annual revenues of the high-speed rail system to be between $1.05 billion and $1.56 billion. These ranges increased in the April 2012 revised business plan, to between 16.1 million and 26.8 million passengers and annual revenues of between $1.06 billion and $1.81 billion. The Authority attributed the wider uncertainty range to additional conservatism in the low ridership estimate, and it attributed the changes in forecasted ridership to several factors, such as the adoption of the blended approach, which, among other things, allows one-seat service from San Francisco to Los Angeles to begin sooner than under the original full-build approach. Over time, however, ridership forecasts under the blended approach are lower than those under the original full-build approach. To date, the state of California and the federal government have committed funding to the project. In July 2012, the California state legislature appropriated approximately $4.7 billion in Proposition 1A bond funds, including $2.6 billion for construction of the high-speed rail project and $1.1 billion for upgrades in the bookends. The federal government has also obligated $3.3 billion in HSIPR grant funds. Most of the HSIPR money awarded to the project was appropriated under the Recovery Act and, in accordance with governing grant agreements, must be expended by September 30, 2017. In addition, approximately $945 million in fiscal year 2010 funding was awarded to the project by FRA and is to remain available until expended. The Authority estimates that the high-speed rail project in California will cost $68.4 billion to construct and hundreds of millions of dollars to operate and maintain annually. Since the project is relying on significant investments of state and federal funds (and, ultimately, private funds), it is vital that the Authority, FRA, and Congress be able to rely on these estimates for the project's funding and oversight (see table 1 below for a summary of the sources of funding). GAO's Cost Guide identifies best practices that help ensure that a cost estimate is comprehensive, accurate, well documented, and credible. A comprehensive cost estimate ensures that costs are neither omitted nor double counted. An accurate cost estimate is unbiased, neither overly conservative nor overly optimistic, and based on an assessment of most likely costs. A well-documented estimate is thoroughly documented, including source data and their significance, clearly detailed calculations and results, and explanations for choosing a particular method or reference. A credible estimate discusses any limitations of the analysis arising from uncertainty or biases surrounding data or assumptions.
These four characteristics help minimize the risk of cost overruns, missed deadlines, and unmet performance targets. Our past work on high-speed rail projects around the world has shown that such projects' costs tend to be underestimated. As such, it is important to acknowledge the potential for this bias and ensure that cost estimates are as reliable as possible. Based on our ongoing review, we have found that the Authority's cost estimates exhibit both strengths and weaknesses. The quality of any cost estimate can always be improved as more information becomes available, and, based in part on evaluations from the Peer Review Group, the Authority is taking some steps to improve the cost estimates that will be provided in the 2014 business plan. The Authority followed best practices in the Cost Guide to ensure comprehensiveness, but its estimates also exhibited some shortcomings. The cost estimates include the major components of the project's construction and operating costs. The construction cost estimate is based on detailed construction unit costs that are, in certain cases, more detailed than the cost categories required by FRA in its grant applications. However, the operating costs were not as detailed as the capital costs: over half of the operating costs are captured in a single category called Train Operations and Maintenance. In addition, the Authority did not clearly describe certain assumptions underlying both cost estimates. For example, Authority officials told us that the California project will rely on proven high-speed rail technology from systems in other countries, but it is not clear whether the cost estimates were adjusted to account for any challenges in applying that technology in California. The Authority took a number of steps to develop accurate cost estimates consistent with best practices in the Cost Guide. The estimates have been updated to reflect the new "blended" system, which will rely, in part, on existing rail infrastructure; they are based on a dataset of costs to construct comparable infrastructure projects; they contain few, if any, mathematical errors; and they have been adjusted for inflation. For example, the Authority's contractor used a construction industry database of project costs supplemented with actual bid-price data from similar infrastructure projects. However, the cost estimates used in the April 2012 revised business plan do not reflect final design and route alignments, and the estimates will change as the project moves into construction and operation. The Authority did not produce a risk and uncertainty analysis of its cost estimates that would help anticipate the impact of these changes. The Cost Guide recommends conducting a risk and uncertainty analysis to determine the primary risk factors and assess the likelihood that they may occur, helping to ensure that the estimate is neither overly conservative nor overly optimistic. The Authority followed some, but not all, best practices in the Cost Guide to ensure that the cost estimate is well documented. In many cases, the methodologies used to derive the construction cost estimates were well documented, but in other cases the documentation was more limited. For example, while track infrastructure costs were thoroughly documented, costs for other elements, such as stations and trains, were supported with little detail or no documentation.
Additionally, in some cases where the methodologies were documented, we were unable to trace the estimates back to their source data and recreate the estimates using the stated methodology. For example, we were unable to identify how the operating costs from analogous high-speed rail projects were adjusted for the California project. The Authority took some steps consistent with our Cost Guide to ensure the cost estimates' credibility, but did not follow some best practices. To make cost estimates credible, GAO's Cost Guide recommends testing such estimates with a sensitivity analysis (making changes in key cost inputs), a risk and uncertainty analysis (discussed above), and an independent cost estimate conducted by an unaffiliated party to see how outside estimates compare to the original estimates. While the Authority performed a sensitivity analysis for the first 30 miles of construction and an independent cost estimate for the first 185 miles of construction in the Central Valley, neither covered the entire Los Angeles to San Francisco project. For the operating-cost estimate, the Authority conducted a sensitivity test under various ridership scenarios; however, this test was designed to measure the ability of the system to cover operating costs with ticket revenues, not to determine the potential risk factors that may affect the operating-cost estimate itself. The Authority also did not compare its operating-cost estimate to an independent cost estimate. Finally, as noted above, the Authority did not perform a risk and uncertainty analysis, which would improve the estimates' credibility by identifying a range of potential costs and indicating the degree of confidence decision makers can place in the cost estimates. The Authority is taking steps to improve its cost estimates. To make its operating-cost estimate more comprehensive and better documented, the Authority has contracted with the International Union of Railways to evaluate the existing methodology and data and help refine its estimates. In addition, to improve the construction cost estimates, the Authority will have the opportunity to validate and enhance, if necessary, the accuracy of its cost estimates once actual construction package contracts are awarded for the initial construction in the Central Valley in 2013. The bids for the first 30-mile construction package are due in January 2013 and will provide a check on how well the Authority has estimated the costs for this work, as well as more information on potential risks that cost estimates of future segments may encounter.
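As an illustration of the risk and uncertainty analysis the Cost Guide recommends, the sketch below propagates hypothetical low/most-likely/high ranges for a few cost components into a range for the total estimate. The components, dollar values, and distribution choice are illustrative assumptions, not the Authority's figures or methodology.

```python
# A minimal sketch of a cost risk and uncertainty analysis: draw each
# component from a triangular range and report percentiles of the total.
# All components and values are hypothetical.

import random

random.seed(0)

# Hypothetical (low, most likely, high) costs in billions of dollars.
COMPONENTS = {
    "track and structures": (25.0, 30.0, 40.0),
    "stations": (3.0, 4.0, 6.5),
    "trains": (5.0, 6.0, 8.0),
    "right-of-way": (3.6, 3.8, 3.9),
}

def total_cost_draw() -> float:
    """One simulated total cost, drawing each component independently."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in COMPONENTS.values())

draws = sorted(total_cost_draw() for _ in range(10_000))
p10, p50, p90 = draws[1_000], draws[5_000], draws[9_000]
print(f"Total cost: {p50:.1f}B (80% range: {p10:.1f}B to {p90:.1f}B)")
```

A real analysis would also model correlations among components (for example, labor rates driving several categories at once), which independent draws like these understate.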
In addition to challenges in developing reliable cost estimates, the California high-speed rail project also faces other challenges. These include obtaining project funding beyond the first construction segment, continuing to refine ridership and revenue estimates beyond the current forecasts, and addressing the potential increased risks to project schedules from legal challenges associated with environmental reviews and right-of-way acquisitions. One of the biggest challenges facing California's high-speed rail project is securing funding beyond the first construction segment. While the Authority has secured $11.5 billion from federal and state sources for project construction, almost $57 billion in funding remains unsecured. A summary of funding secured to date can be found in table 1. As with other large transportation infrastructure projects, including high-speed rail projects in other countries, the Authority is relying primarily on public financial support, with $55 billion, or 81 percent of the total construction cost, expected to come from state and federal sources. A summary of the Authority's funding plan can be found in table 2. Of the total $55 billion in state and federal funding, about $38.7 billion is uncommitted federal funds, an average of more than $2.5 billion per year over the next 15 years. Most of the remaining funding is to come from as-yet-unidentified private investment once the system is operational, a model that has been used in other countries, such as for the High Speed One line in the United Kingdom. As a result of the funding challenge, the Authority is taking a phased approach, building segments as funding becomes available. However, given that the HSIPR grant program has not received funding for the last 2 fiscal years and that future funding proposals will likely be met with continued concern about federal spending, the largest block of expected funds is uncertain. The Authority has identified revenues from California's newly implemented emissions cap-and-trade program as a potential backstop in the event other funding is not made available, but according to state officials, the amounts and the authority to use these funds are not yet established. Developing reliable ridership and revenue forecasts is difficult in almost every circumstance, for a variety of reasons. Chief among these are (1) limited data and information, (2) the risk of inaccurate assumptions, and (3) variation in accepted forecasting methods. Although forecasting the future is inherently risky, reliable ridership and revenue forecasts are still critical components in estimating the economic viability of a high-speed rail project and in determining what project modifications, if any, may be needed. For example, the financial viability of California's high-speed rail project depends on generating sufficient ridership to cover its operating expenses. Ridership and revenue forecasts enable policymakers and private entities to make informed decisions on policies related to the proposed high-speed rail system and to determine the risks associated with a high-speed rail project when making investment decisions. Addressing these challenges will be important for the Authority as it works toward updating its ridership and revenue forecasts for the 2014 business plan. Limited data and information, especially early in a project before specific service characteristics are known, make developing reliable ridership and revenue forecasts difficult. To the extent early-stage data and information are available, they need to be updated to reflect changes in the economy, project scope, and consumer preferences. For example, in developing the ridership and revenue forecasts for the April 2012 revised business plan, the Authority updated several assumptions and inputs used to develop the initial forecasts that were presented in the November 2011 draft business plan. Authority officials said this update was done, in part, to build additional conservatism into the ridership forecasts, particularly in the low scenario, and to avoid optimism bias.
Among other updates, the Authority revised model assumptions to reflect changes in current and anticipated future conditions for airfares and airline service frequencies, decreases in gasoline price forecasts, and anticipated declines in the growth rates for population, number of households, and employment. Peer review groups, such as the Ridership and Revenue Peer Review Panel (Panel) established by the Authority, and academic reviewers have examined the Authority's ridership and revenue forecast methodology. These reviewers recommended additional improvements to the model going forward. For example, in developing the forecasts used for the April 2012 revised business plan, the Authority relied on data from a 2005 survey that was conducted at airports and rail stations and by telephone from August to November 2005. In a May 2012 report to the Authority, the Panel pointed out limitations with this data source and recommended that new data be collected to supplement the existing data for model enhancement purposes. Authority officials stated that they are currently developing a new revealed-preference and stated-preference survey to update the 2005 survey data and that they plan to begin collecting the new survey data in December 2012. Portions of the new 2012 data will be used to re-estimate and recalibrate the ridership model to develop updated ridership and revenue forecasts for the 2014 business plan. The Authority also plans to develop a new version of the model that will make full use of the new 2012 survey data; however, the new model is not expected to be ready in time for the 2014 business plan. It will be important to complete these future model improvements as the project is developed. The risk of inaccurate forecasts is a recurring challenge for the project's sponsors. Research on ridership and revenue forecasts for rail infrastructure projects has shown that ridership forecasts are often overestimated and that actual ridership is likely to be lower. For example, a recent study examined a sample of 62 rail projects and found that for 53 of them the demand forecasts were overestimated and actual demand was lower than forecasted. According to the Authority, the ridership and revenue forecasts in its April 2012 revised business plan are both wider in range and lower than earlier forecasts, to help mitigate the risk of optimism bias. In addition, the Authority performed a sensitivity analysis of an extreme downside scenario to test the ridership and revenue implications of a series of coinciding downside events, such as increased average rail-travel time from Merced to the San Fernando Valley and lower auto-operating costs. Based on this analysis, the Authority determined that an extreme downside scenario would be expected to reduce ridership and revenue forecasts by 27 percent and 28 percent, respectively, below the low forecasts in the April 2012 revised business plan. According to the Authority, these forecasts would still be sufficient to cover the Authority's estimated operating costs and would not require a public operating subsidy. Authority officials stated that they intend to conduct additional sensitivity analyses going forward.
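One of the key forecasting components discussed below is a mode-choice model, which translates travel times and costs into predicted shares for competing modes. The sketch below assumes a standard multinomial logit form; the coefficients, travel times, and costs are hypothetical illustrations, not the Authority's calibrated values.

```python
# A minimal sketch of a multinomial logit mode-choice model. All attribute
# values and coefficients are hypothetical; real models use many more
# variables and are estimated from survey data.

import math

# Hypothetical San Francisco-Los Angeles trip attributes per mode:
# (travel time in hours, out-of-pocket cost in dollars)
MODES = {
    "high-speed rail": (2.7, 86.0),
    "air": (3.5, 160.0),   # includes airport access and wait time
    "auto": (6.0, 95.0),
}

TIME_COEF = -0.6   # disutility per hour (hypothetical)
COST_COEF = -0.01  # disutility per dollar (hypothetical)

def mode_shares() -> dict[str, float]:
    """Logit: share = exp(utility) / sum of exp(utility) over all modes."""
    utils = {m: TIME_COEF * t + COST_COEF * c for m, (t, c) in MODES.items()}
    denom = sum(math.exp(u) for u in utils.values())
    return {m: math.exp(u) / denom for m, u in utils.items()}

for mode, share in mode_shares().items():
    print(f"{mode}: {share:.1%}")
```

In practice, coefficients of this kind are estimated from revealed- and stated-preference survey data, such as the 2005 survey and the planned 2012 survey discussed above.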
Finally, accepted forecasting methods vary, and FRA has not established guidance on acceptable approaches to developing reliable ridership and revenue forecasts. Industry standards vary, and FRA has established only minimal requirements and guidance related to the forecast information HSIPR grant applicants must provide. As we have previously reported, different ridership-forecasting methods may yield diverse and therefore uncertain results. As such, we have recommended that the Secretary of Transportation develop guidance and methods for ensuring the reliability of ridership forecasts. Similarly, the DOT OIG has recommended that FRA develop specific and detailed guidance for the preparation of HSIPR ridership and revenue forecasts. Various agencies and transportation experts have identified certain components of the ridership- and revenue-forecasting process that affect results more than others and that are necessary for developing reasonable forecasts. Among others, key components include processes for developing trip tables, developing a mode-choice model, conducting sensitivity analyses, and conducting validation testing. The Authority included each of these key components in developing the ridership and revenue forecasts for the April 2012 revised business plan. While addressing these components does not ensure that ridership and revenue forecasts are accurate, it does provide greater assurance that the Authority's processes for developing the forecasts are reasonable. In our ongoing review of the California high-speed rail project, we are evaluating the extent to which the Authority followed best practices when completing each of these tasks. We will present the results of our assessment of the Authority's process in our 2013 report on this subject. Among the other challenges facing the project, and ones that may increase the risk of project delays, are potential legal challenges associated with environmental laws. Under the National Environmental Policy Act (NEPA) and the California Environmental Quality Act (CEQA), government agencies funding a project with significant environmental effects are required to prepare environmental impact statements or reports (EIS/EIR) that describe these impacts. Under CEQA, an EIR must also include mitigation measures to minimize significant effects on the environment. The Authority is taking a phased approach to complying with NEPA and CEQA, developing EIS/EIRs both for the project as a whole and for particular portions of the project. To date, program-level EIS/EIRs have been prepared for the project as a whole (August 2005) and for the Bay Area to Central Valley section (initially certified by the Authority in July 2008, with a revised final EIS/EIR issued in April 2012). A project-level EIS/EIR has been prepared for the Merced-to-Fresno portion of the project (issued April 2012), and a draft EIS/EIR has been prepared for the Fresno-to-Bakersfield portion (initial draft issued in August 2011, with a revised version issued in July 2012). Environmental concerns have been the subject of legal challenges. For example, a lawsuit was filed in October 2010 against the Authority challenging the decision to approve the Bay Area to Central Valley segment on the ground that the EIR was inadequate. Several such lawsuits have been filed, and these cases are still pending. The project also faces the potential challenge of acquiring rights-of-way. Timely right-of-way acquisition will be critical, since some properties will be in priority construction zones. Property to be acquired will include homes, businesses, and farmland.
Not having the needed right-of-way could cause delays as well as add to project costs. Acquisition of right-of-way will begin with the first construction segment, which has been subdivided into four design-build construction packages. There are approximately 1,100 parcels to be acquired for this segment, all of which are in California's Central Valley. In September 2012, the Authority issued a request for proposals to obtain the services of one or more contractors to provide right-of-way and real property services. The Authority estimated in its April 2012 revised business plan that the purchase or lease of real estate for the phase I blended system will cost between $3.6 billion and $3.9 billion (in 2011 dollars). According to the Authority, right-of-way acquisition will be phased, based on construction priorities, with delivery of all required parcels in the Central Valley no later than spring 2016. Acquisition is anticipated to begin in February 2013. The timely acquisition of rights-of-way may be affected by at-risk properties, that is, properties that the Authority considers at risk for timely delivery to design-build contractors for construction. There could be a significant number of at-risk properties. For example, Authority officials told us there are about 400 parcels in the first construction package, about 200 of which are in priority construction zones. Of these, about 100 parcels (50 percent) are considered potentially at risk for timely delivery. Since right-of-way acquisition has not yet begun, the extent to which at-risk properties will ultimately affect project schedules or costs is not known. However, the risk may be elevated given the initially high percentage of at-risk parcels. Chairman Mica, Ranking Member Rahall, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Committee may have at this time. For future questions about this statement, please contact Susan Fleming, Director, Physical Infrastructure, at (202) 512-2834 or flemings@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Paul Aussendorf (Assistant Director), Russell Burnett, Delwen Jones, Richard Jorgenson, Jason Lee, James Manzo, Maria Mercado, Josh Ormond, Paul Revesz, Max Sawicky, Maria Wallace, and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The California high-speed rail project is the single largest recipient of federal funding from the Federal Railroad Administration's (FRA) High Speed Intercity Passenger Rail (HSIPR) grant program. The 520-mile project (see map) would link San Francisco to Los Angeles at an estimated cost of $68.4 billion. Thus far, FRA has awarded $3.5 billion to the California project. The Authority will have to continue to rely on significant public-sector funding, in addition to private funding, through the project's anticipated completion date in 2028. This testimony is based primarily on GAO's ongoing review of the California high-speed rail project and discusses GAO's preliminary assessment of (1) the reliability of the project's cost estimates developed by the Authority and (2) key challenges facing the project. As part of this review, GAO obtained documents from and conducted interviews with Authority officials, its contractors, and other state officials. GAO analyzed the extent to which project cost estimates adhered to best practices contained in GAO's Cost Estimating and Assessment Guide (Cost Guide), which identifies industry best practices to ensure cost estimates are comprehensive, accurate, well documented, and credible--the four principal characteristics of a reliable cost estimate. GAO also reviewed project finance plans as outlined in the Authority's April 2012 revised business plan. To identify key challenges, GAO reviewed pertinent legislation, federal guidelines, and best practices related to ridership and revenue forecasting, and interviewed, among others, federal, state, and local officials associated with the project. Based on an initial evaluation of the California High Speed Rail Authority's (Authority) cost estimates, GAO found that they exhibit certain strengths and weaknesses when compared with best practices in GAO's Cost Guide. Adherence to the Cost Guide reduces the risk of cost overruns and missed deadlines. GAO's preliminary evaluation indicates that the cost estimates are comprehensive in that they include major components of construction and operating costs. However, they are not based on a complete set of assumptions, such as how the Authority expects to adapt existing high-speed rail technology to the project in California. The cost estimates are accurate in that they are based on the most recent project scope, include an inflation adjustment, and contain few mathematical errors. While the cost estimates' methodologies are generally documented, in some cases GAO was unable to trace the final cost estimate back to its source documentation and could not verify how certain cost components, such as stations and trains, were calculated. Finally, the Authority evaluated the credibility of its estimates by performing both a sensitivity analysis (assessing changes in key cost inputs) and an independent cost estimate, but these tests did not encompass the entire cost estimate for the project. For example, the sensitivity analysis of the construction cost estimate was limited to 30 miles of the first construction segment. The Authority also did not conduct a risk and uncertainty analysis to determine the likelihood that the estimates would be met. The Authority is currently taking some steps to improve its cost estimates. The California high-speed rail project faces many challenges. Chief among these is obtaining project funding beyond the first 130-mile construction segment.
While the Authority has secured $11.5 billion from federal and state sources, it needs almost $57 billion more. Moreover, the HSIPR grant program has not received federal funding for the last 2 fiscal years, and future federal funding is uncertain. The Authority is also challenged to improve its ridership and revenue forecasts. Factors such as limited data and information make developing such forecasts difficult. Finally, the environmental review process and the acquisition of necessary rights-of-way for construction could increase the risk of the project falling behind schedule and incurring increased costs.
Task Force Hawk deployed to Albania in April 1999 as part of Operation Allied Force. Originally, the task force was to deploy to the Former Yugoslav Republic of Macedonia. However, the government of Macedonia would not allow combat operations to be conducted from its territory. The United States subsequently obtained approval from the government of Albania to use its territory to base Task Force Hawk and conduct combat operations. (See fig. 1.) Albania did not have any previously established U.S. military base camps, as Macedonia did, and was not viewed as having a stable security environment. According to Army officials, the size of the task force had to be increased to provide more engineering capability to build operating facilities and to provide force protection. The task force was a unique Army organization. It comprised 1 attack helicopter battalion with 24 Apache attack helicopters; 1 corps aviation brigade with 31 support helicopters; 1 Multiple Launch Rocket System battalion with 27 launchers; a ground maneuver element for force protection; and other headquarters and support forces. (See fig. 2 for a picture of an Apache helicopter.) It ultimately totaled about 5,100 personnel. Its planned mission was to conduct deep attacks against Serbian military and militia forces operating in Kosovo using Apache helicopters and Multiple Launch Rocket Systems. The task force deployed to Albania and trained for the mission but was not ordered into combat. Ultimately, its focus changed to using its radar systems to locate enemy forces for targeting by other aircraft. Additionally, the task force assumed responsibility for the protection of all U.S. forces operating out of Tirana Airfield, its staging base, which included Air Force personnel providing humanitarian assistance to Kosovo refugees. Concerned about the combat readiness of Apache helicopters and their experience in Task Force Hawk, the House Armed Services Committee's Subcommittee on Readiness held a hearing on July 1, 1999. That hearing focused on pilot shortages, the lack of pilot proficiency, and unit combat training. In addition, it discussed equipment that was not fully fielded at the time of the operation, such as aircraft survivability equipment and communication equipment. Our work was designed to address other matters associated with Task Force Hawk and how the services plan to address them in future operations. Doctrine consists of the fundamental principles by which the military services guide their actions in support of national objectives. It provides guidance for planning and conducting military operations. In the Army, doctrine is communicated in a variety of ways, including manuals, handbooks, and training. Joint doctrine, which applies to the coordinated use of two or more of the military services, is similarly communicated. Doctrine provides commanders with a framework for conducting operations while allowing flexibility to adapt operations to specific circumstances. According to Army and Joint Staff doctrine officials, the concept of operation planned for Task Force Hawk, the use of Apache helicopters for a deep attack mission as part of an air campaign, fell within established Army and joint doctrine. Typically, attack helicopters are used in conjunction with Army ground forces to engage massed formations of enemy armor. They were used in this manner in the Gulf War.
In the Kosovo air campaign, Task Force Hawk's planned deep attacks differed in that they were intended to be part of an air campaign, not an Army-led combined-arms land campaign. Additionally, the aircraft's planned attacks would principally have engaged widely dispersed and camouflaged enemy ground forces instead of massed formations. According to Army doctrine officials, doctrine is broad and flexible enough to allow a combatant commander to employ his assets in the manner that was planned for the task force. However, Army officials agree that this planned usage differed from the employment typically envisaged in Army doctrine. Furthermore, Army officials said that the Task Force Hawk experience was not something the Army routinely trained for and was considered to be an atypical operation. Although Task Force Hawk's mission and operations were consistent with both Army and joint doctrine in the broadest sense, changes to doctrine at both the Army and joint levels are being made that will address some of the operation's lessons learned. A total of 19 Army doctrine publications will be developed or modified to better address the experience gained from Task Force Hawk. Examples of new or revised doctrine include a new handbook on deep operations; an update to the Army's keystone warfighting doctrinal publication on conducting campaigns, major operations, battles, engagements, and operations other than war; and an update to the Army aviation brigade field manual that expands the role of aviation brigades and task forces, with a heavier emphasis on tactics, techniques, and procedures for task force, combined-arms, and joint operations. Modifications to Army doctrine are being made as part of the established ongoing process for reviewing and revising doctrinal publications. A total of five joint doctrine publications will be developed or modified based at least in part on the Task Force Hawk experience. A new joint publication is being developed to cover the role of the Joint Force Land Component Commander, detailing that commander's role and responsibilities in both "supported" and "supporting" roles. (See our discussion of this role in the Joint Operations section of this report.) Updates to the four remaining joint publications, including those on close air support and fire support, will be made during the normal 21-month joint doctrine publication and review cycle. The Army has a large effort underway to collect and resolve lessons learned pertaining to Task Force Hawk. A total of 146 Task Force Hawk lessons learned were collected from three different sources. U.S. Army Europe developed 64 lessons and forwarded them to the Army's Deputy Chief of Staff for Operations and Plans for remedial action. The Army's Training and Doctrine Command developed a listing of 76 lessons and has assigned them to its proponent schools for remedial action. Hundreds of joint action items on Operation Allied Force were collected at the European Command and forwarded to the Joint Warfighting Center. Of these items, six were specifically associated with Task Force Hawk and were sent to the Joint Staff for remedial action. We analyzed the 146 Task Force Hawk lessons and determined that a number of them, submitted by different organizations, were the same. Of the 76 lessons raised by the Training and Doctrine Command, 38 were similar to those submitted by U.S. Army Europe. Of the six European Command lessons, we determined that one was similar to an issue submitted by U.S. Army Europe.
Deleting the 39 duplicates resulted in a total of 107 unique lessons submitted for remedial action. We categorized the 107 lessons into five broad themes that, in our judgment, characterize the type of remedial action needed. The five themes are as follows:
• The need for revisions to Army and joint doctrine, as discussed earlier. We identified 19 such lessons. (See appendix I.)
• Improvements in command, control, communications, computers, and intelligence (C4I) equipment or procedures. We identified 20 such lessons. (See appendix II.)
• Areas needing additional training. We identified 30 such lessons. (See appendix III.)
• The need for additional capability in areas other than C4I. We identified 24 such lessons. (See appendix IV.)
• Potential force structure changes. We identified 14 such lessons. (See appendix V.)
We determined the status of each of the 107 lessons learned as of January 2001. We did not evaluate the merit of the actions proposed or completed. We placed each lesson into one of two status categories:
• Recommended for closure: We placed 47 items in this category. However, there are varying degrees of closure within this category. First, there are items for which actions have been completed, such as procuring night vision goggles for Apache pilots; according to Army officials, the goggles have been procured and fielded. Twenty-three of the 47 lessons fell into this subgroup. Second, there are lessons that have had actions taken but will require a long lead time for implementation, such as the procurement of survival radios and a deployable flight mission rehearsal system for aviation units. For example, while approval for the survival radios has been obtained, fielding will not begin until fiscal year 2003. In addition, the Army has recommended an interim fix for a mission rehearsal system, but it is costly; the far-term solution is the joint mission planning system, which will not be fielded until 2007. Fifteen of the 47 lessons fell into this subgroup. Finally, there are items that Army officials are recommending for closure because, upon further review, they determined the lessons should not have been submitted, or because events have overtaken the initial lesson and it is no longer applicable. The remaining nine lessons fell into this subgroup. Lessons learned that were recommended for closure are indicated as such in appendixes I-V.
• In progress: We placed 60 lessons in this category. These items are still considered open issues by the Army officials tracking Task Force Hawk lessons learned, and they have been assigned to responsible bodies for resolution. Seventeen of the 60 in-progress lessons reside with Headquarters, Department of the Army; 10 with the Joint Staff or Joint Forces Command; 27 with the Army's Training and Doctrine Command; and 6 with U.S. Army Europe. Many issues remain open because they require efforts that are being incorporated into much larger overall Army projects, such as transformation or Flight School XXI, which will require a much longer time frame to implement. Other lessons learned remain open because efforts to address them are just beginning. Lessons learned for which solutions are in progress are indicated as such in appendixes I-V.
Figure 3 shows the 107 lessons learned by category and by status grouping.
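As a simple bookkeeping illustration of the tallies above, the sketch below reproduces the report's counts and checks that the theme and status groupings each partition the same 107 unique lessons. The data structures are hypothetical, not the Army's actual tracking system.

```python
# A bookkeeping sketch of the lessons-learned tally described above; the
# counts come from the report, but the structures are hypothetical.

SUBMITTED = {"U.S. Army Europe": 64, "TRADOC": 76, "European Command": 6}
DUPLICATES = 38 + 1  # TRADOC and European Command items matching USAREUR's

unique_lessons = sum(SUBMITTED.values()) - DUPLICATES
assert unique_lessons == 107

by_theme = {"doctrine": 19, "C4I": 20, "training": 30,
            "other capability": 24, "force structure": 14}
by_status = {"recommended for closure": 47, "in progress": 60}

# Each grouping should partition the same set of unique lessons.
assert sum(by_theme.values()) == sum(by_status.values()) == unique_lessons
print(f"{unique_lessons} unique lessons tracked across {len(by_theme)} themes")
```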
The Commanding General of U.S. Army Europe has emphasized the need to capitalize on the lessons learned from Kosovo and to focus on partnership with the Air Force. He is personally involved with the lessons learned process and considers the process and its follow-up a personal commitment to U.S. Army Europe soldiers. During our visit to U.S. Air Forces in Europe, we were told that its commanding general has also placed a high priority on working together with the Army to address the lessons learned in conducting joint operations. While both commands have taken steps to resolve the issues, some of the remedial actions will require years to complete. In addition, over time the services assign new commanders and reassign current commanders. We reported in 1999 that, while the Army had established a program to validate that remedial actions on past lessons learned were implemented, the program had not been very successful. Two key themes emerged from the lessons learned collected. One was the need for the Army and the Air Force to work together better in joint operations. The other was the interoperability of the two services' command, control, communications, computers, and intelligence equipment. The Task Force Hawk experience highlighted difficulties in several areas pertaining to how the Army operates in a joint environment. One area was determining the most appropriate structure for integrating Army elements into a joint task force. Doctrine typically calls for a Joint Force Land Component Commander or an Army Force Commander to be part of a joint task force, with responsibility for overseeing ground elements during an operation. The command structure for the U.S. component of Operation Allied Force did not include a Joint Force Land Component Commander. In retrospect, both Army officials and the Joint Task Force Commander believe that this may have initially made it more difficult to integrate the Army into the existing joint task force structure. The lack of an Army Force Commander and his associated staff created difficulties in campaign planning because the traditional links with other joint task force elements were initially missing. These links would normally function as a liaison between service elements and coordinate planning efforts. Over time, an ad hoc structure had to be developed and links established. The Army has conducted a study to develop a higher headquarters design that would enable it to provide a senior Army commander for a future joint task force involving a relatively small Army force. This senior commander would be responsible for providing command, control, communications, computers, and intelligence capability to the joint task force. The study itself is complete, but testing of the design in an exercise is not scheduled until February 2002. A second area the Army had difficulty with during its mission training was including its aircraft in the overall planning document that controls air attack assets. The plan, called an air tasking order, assigns daily targets or missions to subordinate units or forces. Air Force officials in Europe told us that they had difficulty integrating the Army's attack helicopters into the air tasking order. According to U.S. Army Europe officials, there were no formalized procedures for including Army aviation in this planning document, and they had little or no training in how to perform this function. The Army and the Air Force in Europe are developing joint tactics, techniques, and procedures for integrating Army assets into the air tasking order and are beginning to include this process in their joint exercises.
A third area where the Army and the Air Force had difficulty was targeting. As previously discussed, once the decision was made that Task Force Hawk would not conduct deep attacks, its resources were used to locate targets for the Air Force. According to U.S. Army Europe documentation, Army analysts in Europe had little or no training in joint targeting and in analyzing targets in a limited air campaign. As a result, in the early days of the Army's targeting role, mobile targets nominated by the Army did not meet the Operation Allied Force criteria being used by the Air Force for verifying that targets were legitimate and, therefore, were not attacked. As the operation progressed, the two services learned each other's procedures and criteria and worked together better. The Army and the Air Force in Europe are now formalizing the process used and are developing tactics, techniques, and procedures for attacking such targets and sharing intelligence. They are including these new processes in their joint exercises. The second major theme that emerged from the lessons learned was the interoperability of command, control, communications, computers, and intelligence equipment. The Army is transitioning from a variety of battlefield command systems that it has used for years to a digitized suite of systems called the Army Battle Command System. During Operation Allied Force, Army elements used a variety of older and newer battlefield command systems that were not always interoperable with each other. The mission planning and targeting system used by the Apache unit in Albania during Task Force Hawk was one of the older systems and was not compatible with the system being used by the Army team that provided liaison with the Air Force at the air operations center. The Army liaison team used the new suite of Army digitized systems that will ultimately be provided to all Army combat forces. However, at the time of Task Force Hawk, the suite of systems was not fully fielded, and not all the deployed personnel were trained on the new systems. Consequently, the Apache unit in Albania used the older systems, making it difficult to communicate with the liaison team and requiring the manual, as opposed to electronic, transfer of data. The older mission planning and targeting system used by the Apache unit in Albania was also not compatible with the Air Force system. The Air Force has a single digital battlefield command system. The Apache unit in Albania, using its older equipment, could not readily share data directly with the Air Force. In addition, the intelligence system being used by the Army at the unit level and at the liaison level could not directly exchange information with the Air Force system. As was the case within the Army, personnel had to transfer data manually. This was time-consuming and introduced the potential for transcription errors. The Army is continuing to field the new suite of systems. We have previously reported that the schedules for fielding these systems have slipped, and the Army in Europe is not scheduled to receive the complete suite of new systems before 2005. When it is eventually fielded, this new suite of systems is expected to reduce, if not eliminate, the incompatibility between the Army's and the Air Force's systems. The commanding generals of the U.S. Army and U.S. Air Forces in Europe have made resolving the lessons learned identified from Task Force Hawk a high priority. They have already made progress in taking remedial action on a number of the lessons.
However, many of the lessons will require a significant amount of time, sometimes years, to implement. In addition, senior military leadership changes over time, and we have found in the past that the Army has not been very successful in ensuring that remedial actions are brought to closure. To ensure that the Army maintains the momentum to resolve Task Force Hawk lessons learned, the Congress may want to consider requiring the Army to report on remedial actions taken to implement these lessons. This could be in the form of periodic progress reports or another appropriate reporting approach that would meet congressional oversight needs. To determine how Task Force Hawk's concept of operation compared with existing Army and joint doctrine, we reviewed Army and Joint Staff doctrine publications and were briefed on existing deep attack doctrine at the Army's Training and Doctrine Command and the Army's Aviation School. We then compared this information with Task Force Hawk's concept of operation. We discussed which doctrine publications would be revised based on the Task Force Hawk experience with officials at the Army's Training and Doctrine Command and the Joint Warfighting Center. To determine the number of Task Force Hawk lessons learned, we collected and reviewed Army lessons learned from the Army's Deputy Chief of Staff for Operations and Plans, the Army's Training and Doctrine Command, and the Center for Army Lessons Learned. We collected and reviewed joint lessons learned at the Office of the Joint Chiefs of Staff and the Joint Warfighting Center. To obtain an understanding of the lessons and their status, we discussed them with individuals directly involved with the Task Force Hawk operation or directly involved in addressing the individual lessons. We discussed the lessons with individuals at the Army's Aviation School, the Army's Artillery School, U.S. Army Europe, U.S. Air Forces in Europe, and the U.S. European Command. To determine how well the Army and the Air Force worked together in Operation Allied Force, we collected documentation on joint operations and interoperability of equipment and interviewed personnel at the U.S. European Command, U.S. Army Europe, and U.S. Air Forces in Europe. We conducted our review from June 2000 through January 2001 in accordance with generally accepted government auditing standards. We reviewed the information in this report with Department of Defense (DOD) officials and made changes where appropriate. DOD officials agreed with the facts in this report. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Greg Dahlberg, Acting Secretary of the Army; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. If you have any questions, please call me at (757) 552-8100. Key contributors to this report were Steve Sternlieb, Laura Durland, and Frank Smith.
Training and Doctrine Command (TRADOC)
• Review Field Manual (FM) 100-17, Mobilization, Deployment, Redeployment and Demobilization, to ensure that it meets the requirements of a strategically responsive Army.
• Review FM 100-17 for joint doctrine disconnects and implement the required changes to the pertinent field manuals.
• Review FM 100-17 and FM 100-17-4 to make sure the responsibilities of the major commands are adequately discussed.
• Conduct a mission analysis to determine if doctrine supports the goal of sustaining overmatch capabilities across the spectrum of conflict.
Determine the operational impact of the Roberts Amendment, which prohibits use of funds for the deployment of U.S. armed forces to Yugoslavia, Albania, and Macedonia without congressional consultation, on alliance and coalition warfare.

Recommended closed but requiring a long implementation period
Revise publication FM 100-6, entitled Information Operations.
Accelerate the implementation of doctrine and associated tactics, techniques, and procedures related to the FM 3-13 action plan.
Peace support operations doctrine needs to be updated and more fully developed.
General support aviation doctrine and tactics, techniques, and procedures need to be developed and/or updated.
There is no available mission-training plan for the Tactical Terminal Control System.
Aviation war-fighting doctrine for unmanned aerial vehicle employment with Army aviation is needed.
Review the need to develop multi-service tactics, techniques, and procedures for Army aviation to support other services or functional components.
Refine doctrine to enable better integration of Army units into the joint command and control architecture.
Develop joint tactics, techniques, and procedures for the employment of aircraft survivability equipment.
Revise publication FM 100-5, entitled Operations.

Headquarters, Department of the Army (HQDA)
Revise publication FM 100-1, entitled The Army.
Revise doctrine to include the use of echelons above division elements in the deep attack mission.
The All Source Analysis System, which gathers and fuses battlefield information to produce a correlated threat picture, is incompatible with other systems.
Accelerate the timetable for fielding the next generation digital series of communications equipment. A 10-year fielding cycle is too slow.
Improved survival radios are needed for aviation units.
Upgrade Army aircraft communications capabilities to include satellite communication capabilities.
The Army requires an airborne battlefield command and control center to conduct deep attack missions over extended distances.
Joint intelligence tactics, techniques, and procedures are lacking.
Joint analysis is lacking. The primary problem in joint intelligence operations is a lack of service/joint interoperability of intelligence systems.
Additional facilities and capabilities to increase bandwidth within the intelligence and signal communities are needed.
Joint intelligence, doctrine, and training need to be better coordinated and integrated.
Second generation forward-looking infrared sensors are needed.
The Dual Datalink, which supports intelligence operations, must be replaced.
The Army space support team needs improved technologies, including a direct satellite downlink capability, to provide satellite imagery to the warfighter.
Command, control, communications, computers, and intelligence operations, organizations, and materiel for the Army in a supporting role need to be analyzed. (TRADOC has expanded this single issue to 32 separate issues.)
Determine the appropriate design and augmentation required to enable a division or corps to act as an Army Force Commander, which would provide command, control, communications, computers, and intelligence to the forces.
The current Battle Command Training Program fails to adequately address the joint/combined operational environment of current and future contingencies.
Increased individual, crew, and junior leader development training is needed.
Platoon Leader/Company Commander certification and training is inadequate as currently executed.
Increase the level of survival, evasion, resistance, and escape training.
A joint/combined multinational training event is required.
Increased officer, noncommissioned officer, and advanced individual training is needed.
Revise training to ensure new Apache helicopter pilots are basic mission qualified.
There is a need for signal intelligence survey teams in the Army.
Fully fund ammunition requirements for appropriate aviator training, to include advanced gunnery.
Provide a realistic radar threat generator for flight training. The current system replicates only a minimal number of threat systems.
U.S. Army Europe needs to continue efforts to remove, extend, or modify the current night flight, frequency management, and radar utilization restrictions in Germany to support training.
Simplify procedures for obtaining identification friend or foe interrogation training.
Require and resource for each attack squadron a complete Combat Maneuver Training Center force-on-force rotation.
Emphasize how the major commands fit into the Joint Deployment Process.
The services need to continually reinforce and train on joint deep operations in order to maximize warfighting capabilities.
Integrate high gross weight operations and complex terrain training into simulation mission scenarios.
Utilize simulation to drive training scenarios.
The aviation mission planning system's rehearsal tool for individual and crew utilization does not meet training requirements.
Review and ensure the applicability of digitized systems.
Develop a deployment training exercise with the objectives of understanding the deployment process and developing synchronized movement plans.
The Army needs to continue to support and deploy systems, such as the Deployable Weather Satellite Workstation, that autonomously process weather satellite imagery and data.

Recommended closed but requiring a long implementation period
Field a deployable flight mission rehearsal system.
Field a night vision system compatible with nuclear, biological, and chemical masks.
Develop and field a new time-phased force and deployment data system.
Upgrade Army aviation mission simulators.
Procure and field the aviation combined-arms training suite into brigade and below training.
Develop, resource, train, and sustain a combat search and rescue capability.
The Apache helicopter requires extended range/self-deployment fuel tanks that are crashworthy.
Upgrade Army aviation aircraft survivability equipment.
Modify the Apache Longbow to meet specific theater requirements, to include better night vision systems, more powerful engines, increased communications, and better aircraft survivability equipment.
The Army requires a self-contained lethal and non-lethal joint suppression of enemy air defenses capability.
Field additional tactical engagement simulation systems to the Combat Maneuver Training Center, in addition to what is currently funded for the Apache Longbow.
Fund the Apache helicopter self-deployment capability, to include instrument flight rules and an approved global positioning system.
Fund the procurement of aviation life support equipment for over-water operations.
The closed loop facility at Ramstein, Germany, requires additional equipment for major strategic air deployments.
U.S. Army Europe requires an alternate strategic deployment airfield.
Fund Robertson fuel tanks and rotor blade anti/de-ice capability.
Continue research and development of imagery transmission systems.
The Army deployed its team, called Task Force Hawk, to participate in a Kosovo combat operation known as Operation Allied Force. This report (1) examines how Task Force Hawk's concept of operation compared to Army and joint doctrine, (2) reviews the lessons learned identified from the operation and determines the status of actions to address those lessons, and (3) examines the extent to which the Army and the Air Force were able to operate together as a joint force. GAO concludes that Task Force Hawk's planned deep attacks against Serbian forces in Kosovo were consistent with doctrine but were not typical, in that the task force was supporting an air campaign rather than performing its more traditional role of operating in conjunction with Army ground forces to engage massed formations of enemy armor. The Army identified 107 items that require remedial action. As of January 2001, 47 of the 107 items had been recommended for closure; action is in process for the remaining 60 lessons. Finally, the Army and the Air Force experienced significant problems in their ability to work together jointly and in the interoperability of the command, control, communications, computers, and intelligence equipment used during the operation. The Army is working aggressively on both issues, but it will take time for results to be seen.
Each weekday, 11.3 million passengers in 35 metropolitan areas and 22 states use some form of rail transit (commuter, heavy, or light rail). Commuter rail systems typically operate on railroad tracks and provide regional service between a central city and adjacent suburbs. Commuter rail systems are traditionally associated with older industrial cities, such as Boston, New York, Philadelphia, and Chicago. Heavy rail systems—subway systems like New York City's transit system and Washington, D.C.'s Metro—typically operate on fixed rail lines within a metropolitan area and have the capacity for a heavy volume of traffic. Amtrak operates the nation's primary intercity passenger rail service over a 22,000-mile network, primarily over freight railroad tracks. Amtrak serves more than 500 stations (240 of which are staffed) in 46 states and the District of Columbia, and it carried more than 25 million passengers during fiscal year 2005.

According to passenger rail officials and passenger rail experts, certain characteristics of domestic and foreign passenger rail systems make them inherently vulnerable to terrorist attacks and therefore difficult to secure. By design, passenger rail systems are open, have multiple access points, are hubs serving multiple carriers, and, in some cases, have no barriers so that they can move large numbers of people quickly. In contrast, the U.S. commercial aviation system is housed in closed and controlled locations with few entry points. The openness of passenger rail systems can leave them vulnerable because operator personnel cannot completely monitor or control who enters or leaves the systems. In addition, other characteristics of some passenger rail systems—high ridership, expensive infrastructure, economic importance, and location (large metropolitan areas or tourist destinations)—also make them attractive targets for terrorists because of the potential for mass casualties and economic damage and disruption. Moreover, some of these same characteristics make passenger rail systems difficult to secure. For example, the number of riders that pass through a subway system—especially during peak hours—may make the sustained use of some security measures, such as metal detectors, difficult because they could result in long lines that could disrupt scheduled service. In addition, multiple access points along extended routes could make the cost of securing each location prohibitive. Balancing the potential economic impacts of security enhancements with the benefits of such measures is a difficult challenge.

Securing the nation's passenger rail systems is a shared responsibility requiring coordinated action on the part of federal, state, and local governments; the private sector; and the rail passengers who ride these systems. Since the September 11th attacks, the role of federal government agencies in securing the nation's transportation systems, including passenger rail, has continued to evolve. Prior to September 11th, FTA and FRA, within DOT, were the primary federal entities involved in passenger rail security matters. In response to the attacks of September 11th, Congress passed the Aviation and Transportation Security Act (ATSA), which created TSA within DOT and defined its primary responsibility as ensuring the security of all modes of transportation. The act also gave TSA regulatory authority for security over all transportation modes, though its provisions focus primarily on aviation security.
With the passage of the Homeland Security Act of 2002, TSA was transferred, along with over 20 other agencies, to the Department of Homeland Security. Within DHS, the Office of Grants and Training (OGT), formerly the Office for Domestic Preparedness (ODP), has become the federal source for security funding of passenger rail systems. OGT is the principal component of DHS responsible for preparing the United States for acts of terrorism and has primary responsibility within the executive branch for assisting and supporting DHS, in coordination with other directorates and entities outside of the department, in conducting risk analysis and risk management activities of state and local governments. In carrying out its mission, OGT provides training, funds for the purchase of equipment, support for the planning and execution of exercises, technical assistance, and other support to assist states, local jurisdictions, and the private sector in preventing, preparing for, and responding to acts of terrorism. OGT created and administers two grant programs focused specifically on transportation security: the Transit Security Grant Program and the Intercity Passenger Rail Security Grant Program. These programs provide financial assistance to address security preparedness and enhancements for passenger rail and transit systems. During fiscal year 2006, OGT provided $110 million to passenger rail transit agencies through the Transit Security Grant Program and about $7 million to Amtrak through the Intercity Passenger Rail Security Grant Program.

While TSA is the lead federal agency for ensuring the security of all transportation modes, FTA conducts safety and security activities, including training, research, technical assistance, and demonstration projects. In addition, FTA promotes safety and security through its grant-making authority. FRA has regulatory authority for rail safety over commuter rail operators and Amtrak and employs over 400 rail inspectors who periodically monitor the implementation of safety and security plans at these systems. State and local governments, passenger rail operators, and private industry are also important stakeholders in the nation's rail security efforts. State and local governments may own or operate a significant portion of the passenger rail system. Passenger rail operators, which can be public or private entities, are responsible for administering and managing passenger rail activities and services. Passenger rail operators can directly operate the service provided or contract for all or part of the total service. Although all levels of government are involved in passenger rail security, the primary responsibility for securing passenger rail systems rests with passenger rail operators.

Risk management is a tool for informing policy makers' decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty. In recent years, the President, through Homeland Security Presidential Directives (HSPDs), and Congress, through the Intelligence Reform and Terrorism Prevention Act of 2004, have directed federal agencies with homeland security responsibilities to apply risk-based principles in their decision making about allocating limited resources and prioritizing security activities. The 9/11 Commission recommended that the U.S.
government should identify and evaluate the transportation assets that need to be protected, set risk-based priorities for defending them, select the most practical and cost-effective ways of doing so, and then develop a plan, budget, and funding to implement the effort. In addition, DHS issued the National Strategy for Transportation Security in 2005, which describes the policies DHS will apply when managing risks to the security of the U.S. transportation system. We have previously reported that a risk management approach can help to prioritize and focus programs designed to combat terrorism. Risk management, as applied in the homeland security context, can help federal decision makers determine where and how to invest limited resources within and among the various modes of transportation.

The Homeland Security Act of 2002 also directed the department's Directorate of Information Analysis and Infrastructure Protection to use risk management principles in coordinating the nation's critical infrastructure protection efforts. This includes integrating relevant information, analysis, and vulnerability assessments to identify priorities for protective and support measures by the department, other federal agencies, state and local government agencies and authorities, the private sector, and other entities. Homeland Security Presidential Directive 7 and the Intelligence Reform and Terrorism Prevention Act of 2004 further define and establish critical infrastructure protection responsibilities for DHS and those federal agencies given responsibility for particular industry sectors, such as transportation. In June 2006, DHS issued the National Infrastructure Protection Plan (NIPP), which named TSA as the primary federal agency responsible for coordinating critical infrastructure protection efforts within the transportation sector. The NIPP requires federal agencies to work with the private sector to develop plans that, among other things, identify and prioritize critical assets for their respective sectors. As such, the NIPP requires TSA to conduct and facilitate risk assessments in order to identify, prioritize, and coordinate the protection of critical transportation systems infrastructure, as well as to develop risk-based priorities for the transportation sector.

To provide guidance to agency decision makers, we have created a risk management framework, which is intended to be a starting point for applying risk-based principles. Our risk management framework entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. DHS's National Infrastructure Protection Plan describes a risk management process that closely mirrors our framework. Setting strategic goals, objectives, and constraints is a key first step in applying risk management principles and helps to ensure that management decisions are focused on achieving a purpose. These decisions should take place in the context of an agency's strategic plan that includes goals and objectives that are clear and concise. These goals and objectives should identify resource issues and external factors that could affect their achievement. Further, an agency's goals and objectives should link to the department's overall strategic plan. The ability to achieve strategic goals depends, in part, on how well an agency manages risk.
The agency's strategic plan should address risk-related issues that are central to the agency's overall mission. Risk assessment, an important element of a risk-based approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application often involves assessing three key elements—threat, vulnerability, and criticality or consequence. A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. A criticality or consequence assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy, as a basis for identifying which structures or processes are relatively more important to protect from attack. Information from these three assessments contributes to an overall risk assessment that characterizes risks on a scale such as high, medium, or low and provides input for evaluating alternatives and management prioritization of security initiatives. The risk assessment element in the overall risk management cycle may be the largest change from standard management steps and can be important in informing the remaining steps of the cycle.

DHS component agencies have taken a variety of steps to assess the risk posed by terrorism to U.S. passenger rail systems. OGT developed and implemented a risk assessment methodology intended to help passenger rail operators better respond to terrorist attacks and prioritize security measures. Passenger rail operators must have completed a risk assessment to be eligible for financial assistance through the fiscal year 2007 OGT Transit Security Grant Program, which includes funding for passenger rail. To receive grant funding, rail operators are also required to have a security and emergency preparedness plan that identifies how the operator intends to respond to security gaps identified by risk assessments. As of January 2007, OGT had completed or planned to conduct risk assessments of most passenger rail operators. According to rail operators, OGT's risk assessment process enabled them to prioritize investments based on risk and to target and allocate resources toward security measures that will have the greatest impact on reducing risk across their systems.

TSA has also begun to assess risks to the passenger rail system. TSA has completed an overall threat assessment for both mass transit and passenger and freight rail modes. TSA has also conducted criticality assessments of nearly 700 passenger rail stations and has begun conducting assessments for other passenger rail assets, such as bridges and tunnels. TSA plans to rely on asset criticality rankings to prioritize the assets on which it will focus in conducting vulnerability assessments to determine which passenger rail assets are vulnerable to attack.
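To make the risk characterization described above concrete, the short sketch below shows one common way the three assessment elements can be combined into a high, medium, or low rating. This is a minimal illustration in Python, assuming a notional 1-3 rating scale, a multiplicative risk formulation, and arbitrary thresholds; it is not TSA's or OGT's actual methodology, and the asset names and ratings are hypothetical.

    # Illustrative sketch: combine threat, vulnerability, and consequence
    # ratings into an overall risk characterization. The 1-3 scale and the
    # thresholds below are assumptions, not an agency methodology.
    RATING = {"low": 1, "medium": 2, "high": 3}

    def risk_score(threat, vulnerability, consequence):
        # Mirrors the common risk = threat x vulnerability x consequence
        # formulation; scores range from 1 to 27.
        return RATING[threat] * RATING[vulnerability] * RATING[consequence]

    def characterize(score):
        # Map a numeric score onto the high/medium/low scale used to
        # prioritize assets (cutoffs are assumptions).
        if score >= 18:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    # Hypothetical assets, each rated (threat, vulnerability, consequence).
    assets = {
        "downtown subway station": ("high", "high", "high"),
        "suburban surface stop": ("medium", "high", "low"),
        "rail maintenance yard": ("low", "medium", "medium"),
    }
    # Rank assets so the highest-risk ones surface first, mirroring how
    # assessment results feed prioritization of security investments.
    for name, ratings in sorted(assets.items(), key=lambda kv: -risk_score(*kv[1])):
        score = risk_score(*ratings)
        print(f"{name}: {characterize(score)} (score {score})")

Under these assumptions, the downtown station would be characterized as high risk (score 27) and the maintenance yard as low risk (score 4), giving decision makers a simple, comparable basis for ordering vulnerability assessments and investments.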
For assets that are deemed to be less critical, TSA has developed a software tool that it has made available to passenger rail and other transportation operators to use on a voluntary basis to assess the vulnerability of their assets. Until all three assessments of passenger rail systems—threat, criticality, and vulnerability—have been completed, and until TSA determines how to use the results of these assessments to analyze and characterize the level of risk (high, medium, or low), it will be difficult to prioritize passenger rail assets and guide investment decisions about protecting them. Finalizing a methodology for assessing risk to passenger rail and other transportation assets and conducting risk assessments are also key steps in producing the Transportation Sector Specific Plan (TSSP) required by HSPD-7. According to TSA, the TSSP and supporting plans for each mode of transportation have been completed and are being reviewed by DHS and the White House Homeland Security Council. As of January 2007, however, TSA had not completed a comprehensive risk assessment of the passenger rail system.

As TSA, OGT, and other federal agencies, including DOT, move forward with risk assessment activities, DHS is developing a framework intended to help these agencies work with their stakeholders to assess risk. This framework is intended to help the private sector and state and local governments develop a consistent approach to analyzing risk and vulnerability across infrastructure types and across entire economic sectors, develop consistent terminology, and foster consistent results. The framework is also intended to enable a federal-level assessment of risk in general, and comparisons among risks, for purposes of resource allocation and response planning. DHS has informed TSA that this framework will provide overarching guidance to sector-specific agencies on how various risk assessment methodologies may be used to analyze, normalize, and prioritize risk within and among sectors. Because neither this guidance nor the framework as a whole has been finalized or provided to TSA or other sector-specific agencies, it is not clear what impact, if any, DHS's framework may have on the ongoing risk assessments conducted by, and the methodologies used by, TSA, OGT, and others, or whether or how DHS will be able to use the results to compare risks and prioritize homeland security investments among sectors. Until DHS finalizes this framework, and until TSA completes its risk assessment methodology, it will not be possible to determine whether the different methodologies used by TSA and OGT for conducting threat, criticality, and vulnerability assessments generate disparate qualitative and quantitative results or how they can best be compared and analyzed. In addition, coordinated risk assessments will help TSA and others avoid duplicative efforts and determine whether other agencies' risk assessment methodologies, and the data generated by these methodologies, can be leveraged to complete assessments required for the transportation sector.

In addition to the ongoing initiatives to enhance passenger rail security conducted by FTA and FRA before and after September 11, 2001, TSA issued security directives to passenger rail operators after the March 2004 terrorist attacks on the rail system in Madrid.
However, federal and rail industry stakeholders have questioned the extent to which these directives were based on industry best practices and expressed confusion about how TSA would monitor compliance with the directives. Since the completion of our work on passenger rail security, TSA has reported taking additional actions to strengthen the security of the passenger rail system. For example, TSA has tested rail security technologies, developed training tools for rail workers, and issued a proposed rule in December 2006 regarding passenger and freight rail security, among other efforts. TSA has also taken steps to better coordinate with DOT regarding rail security roles and responsibilities. The memorandum of understanding between DHS and DOT has recently been updated to include specific agreements between TSA and FTA and FRA that delineate, among other things, security-related roles and responsibilities for passenger rail and mass transit.

Prior to the creation of TSA in November 2001, FTA and FRA, within DOT, were primarily responsible for the security of passenger rail systems. These agencies undertook a number of initiatives to enhance the security of passenger rail systems after the September 11th attacks that are still in place today. Specifically, FTA launched a transit security initiative in 2002 that included security readiness assessments, technical assistance, grants for emergency response drills, and training. FTA instituted the Transit Watch campaign in 2003—a nationwide safety and security awareness program designed to encourage the participation of transit passengers and employees in maintaining a safe transit environment. The program provides information and instructions to transit passengers and employees so that they know what to do and whom to contact in the event of an emergency in a transit setting. FTA planned to continue this initiative, in partnership with TSA and OGT, and to offer additional security awareness materials that address unattended bags and emergency evacuation procedures for transit agencies. In addition, FTA has issued guidance, such as its Top 20 Security Program Action Items for Transit Agencies, which recommends measures for passenger rail operators to incorporate into their security programs to improve both security and emergency preparedness. FTA has also used research and development funds to develop guidance on security design strategies to reduce the vulnerability of transit systems to acts of terrorism. In November 2004, FTA provided rail operators with security considerations for transportation infrastructure. This guidance provides recommendations intended to help operators deter and minimize attacks against their facilities, riders, and employees by incorporating security features into the design of rail infrastructure.

FRA has also taken a number of actions to enhance passenger rail security since September 11, 2001. For example, it has assisted commuter railroads in developing security plans, reviewed Amtrak's security plans, and helped fund FTA security readiness assessments for commuter railroads. In the wake of the Madrid terrorist bombings in March 2004, nearly 200 FRA inspectors, in cooperation with DHS, conducted inspections of each of the 18 commuter railroads and Amtrak to determine what additional security measures had been put into place to prevent a similar occurrence in the United States. FRA has also conducted research and development projects related to passenger rail security.
These projects included rail infrastructure security and trespasser monitoring systems, as well as passenger screening and manifest projects, including explosives detection. Although FTA and FRA have played a supporting role in transportation security matters since the creation of TSA, they remain important partners in the federal government's efforts to strengthen rail security, given their role in funding and regulating the safety of passenger rail systems. Moreover, as TSA moves ahead with its passenger rail security initiatives, FTA and FRA are continuing their passenger rail security efforts.

In May 2004, TSA issued security directives to the passenger rail industry to establish standard security measures for all passenger rail operators, including Amtrak. However, as we previously reported, it was unclear how TSA developed the requirements in the directives, how TSA planned to monitor and ensure compliance, how rail operators were to implement the measures, and which entities were responsible for their implementation. According to TSA, the directives were based upon FTA and American Public Transportation Association best practices for rail security. Specifically, TSA stated that it consulted a list of the top 20 actions FTA identified that rail operators can take to strengthen security. While some of the directives correlate to information contained in the FTA guidance, the source for many of the directives is unclear. Amtrak and FRA officials also raised concerns about some of the directives. For example, FRA officials stated that current FRA safety regulations requiring that engineer compartment doors be kept unlocked to facilitate emergency escapes conflict with the TSA security directive requirement that doors equipped with locking mechanisms be kept locked. Other passenger rail operators we spoke to during our review stated that TSA did not adequately consult with the rail industry prior to developing and issuing these directives.

With respect to how the directives were to be enforced, rail operators were required to allow TSA and DHS to perform inspections, evaluations, or tests based on execution of the directives at any time or location. TSA officials stated that the agency has hired 100 surface transportation inspectors, whose mission is to, among other duties, monitor and enforce compliance with TSA's rail security directives. However, some passenger rail operators have expressed confusion and concern about the role of TSA's inspectors and the potential for TSA inspections to duplicate other federal and state rail inspections. TSA rail inspector staff stated that they were committed to avoiding duplication in the program and to communicating their respective roles to rail agency officials. According to TSA, since the initial deployment of surface inspectors, these inspectors have developed relationships with security officials in passenger rail and transit systems, coordinated access to operations centers, participated in emergency exercises, and provided assistance in enhancing security. We will continue to assess TSA's enforcement of rail security directives during our follow-on review of passenger rail security. In January 2007, TSA provided us with an update on additional actions it had taken to strengthen passenger rail security. We have not verified or evaluated these actions.
These actions include the following:

National explosive canine detection teams: Since late 2005, TSA reported that it has trained and deployed 53 canine teams to 13 mass transit systems to help detect explosives in the passenger rail system and serve as a deterrent to potential terrorists.

Visible Intermodal Prevention and Response Teams: This program is intended to provide teams of law enforcement, canine, and inspection personnel to mass transit and passenger rail systems to deter and detect potential terrorist actions. Since the program's inception in December 2005, TSA reported conducting more than 25 exercises at mass transit and passenger rail systems throughout the nation.

Mass Transit and Passenger Rail Security Information Sharing Network: According to TSA, the agency initiated this program in August 2005 to develop information sharing and dissemination processes regarding passenger rail and mass transit security across the federal government, state and local governments, and rail operators.

National Transit Resource Center: TSA officials stated that they are working with FTA and DHS OGT to develop this center, which will provide transit agencies nationwide with pertinent information related to transit security, including recent suspicious activities, promising security practices, new security technologies, and other information.

National Security Awareness Training Program for Railroad Employees: TSA officials stated that the agency has contracted to develop and distribute computer-based training for passenger rail, rail transit, and freight rail employees. The training will include information on identifying security threats, observing and reporting suspicious activities and objects, mitigating security incidents, and other related information. According to TSA, the training will be distributed to all passenger and freight rail systems.

Transit Terrorist Tool and Tactics: This training course is funded through the Transit Security Grant Program and teaches transit employees how to prevent and respond to a chemical, biological, radiological, nuclear, or explosive attack. According to TSA, this course was offered for the first time during the fall of 2006.

National Tunnel Security Initiative: This DHS and DOT initiative aims to identify and assess risks to underwater tunnels, prioritize security funding to the most critical areas, and develop technologies to better secure underwater tunnels. According to TSA, this initiative has identified a list of 29 critical underwater rail transit tunnels.

TSA has also sought to enhance passenger rail security by conducting research on technologies for screening passengers and checked baggage in the passenger rail environment. TSA conducted a Transit and Rail Inspection Pilot, a $1.5 million effort to test the feasibility of using existing and emerging technologies to screen passengers, carry-on items, checked baggage, cargo, and parcels for explosives. TSA officials told us that, based upon preliminary analyses, the screening technologies and processes tested would be very difficult to implement on heavily used passenger rail systems because these systems carry high volumes of passengers and have multiple points of entry. However, TSA officials added that the screening processes used in the pilot may be useful on certain long-distance intercity train routes, which make fewer stops.
Further, TSA officials stated that screening could be used either randomly or for all passengers during certain high-risk events or in areas where a particular terrorist threat is known to exist. For example, screening technology similar to that used in the pilot was used by TSA to screen certain passengers and belongings in Boston and New York rail stations during the 2004 Democratic and Republican national conventions. According to TSA, the agency is also researching and developing other passenger rail security technologies, including closed circuit television systems that can detect suspicious behavior, mobile passenger screening checkpoints to be used at rail stations, bomb-resistant trash cans, and explosive detection equipment for use in the rail environment.

More recently, in December 2006, TSA issued a proposed rule regarding passenger and freight rail security requirements. TSA's proposed rule would require that passenger and freight rail operators, certain facilities that ship or receive hazardous materials by rail, and rail transit systems take the following actions:

Designate a rail security coordinator to be available to TSA on a 24-hour, 7-day-a-week basis to serve as the primary contact for the receipt of intelligence and other security-related information.

Immediately report incidents, potential threats, and security concerns to TSA.

Allow TSA and DHS officials to enter their rail systems and conduct inspections, tests, and other duties.

Provide TSA, upon request, with the location and shipping information of rail cars that contain a specific category and quantity of hazardous materials within one hour of receiving the request from TSA.

Provide for a secure chain of custody and control of rail cars containing a specified quantity and type of hazardous material.

Public comments on the proposed rule are due in February 2007. TSA plans to review these comments and issue a final rule in the future.

With multiple DHS and DOT stakeholders involved in securing the U.S. passenger rail system, the need to improve coordination between the two departments has been a consistent theme in our prior work in this area. In response to a previous recommendation we made, DHS and DOT signed a memorandum of understanding (MOU) to develop procedures by which the two departments could improve their cooperation and coordination for promoting the safe, secure, and efficient movement of people and goods throughout the transportation system. The MOU defines broad areas of responsibility for each department. For example, it states that DHS, in consultation with DOT and affected stakeholders, will identify, prioritize, and coordinate the protection of critical infrastructure. The MOU between DHS and DOT represents an overall framework for cooperation that is to be supplemented by additional signed agreements, or annexes, between the departments. These annexes are to delineate the specific security-related roles, responsibilities, resources, and commitments for mass transit, rail, research and development, and other matters. TSA signed annexes to the MOU with FRA and FTA describing the roles and responsibilities of each agency regarding passenger rail security. These annexes also describe how TSA and these DOT agencies will coordinate security-related efforts, avoid duplicating these efforts, and improve coordination and communication with industry stakeholders. U.S.
passenger rail operators have taken numerous actions to secure their rail systems since the terrorist attacks of September 11, 2001, in the United States, and the March 11, 2004, attacks in Madrid. These actions included both improvements to system operations and capital enhancements to a system's facilities, such as tracks, buildings, and train cars. All of the U.S. passenger rail operators we contacted have implemented some types of security measures—such as increased numbers and visibility of security personnel and customer awareness programs—that were generally consistent with those we observed in select countries in Europe and Asia. We also identified three rail security practices—covert testing, random screening of passengers and their baggage, and centralized research and testing—utilized by foreign operators or their governments that were not utilized by domestic rail operators or the U.S. government at the time of our review. Both U.S. and foreign passenger rail operators we contacted have implemented similar improvements to enhance the security of their systems. A summary of these efforts follows.

Customer awareness: Customer awareness programs we observed used signage and announcements to encourage riders to alert train staff if they observed suspicious packages, persons, or behavior. Of the 32 domestic rail operators we interviewed, 30 had implemented a customer awareness program or made enhancements to an existing program. Foreign rail operators we visited also attempted to enhance customer awareness. For example, 11 of the 13 operators we interviewed had implemented a customer awareness program.

Increased number and visibility of security personnel: Of the 32 U.S. rail operators we interviewed, 23 had increased the number of security personnel they utilized since September 11th to provide security throughout their systems, or had taken steps to increase the visibility of their security personnel. Several U.S. and foreign rail operators we spoke with had instituted policies such as requiring their security staff to wear brightly colored vests and to patrol trains or stations more frequently, so that they are more visible to customers and to potential terrorists or criminals. These policies also make it easier for customers to contact security personnel in the event of an emergency or if they have spotted a suspicious item or person. At foreign sites we visited, 10 of the 13 operators had increased the number of their security officers throughout their systems in recent years because of the perceived increase in the risk of a terrorist attack.

Increased use of canine teams: Of the 32 U.S. passenger rail operators we contacted, 21 were using canines to patrol their facilities or trains. Often, these units are used to detect the presence of explosives and may be called in when a suspicious package is detected. In the foreign countries we visited, passenger rail operators' use of canines varied. In some Asian countries, canines were not culturally accepted by the public and thus were not used for rail security purposes. In contrast, as in the United States, most European passenger rail operators used canines for explosive detection or as deterrents.

Employee training: All of the domestic and foreign rail operators we interviewed had provided some type of security training to their staff, either through in-house personnel or an external provider. In many cases, this training consisted of ways to identify suspicious items and persons and how to respond to events once they occur.
For example, the London Underground and the British Transport Police developed the "HOT" method for their employees to use to identify suspicious items in the rail system. In the HOT method, employees are trained to look for packages or items that are Hidden, Obviously suspicious, and not Typical of the environment.

Passenger and baggage screening practices: Some domestic and foreign rail operators have trained employees to recognize suspicious behavior as a means of screening passengers. Eight U.S. passenger rail operators we contacted were utilizing some form of behavioral screening. Abroad, we found that 4 of the 13 operators we interviewed had implemented forms of behavioral screening. All of the domestic and foreign rail operators we contacted have ruled out the daily use of an airport-style screening system, in which each passenger and the passenger's baggage are screened by a magnetometer or X-ray machine, based on cost, staffing, and customer convenience factors, among other reasons.

Upgrading technology: Many rail operators we interviewed had embarked on programs designed to upgrade their existing security technology. For example, we found that 29 of the 32 U.S. operators had implemented a form of closed circuit television (CCTV) to monitor their stations, yards, or trains. While these cameras cannot be monitored closely at all times because of the large number of staff that would be required, many rail operators felt that the cameras acted as a deterrent, assisted security personnel in determining how to respond to incidents that had already occurred, and could be monitored if an operator had received information that an incident might occur at a certain time or place in its system. Abroad, all 13 of the foreign rail operators we visited had CCTV systems in place. In addition, 18 of the 32 U.S. rail operators we interviewed had installed new emergency phones or enhanced the visibility of the intercom systems they already had. As in the United States, a few foreign operators had implemented chemical or biological detection devices at their rail stations, but their use was not widespread: 2 of the 13 foreign operators we interviewed had implemented these sensors, and both were doing so on an experimental basis. In addition, police officers from the British Transport Police—responsible for policing the rail system in the United Kingdom—were equipped with pagers to detect chemical, biological, or radiological elements in the air, allowing them to respond quickly in the event of a terrorist attack using one of these methods.

Access control: Tightening access control procedures at key facilities or rights-of-way is another way many rail operators have attempted to enhance security. A majority of domestic and selected foreign passenger rail operators had invested in enhanced systems to control unauthorized access to employee facilities and stations. Specifically, 23 of the 32 U.S. operators had installed a form of access control at key facilities and stations. All 13 foreign operators had implemented some form of access control at their critical facilities or rights-of-way.

Rail system design and configuration: In an effort to reduce vulnerabilities to terrorist attack and increase security, passenger rail operators in the United States and abroad have been, or are now beginning to, incorporate security features into the design of new and existing rail infrastructure, primarily rail stations.
For example, of the 32 domestic rail operators we contacted, 22 had removed their conventional trash bins entirely or replaced them with transparent or bomb-resistant trash bins, as TSA instructed in its May 2004 security directives. Foreign rail operators had also taken steps to remove traditional trash bins from their systems. Of the 13 operators we visited, 8 had either removed their trash bins entirely or replaced them with blast-resistant cans or transparent receptacles. Many foreign rail operators are also incorporating aspects of security into the design of their rail infrastructure. Of the 13 operators we visited, 11 had attempted to design new facilities with security in mind and had retrofitted older facilities to incorporate security-related modifications. For example, one foreign operator we visited was retrofitting its train cars with windows that passengers could open in the event of a chemical attack. In addition, the London Underground incorporates security into the design of all its new stations, as well as when existing stations are modified. We observed several security features in the design of Underground stations, such as vending machines with no holes in which someone could hide a bomb and with sloped tops to reduce the likelihood that a bomb could be placed on top of the machine. In addition, stations are designed to provide staff with clear lines of sight to all areas of the station, such as underneath benches or ticket machines, and station designers try to eliminate or restrict access to any recessed areas where a bomb could be hidden. Figure 1 shows a diagram of several security measures, such as canine patrol units, that we observed in passenger rail stations both in the United States and abroad.

In our past work, we found that Amtrak faces security challenges unique to intercity passenger rail systems. First, Amtrak operates over thousands of miles, often far from large population centers. This makes its route system more difficult to patrol and monitor than one contained in a particular metropolitan region, and it causes delays in responding to incidents when they occur in remote areas. Also, outside the Northeast Corridor, Amtrak operates almost exclusively on tracks and in stations owned by freight rail companies. This means that Amtrak often cannot make security improvements to others' rights-of-way or station facilities and that it is reliant on the staff of other organizations to patrol those facilities and respond to incidents that may occur. Furthermore, with over 500 stations, only half of which are staffed, screening even a small portion of the passengers and baggage boarding Amtrak trains is difficult. Finally, Amtrak's financial condition has never been strong—Amtrak has been on the edge of bankruptcy several times.

Amid the ongoing challenges of securing its coast-to-coast railway, Amtrak has taken some actions to enhance security throughout its intercity passenger rail system. For example, Amtrak initiated a passenger awareness campaign; began enforcing restrictions on carry-on luggage that limit passengers to two carry-on bags, not exceeding 50 pounds; began requiring passengers to show identification after boarding trains; increased the number of canine units patrolling its system looking for explosives or narcotics; and assigned some of its police to ride trains in the Northeast Corridor. Also, Amtrak instituted a policy of randomly inspecting checked baggage on its trains.
Lastly, Amtrak is making improvements to the emergency exits in certain tunnels to make evacuating trains in the tunnels easier in the event of a crash or terrorist attack.

While many of the security practices we observed in foreign rail systems are similar to those U.S. passenger rail operators are implementing, we identified three foreign practices that, as of September 2005, were not in use among the U.S. passenger rail operators we contacted and were not performed by the U.S. government. These practices are as follows.

Covert testing: Two of the 13 foreign rail systems we visited utilized covert testing to keep employees alert to their security responsibilities. Covert testing involves security staff staging unannounced events to test the response of railroad staff to incidents such as suspicious packages or tripped alarms. In one European system, this covert testing involves security staff placing suspicious items throughout the system to see how long it takes operating staff to respond to the item. Similarly, one Asian rail operator's security staff will break security seals on fire extinguishers and open alarmed emergency doors at random to see how long it takes staff to respond. TSA conducts covert testing of passenger and baggage screening in aviation but has not conducted such testing in the rail environment.

Random screening: Of the 13 foreign operators we interviewed, 2 had some form of random screening of passengers and their baggage in place. Prior to the July 2005 London bombings, no passenger rail operators in the United States were practicing random passenger or baggage screening. However, during the Democratic National Convention in 2004, the Massachusetts Bay Transportation Authority (MBTA) instituted a system of random screening of passengers.

National government clearinghouse on technologies and best practices: According to passenger rail operators in five countries we visited, their national governments had centralized the process for performing research and development of passenger rail security technologies and maintained a clearinghouse of technologies and security best practices for passenger rail operators. No U.S. federal agency had compiled or disseminated information on research and development and other best practices for U.S. rail operators.

Implementing covert testing, random screening, or a government-sponsored clearinghouse for technologies and best practices in the United States could pose political, legal, fiscal, and cultural challenges because of the differences between the United States and these foreign nations. Many foreign nations have dealt with terrorist attacks on their public transportation systems for decades, whereas rail in the United States has not been specifically targeted by terrorists. According to foreign rail operators, these experiences have resulted in greater acceptance of certain security practices, such as random searches, which the U.S. public may view as a violation of civil liberties or which may discourage them from using public transportation. The impact of security measures on passengers is an important consideration for domestic rail operators, since most passengers could choose another means of transportation, such as a personal automobile. As such, security measures that limit accessibility, cause delays, increase fares, or otherwise cause inconvenience could push people away from rail and into their cars.
In contrast, the citizens of the European and Asian countries we visited are more dependent on public transportation than most U.S. residents and therefore may be more willing to accept intrusive security measures. Nevertheless, in order to identify innovative security measures that could help further mitigate terrorism risks to rail assets—especially as part of the broader risk management approach discussed earlier—it is important to consider the feasibility, costs, and benefits of implementing the three rail security practices we identified in foreign countries. Officials from DHS, DOT, passenger rail industry associations, and rail systems we interviewed told us that operators would benefit from such an evaluation. Since our report on passenger rail security was issued, TSA has reported taking steps to coordinate with foreign passenger rail operators and governments to identify security best practices. For example, TSA reported working with British rail security officials to identify best practices for detecting and handling suspicious packages in rail systems.

In conclusion, Mr. Chairman, the July 2005 London rail bombings made clear that even when a variety of security precautions are put into place, passenger rail systems that move high volumes of passengers daily remain vulnerable to attack. DHS components have taken steps to assess the risks to the passenger rail system. However, enhanced federal leadership is needed to help ensure that actions and investments designed to enhance security are properly focused and prioritized so that finite resources may be allocated appropriately to help protect all modes of transportation. Specifically, both DHS and TSA should take additional steps to help ensure that the risk management efforts under way clearly and effectively identify priority areas for security-related investments in rail and other transportation modes. TSA has not yet completed its methodology for determining how the results of threat, criticality, and vulnerability assessments will be used to identify and prioritize risks to passenger rail and other transportation sectors. Until the overall risk to the entire transportation sector is identified, TSA will not be able to determine where and how to target limited resources to achieve the greatest security gains. Once risk assessments for the passenger rail industry have been completed, it will be critical to be able to compare assessment results across all transportation modes and make informed, risk-based investment trade-offs. It is important that DHS complete its framework to help ensure that risks to all sectors can be analyzed and compared in a consistent way. Until this framework is complete, it will be difficult for agencies to reconcile information from different sectors to allow for a meaningful comparison of risk.

Apart from its efforts to identify risks, TSA has taken steps to enhance the security of the passenger rail system. The issuance of security directives in 2004 was a well-intentioned effort but did not provide the industry with security standards based on industry best practices. It is also not clear how TSA will enforce these directives. Consequently, neither the federal government nor rail operators can be sure they are requiring and implementing security practices proven to help prevent or mitigate disasters. While foreign passenger rail operators face similar challenges in securing their systems and have generally implemented security practices similar to those of U.S.
rail operators, some practices utilized abroad have not been studied by U.S. rail operators or the federal government in terms of their feasibility, costs, and benefits. In our September 2005 report on passenger rail security, we recommended, among other things, that TSA establish a plan with timelines for completing its methodology for conducting risk assessments and develop security standards that reflect industry best practices and can be measured and enforced. These actions should help ensure that the federal government has the information it needs to prioritize passenger rail assets based on risk and to evaluate, select, and implement measures to help passenger rail operators protect their systems against terrorism. In addition, we recommended that the Secretary of DHS, in collaboration with DOT and the passenger rail industry, determine the feasibility, in a risk management context, of implementing certain security practices used by foreign rail operators. DHS generally agreed with the report's recommendations, but as of January 2007, it had not told us what specific actions it was taking to implement them. We will continue to assess DHS's and DOT's efforts to secure the U.S. passenger rail system during follow-on work to be initiated later this year.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have at this time. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404. Individuals making key contributions to this testimony include John Hansen, Assistant Director; Chris Currie; and Tom Lombardi.

Passenger Rail Security: Evaluating Foreign Security Practices and Risk Can Help Guide Security Efforts. GAO-06-557T. Washington, D.C.: March 29, 2006.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-06-181T. Washington, D.C.: October 20, 2005.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-05-851. Washington, D.C.: September 9, 2005.
Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005.
Rail Security: Some Actions Taken to Enhance Passenger and Freight Rail Security, but Significant Challenges Remain. GAO-04-598T. Washington, D.C.: March 23, 2004.
Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: September 9, 2003.
Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003.
Rail Safety and Security: Some Actions Already Taken to Enhance Rail Security, but Risk-based Plan Needed. GAO-03-435. Washington, D.C.: April 30, 2003.
Transportation Security: Post-September 11th Initiatives and Long-term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.
Mass Transit: Federal Action Could Help Transit Agencies Address Security Challenges. GAO-03-263. Washington, D.C.: December 13, 2002.
Mass Transit: Challenges in Securing Transit Systems. GAO-02-1075T. Washington, D.C.: September 18, 2002.
The July 2005 London subway bombings and the July 2006 rail attacks in Mumbai, India, dramatically revealed the vulnerability of passenger rail and other surface transportation systems worldwide to terrorist attack and demonstrated the need for increased focus on the security of these systems. This testimony, which is based primarily on GAO's September 2005 report on passenger rail security (GAO-05-851) and selected program updates obtained in January 2007, provides information on (1) how the Department of Homeland Security (DHS) has assessed the risks posed by terrorism to the U.S. passenger rail system; (2) actions TSA and other federal agencies have taken to enhance the security of U.S. rail systems; and (3) rail security practices implemented by domestic and selected foreign passenger rail operators.

The DHS Office of Grants and Training has conducted risk assessments of passenger rail systems to identify and protect rail assets that are vulnerable to attack, such as stations and bridges. TSA has also begun to conduct risk assessments of passenger rail assets. While TSA has begun to establish a methodology for analyzing and characterizing risks, as of January 2007, the agency has not completed a comprehensive risk assessment of the U.S. passenger rail system. Until TSA does so, the agency may be limited in its ability to prioritize passenger rail assets and help guide security investments. DHS has also begun developing a framework to help agencies and the private sector develop a consistent approach for analyzing and comparing risks among and across different transportation sectors. However, until this framework is finalized, it may not be possible to compare risks across different sectors, prioritize them, and allocate resources accordingly.

After September 11, 2001, the Department of Transportation initiated a number of efforts to improve passenger rail security. After its creation, TSA also took a number of actions, including issuing rail security directives, testing rail security technologies, developing training tools for rail workers, and issuing a proposed rule in December 2006 regarding passenger and freight rail security, among other efforts. However, federal and rail industry stakeholders have questioned the extent to which TSA's directives were based on industry best practices and expressed confusion about how TSA would monitor compliance with the directives. DHS and DOT also signed a memorandum of understanding (MOU) that delineated the two departments' respective roles and responsibilities for promoting the safe, secure, and efficient movement of people and goods throughout the transportation system. TSA has recently completed specific agreements with the Federal Transit Administration (FTA) and the Federal Railroad Administration (FRA) to further delineate security-related roles and responsibilities for passenger rail.

U.S. and foreign passenger rail operators GAO visited have also taken actions to secure their rail systems. Most had implemented customer security awareness programs, increased security personnel, increased the use of canines to detect explosives, and enhanced employee training programs. GAO also observed security practices among foreign passenger rail systems that are not currently used by U.S. rail operators or by the U.S. government and that could be considered for use in the United States. For example, some foreign rail operators randomly screen passengers or use covert testing to help keep employees alert to security threats.
While introducing these security practices in the United States may pose political, legal, fiscal, and cultural challenges, they warrant further examination. TSA has reported taking steps to identify foreign best practices for rail security.
Through OoC, USCIS manages the programs and activities within DHS that are most directly associated with the civic integration of immigrants. OoC was created by the Homeland Security Act of 2002 with a mandate to promote instruction and training on citizenship responsibilities for immigrants interested in becoming naturalized citizens of the United States. To accomplish this, OoC competitively awards grants to help immigrant-serving organizations implement citizenship preparation programs aimed at promoting civic integration through naturalization; conducts outreach and public education activities to promote and raise awareness of citizenship and immigrant integration; and develops educational resources on the naturalization process for lawful permanent residents and organizations that help prepare immigrants for citizenship. Outreach activities may include providing technical assistance to providers of citizenship education services; conducting training workshops for adult educators and volunteers on how to prepare immigrants for citizenship; making presentations to educate and inform immigrants about the process of becoming a U.S. citizen; and attending conferences, organizing special naturalization ceremonies, and participating in public events to help promote an awareness of citizenship and immigrant integration. Table 1 summarizes USCIS's programs and activities that aim to support the citizenship aspect of immigrant integration and the goals for those programs and activities.

To implement its immigrant integration programs and activities, OoC uses funds from two sources—funds that are provided through the appropriations process and funds that are allocated from USCIS's Immigration Examinations Fee Account. This account consists of fees that USCIS collects from persons filing for immigration benefits (for example, the fees charged to persons who file for naturalization) and deposits into the fee account. As shown in table 2, OoC uses (1) appropriated funds to support its grant program, including the grant awards, grant program staff salaries, and other grant administration expenses, as well as citizenship public education and awareness activities, and (2) fee account funds for non-grant program staff salaries and expenses and for activities including citizenship educational materials development and dissemination, teacher training, naturalization test implementation, and other operational expenses. Importantly, nearly half of OoC's budget over the past 3 years—$19.8 million of $42.6 million (46 percent)—was allocated to its grant program, which includes the funds awarded as well as program administration and operational expenses for the grant program.

In addition to OoC's immigrant integration activities, USCIS's Office of Public Engagement and community relations officers within USCIS's district and field offices devote a portion of their time to conducting citizenship-related outreach activities. Specifically, the Office of Public Engagement assists OoC with carrying out its citizenship outreach initiatives on a national level and, at the local level, USCIS districts are required each quarter to conduct at least one citizenship-related outreach activity in the form of a naturalization information session. Providing integration support to immigrants is a multifaceted effort that is dispersed across governmental and nongovernmental sectors.
At the federal level, a wide array of federal programs provide assistance to immigrants and support various aspects of immigrant integration, but those programs are not specifically categorized as directly supporting integration. Based on data that the Office of Management and Budget collected in September 2010, 13 agencies across the federal government reported offering a total of 79 programs that either directly or indirectly supported immigrant integration. These federal agencies self-identified programs they perceived as supporting immigrant integration because, according to OoC, there is no standard programmatic definition for immigrant integration. As part of OoC's review of these data, it placed these 79 programs into four categories it identified as broad areas of immigrant integration, as shown in table 3. The federal programs reported providing immigrant integration support by, among other things, making grants, establishing partnerships, and providing direct services. Civic integration included OoC's citizenship programs; economic integration included refugee resettlement assistance provided by the Department of Health and Human Services; and linguistic integration included English language acquisition grants provided to states by the Department of Education. The data do not represent all federal programs that support immigrant integration and do not provide a complete estimate of federal funding because a number of programs did not report a funding amount. Additionally, agency officials self-identified the programs they perceived as supporting integration. Further, the information collected by the Office of Management and Budget included some programs that served the general population but included immigrant populations as a subset. For example, the Department of Agriculture's National School Lunch Program offers low-cost or free lunches to children from low-income families, immigrants and nonimmigrants alike. Additional examples of federal, state, and local immigrant integration programs and efforts are described in appendix II.

From 2008 to 2011, OoC reported conducting more than 300 significant outreach events to promote citizenship awareness and civic integration and to establish partnerships with governmental and nongovernmental organizations to help encourage immigrants' civic integration. Significant outreach efforts could include conferences, special naturalization ceremonies organized by OoC, meetings and training events, and presentations to encourage immigrants to become more integrated into American civic culture. Based on OoC's fiscal year 2011 quarterly reports, examples of its significant outreach activities included meetings with representatives from the Colorado Immigrant Rights Coalition to discuss state and local immigrant integration initiatives and discussions with the Colorado African Organization on OoC's tools and how they could be used to promote citizenship and immigrant integration. OoC also met with the National League of Cities to discuss how OoC could support its efforts to promote citizenship and immigrant integration. OoC reported that since fiscal year 2008, it has held 86 citizenship education training workshops for nearly 6,000 adult educators and volunteers working with immigrants across the country. OoC training workshops are designed to enhance the skills needed to teach U.S. history, civics, and the naturalization process to immigrant students.
For the 36 requests for training workshops that OoC reported receiving in fiscal year 2011, it conducted 32 training workshops across 22 states. In fiscal year 2011, as part of its Citizenship Public Education and Awareness Initiative, OoC launched public service announcements to raise awareness about the rights, responsibilities, and importance of U.S. citizenship and the free educational tools and resources available to help eligible permanent residents prepare for citizenship. In addition to OoC, the community relations officers in USCIS district and field offices conduct citizenship outreach. USCIS field offices reported that during fiscal year 2011, they held 444 naturalization information sessions for more than 22,600 attendees. All representatives we interviewed from 18 governmental and nongovernmental organizations, including grantees and subgrantees, told us that USCIS's naturalization information sessions have helped reach lawful permanent residents eligible to naturalize and influenced their preparation for and decision to become citizens. USCIS has also sponsored special naturalization ceremonies across the country, and since 2006, it has partnered with the National Park Service to hold ceremonies at 22 national park sites.

OoC offers a variety of free publications and web-based resources to educate immigrants on the citizenship and naturalization process; help adult educators and organizations prepare immigrants for acquiring citizenship; and help facilitate a smoother transition for immigrants into their communities. Some examples of the publications and web-based resources offered by OoC, and of the agencies with which OoC has formed partnerships to develop and enhance these resources, are provided in table 4. As of August 2011, OoC reported distributing over 29,000 copies of its Civics and Citizenship Toolkit. In March 2010, OoC distributed the Toolkit to all public libraries in the City of Los Angeles to help librarians assist eligible immigrants who are seeking naturalization. This effort was part of the partnership between OoC and the City of Los Angeles to promote citizenship and civic integration. All representatives we interviewed from 18 governmental and nongovernmental organizations, including grantees and subgrantees serving immigrants, told us that they have used OoC's publications to provide immigrants with information on becoming U.S. citizens and getting settled in the United States. Some also told us that they rely on OoC's educational resources to provide consistent information to immigrants on the naturalization process, and some grant recipients include them in their English and citizenship classes. Some also stated that USCIS's redesign of the naturalization test and OoC's naturalization test study materials have helped immigrants prepare for naturalization and relate to basic concepts about the structure of government and American history. Additionally, several of the local government officials we spoke to indicated that because their offices often serve as clearinghouses for immigrant communities and refer individuals to local services or community-based organizations to assist them with various aspects of immigrant integration, including citizenship, they often access USCIS's Citizenship Resource Center website for information when responding to immigrant requests for assistance on the naturalization process.
Through OoC's Citizenship and Integration Grant Program, which provides support for citizenship education and naturalization preparation, USCIS aims to help immigrants become civically integrated members of their communities. USCIS officials told us that the agency's role in immigrant integration is limited to involvement in civic integration, with programs and initiatives designed to support immigrants on the path to citizenship, because USCIS has no legislative directive mandating it to support other aspects of integration. Further, OoC officials told us that the agency faces uncertainty from year to year as to whether the program will continue to exist, as the grant program has no authorizing statute and operates under annual DHS appropriations. With funding in fiscal years 2009 through 2011, OoC provided grants to a myriad of governmental and nongovernmental organizations, including public school systems, community colleges, community and faith-based organizations, adult education organizations, public libraries, and literacy organizations, under the following grant categories:

Direct Services Grant – Citizenship Instruction Only. This grant provides funding to help grantees prepare lawful permanent residents for the civics and English (reading, writing, and speaking) components of the naturalization test. Grantees are required to provide U.S. history and government instruction and civics-focused English as a second language instruction.

Direct Services Grant – Citizenship Instruction and Naturalization Application Services. This grant funds activities aimed at providing the citizenship instruction discussed above, as well as assisting lawful permanent residents with completing their naturalization applications and preparing them for the naturalization interview.

National Capacity Building Grant. This grant is intended to provide federal funding to eligible national, regional, or statewide organizations with multiple sites to build capacity among their local affiliates/members to promote immigrant integration through direct citizenship services to lawful permanent residents. The funds are intended to provide support for organizations' program management, organizational capacity building, and technical assistance, as well as for affiliates/members to develop and implement sustainable local citizenship preparation programs.

During the first year of the grant program in fiscal year 2009, OoC received 293 applications for citizenship grant funds and made competitive 1-year grant awards totaling $1.2 million to 13 organizations to help them improve and enhance their existing citizenship assistance programs. For this first year of the grant program, OoC reported that nearly 55,000 immigrants were provided outreach, received direct citizenship services, or both. Of the approximately 55,000 immigrants who received services from grantees, OoC reported that about 50,000 received information on citizenship preparation and the rights and responsibilities of citizenship through outreach activities, and an estimated 5,000 or more participated in a citizenship education class or received assistance with the naturalization application. Specifically, grantees used funds to provide services to a range of immigrants defined by USCIS as priority immigrant groups, including elderly immigrants in retirement communities in San Diego, California; low-income immigrants in New York; and refugees with low literacy levels in St. Louis, Missouri.
Grantees also built community partnerships to strategically increase their impact. For example, one grantee in Providence, Rhode Island, used grant funds to strengthen a consortium of five immigrant-serving organizations. This consortium accounted for 28 percent of all fiscal year 2009 naturalizations reported under the grant program.

For fiscal year 2010, the second year of the grant program, OoC received 365 applications and made competitive 1-year grant awards totaling $8 million to 56 organizations. Of these, 48 grants were awarded to help organizations provide direct services to lawful permanent residents, and 8 grants were awarded to help national immigrant-serving organizations with member/affiliate structures provide technical assistance to increase the long-term capacity of the subapplicants to provide direct services to lawful permanent residents. As of the fourth quarter, which ended on September 30, 2011, these organizations reported that, for fiscal year 2010, more than 21,480 immigrants were provided services and about 12,747 immigrants had enrolled in a citizenship education instruction course. Based on quarterly reports submitted by grantees to OoC, grantees reported providing naturalization application services to about 15,094 program participants, of whom about 7,277 submitted an application for naturalization and 3,122 had naturalized. Fiscal year 2010 was the first year that OoC awarded national capacity building grants specifically designed to allow organizations to establish or enhance local citizenship preparation programs through eligible service providers. Examples of local capacity building activities proposed by grantees included addressing the unmet educational needs of low-income adults with limited English proficiency and literacy in Atlanta, Georgia, and Nashville, Tennessee; Vietnamese immigrant communities in Houston, Texas; and refugees and immigrants in Erie, Pennsylvania, and Raleigh, North Carolina, among others.

For fiscal year 2011, the third year of the grant program, OoC received 324 applications and, in September 2011, made competitive 2-year grant awards totaling $9 million to 42 organizations. Specifically, of the 106 applicants for Citizenship Instruction Only grants, USCIS awarded approximately $1.6 million to 11 organizations. Of the 195 applications for Citizenship Instruction and Naturalization Application Services grants, USCIS awarded approximately $5.6 million to 28 organizations, and of the 23 applicants for the National Capacity Building grant, USCIS awarded approximately $1.8 million to 3 organizations.

Several of the grantees and subgrantees we spoke with told us that OoC's grant program has helped them to, among other things, address their clients' need to improve their English language skills so they can pass the naturalization test. In Chicago, Illinois, where we met with two organizations that received direct service grants, representatives from one organization told us it was using funds to establish two additional citizenship instruction courses aimed at low-literate and preliterate Latinos with less than a first-grade reading level. They also told us that the curriculum for these courses was developed to achieve a seventh-grade reading level, which the organization identified as the level needed to pass the naturalization test.
An additional organization had developed and disseminated information specifically for lawful permanent residents who have suffered from domestic violence, persecution, and other abuses, which can interfere with their ability to seek assistance for acquiring citizenship. Another grantee in Los Angeles, California, had directed funds to serve lawful permanent residents who had suffered persecution, working with local faith-based groups to reach out to these individuals. A direct services grantee in Baltimore, Maryland, told us it used funds to sustain its ability to provide tuition-free English language and citizenship instruction because its previous funding sources had been cut. The participants we spoke with at this site said that their reasons for participating in the program were to learn English so they could communicate better and be self-sufficient and to obtain citizenship so they could gain better employment and higher education opportunities. One participant told us that the free instruction she was receiving helped increase her English proficiency and motivated her to seek other opportunities to continue learning. She also indicated that she had submitted an application for naturalization and continues to prepare for the citizenship test as a result of the citizenship instruction that she received.

Of the three organizations we contacted that had received funding under the National Capacity Building grant, representatives from one national capacity building organization told us they are expanding services to Haitians and Cubans in Greensboro, North Carolina. Representatives from a subgrantee told us that the grant has allowed them to access updated and innovative materials, curricula, and teaching methods and has helped them expand the grantee's program, which previously provided only legal and court services to Iranians, Iraqis, and Armenians, to also serve clients on the path to citizenship. Representatives from another organization told us that they planned to provide services to over 1,700 immigrants through four subgrantees establishing citizenship services under the grant.

Grantees can use their Citizenship and Integration Grant Program funds on a variety of activities. For reporting purposes, OoC classifies these eligible activities into (1) citizenship instruction (e.g., instruction in English as a second language and U.S. history and government); (2) outreach and training (e.g., staff and volunteer training); and (3) naturalization application services (e.g., assistance with preparing and completing naturalization applications), as shown in table 5. For each of these categories of activities, OoC has required grantees to collect and report data on program outputs, which measure the quantity of program activities and other deliverables, such as the number of participants enrolled in grantees' citizenship instruction and naturalization preparation programs. In addition to these outputs, OoC has collected some information on outcome measures to demonstrate the extent to which grantees' programs are helping program participants complete the naturalization process, such as data on participants' naturalization examination results and the proportion of participants who received grantees' services and self-reported that they naturalized during the year of the grant program.
In January 2011, USCIS reported on the results of its measures for the fiscal year 2009 grant program based on data submitted by grantees. However, USCIS identified limitations on the reported results. Specifically, USCIS reported that its data on the number of participants who received naturalization application services and passed the naturalization examination, and who ultimately became naturalized, were incomplete in part because grantees relied on data that were self-reported by program participants, and not all program participants reported to grantees whether they passed the naturalization examination and naturalized. Further, there was a time lag between when program participants received naturalization application services and when they passed their naturalization examinations and became naturalized. For example, USCIS reported that for fiscal year 2009, 1,804 participants received naturalization application services and submitted a naturalization application during the 1-year grant performance period. Of the 1,804 participants, OoC estimated that about 46 percent submitted the application during the third and fourth quarters of the year. USCIS reported that because its average time for completing the processing of naturalization applications was 4.7 months, it was possible that program participants who applied for naturalization toward the end of the year naturalized after the grant performance period ended. USCIS did not ask grantees to collect information on naturalization examination results and the number of naturalizations of program participants that occurred after the grant period ended.

To help address this issue, for the fiscal year 2010 and 2011 grant programs, OoC provided additional guidance and technical assistance to grantees on how to collect and report program data. These included holding training sessions on grant program reporting guidelines, the types of reports to use in collecting and reporting data on a quarterly basis, and strategies for compiling data and activities from the grant performance period to prepare and submit final reports to OoC. Additionally, OoC gave grantees 3 additional months beyond the end of the grant performance period to collect and report information on the number of participants who passed the naturalization examination and naturalized. However, USCIS continues to face two inherent challenges in collecting complete data on grantees' performance. First, grantees must rely on program participants to provide them with information on their naturalization examination results and naturalization status; according to USCIS, it is not feasible for grantees to obtain these data for all participants served through grant-funded programs because, among other things, participants may choose not to report their results to grantees or may decide not to naturalize. USCIS has instructed grantees to develop a plan for working to obtain self-reported data for all program participants, but it acknowledges that grantees may not be able to obtain complete data from all program participants. Second, while USCIS has extended the time period for grantees to report program data, USCIS may not complete the processing of naturalization applications submitted by program participants near the end of the performance period, given the average reported application processing time of 4.7 months.
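To make this time lag concrete, the short sketch below works through the arithmetic using only the 4.7-month average processing time and the 3-month reporting extension reported above; treating submissions as occurring at the start of a given month of a 12-month performance period is an illustrative assumption, not USCIS data.

# Illustrative arithmetic for the reporting time lag described above.
# The 4.7-month average processing time and 3-month reporting extension
# come from the report; the per-month submission dates are assumptions.

AVG_PROCESSING_MONTHS = 4.7
GRANT_PERIOD_MONTHS = 12          # 1-year performance period
REPORTING_EXTENSION_MONTHS = 3    # extra time OoC allowed for final reports

def decided_before_reporting_closes(submission_month):
    """True if an application filed at the start of the given month would,
    on average, be decided before the extended reporting window ends."""
    expected_decision = submission_month + AVG_PROCESSING_MONTHS
    return expected_decision <= GRANT_PERIOD_MONTHS + REPORTING_EXTENSION_MONTHS

# Months 7-12 cover the third and fourth quarters, where OoC estimated
# about 46 percent of fiscal year 2009 applications were submitted.
for month in range(7, 13):
    status = "decided" if decided_before_reporting_closes(month) else "still pending"
    print(f"Filed at start of month {month}: on average, {status} at reporting close")

Under these assumptions, the 3-month extension captures most third- and fourth-quarter filings, but applications filed in roughly the last month and a half of the period would, on average, still be pending when reporting closes, which is the residual gap USCIS describes.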
To further strengthen its measurement of the performance of its grant programs, USCIS announced, in the fiscal year 2011 grant solicitations, its plan to conduct an evaluation of the Citizenship and Integration Grant Program. However, USCIS has not yet conducted such an evaluation. In January 2011, USCIS drafted a statement of work for a contractor to refine the strategic plan for the grant program and develop an evaluation plan that would allow USCIS to measure the grant program's performance and long-term impact, and which may provide options to help address these limitations. According to USCIS, it did not complete this statement of work or award a contract for an evaluation plan because, at that time, the agency was uncertain whether it would receive appropriations in fiscal year 2011 to continue the grant program. DHS's fiscal year 2011 appropriations act, enacted in April 2011, allowed the use of appropriated funds for the grant program. However, USCIS did not proceed with finalizing its statement of work or contracting for development of an evaluation plan because it was unsure whether it would receive funding in fiscal year 2012 for the grant program. USCIS requested about $19.7 million for fiscal year 2012 to fund through appropriations all of OoC's programs and activities, including the grant program.

In November 2011, USCIS reported that it plans to conduct an internal evaluation of the grant program in fiscal year 2012 by, among other things, assessing grantee data against stated program goals, program assumptions, inputs, program activities, output targets, and outcomes. According to USCIS, these data will be used to determine how resources, activities, and outputs link together to meet short-term performance metrics and longer-term outcomes (program goals) of the grant program. Further, contingent on the availability of funds, USCIS reported that in fiscal year 2013 it plans to contract for an external evaluation of the overall grant program. USCIS intends for the contractor to examine how well the current evaluation methodology measures the program's success in meeting its goals, identify what aspects of the program contributed to those achievements, and discover what barriers exist to the program achieving its ideal results. Additionally, USCIS intends for the contractor to make recommendations for improving evaluation methods for the grant program and the effectiveness of program administration. Internal and external evaluations, such as those that USCIS has announced its intention to implement, could help the agency reassess the goals, objectives, and measures of its grant program, including helping to address inherent challenges with USCIS's current measures, and better evaluate the extent to which the program is achieving those goals and objectives.

Because of its uncertainty about whether the grant program will continue to receive funding, USCIS has not yet established interim milestones for its internal and external evaluations of the grant program, such as milestones for initiating and completing the evaluations. Although USCIS has stated its intention to conduct grant program evaluations in fiscal years 2012 and 2013, it also announced plans for an external evaluation in fiscal year 2011 but, as indicated, did not initiate or complete that evaluation. Program management standards state that successful execution of any program includes developing plans that include a timeline for program deliverables.
Standards for Internal Control in the Federal Government and the Office of Management and Budget also call for agencies to have performance measures and indicators that are linked to mission, goals, and objectives to allow for comparisons to be made among different sets of data so that corrective actions can be taken if necessary. Further, according to USCIS, a program-specific evaluation of the grant program is a good way to objectively determine whether the current program framework is achieving stated program goals, whether grantees meet desired performance outcomes, how various program implementation characteristics might correlate to other indicators of program success, and whether the grant program should continue. To the extent that USCIS receives funding in fiscal years 2012 and 2013 for the grant program, initiating the planned internal and external evaluations could provide USCIS with a mechanism for better evaluating its grant program. By setting interim milestones for these evaluations, USCIS could strengthen its planning efforts to develop and implement the evaluations.

To date, no single federal entity has been designated to lead the creation, implementation, and coordination of a national immigrant integration capability. Immigrant integration efforts are dispersed across federal, state, and local governments, as well as nongovernmental organizations. In the absence of federal coordination, officials in city governments and representatives from nongovernmental organizations told us that they faced challenges in carrying out their immigrant integration efforts. For example, one representative of a community-based nongovernmental organization said that immigrant integration efforts vary in different regions of the country, and that it would be helpful if the federal government had better guidelines on what constitutes immigrant integration and what is expected of organizations providing immigrant integration services. Another nongovernmental organization representative noted that a lack of coordination in immigrant integration has resulted in a number of nonprofit organizations competing for funds, such as for language classes serving noncitizens with different levels of English proficiency. Additionally, government officials in three cities noted that in the absence of federal guidance for immigrant integration, state and local governments have been setting immigration policies independently, some of which set a negative tone toward immigrants, making it difficult to successfully integrate immigrants. Officials in one of the three cities added that this may adversely affect the attitudes of immigrant populations toward government, even when the immigrants do not reside in those places.

Our previous work has highlighted the benefits of actions that selected federal agencies have taken to enhance and sustain collaborative efforts, including the ability to leverage resources, improve quality, and expand services. All representatives we interviewed from 15 governmental and nongovernmental offices indicated a need for a national immigrant integration strategy, federal coordination of immigrant integration efforts, or both. For example, some representatives said that the federal government could help stakeholders forge nationwide partnerships and learn about best practices, and that a national strategy would help develop a more consistent approach to immigrant integration.
Also, organizations such as the Migration Policy Institute, the National League of Cities, and the Massachusetts Immigrant and Refugee Advocacy Coalition have called for a federal immigrant integration strategy. There has been recognition that improved coordination of immigrant integration efforts would be beneficial, and there have been calls at the federal level to develop a national immigrant integration capability. For example, in 2006, a presidential executive order established a task force, chaired by the Secretary of Homeland Security and comprising representatives from 11 federal departments, including DHS, to provide direction to the federal government and make recommendations to the President on immigrant integration. The task force was also to provide direction to executive departments and agencies on integration, particularly through instruction in English, civics, and history. The task force's 2008 report called for a national integration effort and stated that federal institutionalization of immigrant integration would lend credibility and support to federal, state, and local governments and other sectors of society. DHS officials said that DHS facilitated the task force's activities and led the effort to produce a final report, but no agency was designated as the leader for a national immigrant integration effort. The task force, while still technically active, has not met since the issuance of the report in December 2008, according to DHS officials.

Additionally, DHS's 2010 Quadrennial Homeland Security Review Report states that one of DHS's goals is to strengthen and effectively administer the immigration system by promoting the integration of lawful immigrants. This is to be carried out by providing leadership, support, and opportunities to immigrants to facilitate their integration into American society. The Quadrennial Homeland Security Review Report notes that immigrant integration requires leadership, but it does not delineate a framework for accomplishing this. Instead, the report notes that homeland security-related functions are dispersed and decentralized and that DHS is just one of several components involved in carrying out the Quadrennial Homeland Security Review's strategic framework.

OoC officials told us that coordinating immigrant integration activities nationwide could help immigrants navigate federal programs, contribute to the development of a federal strategy and policy guidance, help set measurable goals for immigrant integration, create opportunities for the federal government to liaise with state governments and nongovernmental organizations, and facilitate sharing best practices and leveraging public-private partnerships. We previously reported that achieving meaningful national results in many policy and program areas requires a combination of coordinated efforts among various actors across federal agencies and among state, local, and nongovernmental organizations. Such coordination requires leadership commitment, agreed-upon goals and strategies, clearly identified roles and responsibilities, and compatible policies and procedures to be effective. USCIS officials stated that they believe DHS is uniquely situated to coordinate a multiagency effort given its competencies in areas such as immigration services, immigrant integration resources, enforcement, and community security, among other things. For example, according to USCIS officials, the agency has access to all foreign-born individuals going through the U.S. immigration process.
USCIS has also established relationships with a number of nongovernmental organizations involved in immigrant integration through OoC's outreach efforts and its grant program. Further, USCIS has engaged in dialogue and established partnerships at the local level. For example, USCIS and the City of Los Angeles signed a 2010 letter of agreement to promote citizenship awareness, education, and outreach events throughout the city. USCIS officials acknowledged that USCIS's resources and authority for undertaking such an effort are limited. The officials also said that since the release of the presidential task force's 2008 report, DHS's role in immigrant integration has been limited to those aspects of civic integration, such as citizenship and promoting the rights and responsibilities of citizenship, discussed earlier in this report.

The White House Domestic Policy Council convened an interagency working group in June 2010 to assess the roles and activities of the federal government in promoting immigrant integration and to better coordinate integration efforts across agencies; we were unable to meet with officials from the Domestic Policy Council for the purposes of this report. According to DHS officials, USCIS is an active participant but not the lead agency in the council's meetings. Based on OoC's 2010 report to the Office of Management and Budget, the working group developed recommendations in support of a federal integration strategy and consolidation of informational resources, programs, and research through interagency collaboration. OoC's report to the Office of Management and Budget also stated that in late 2010, the Domestic Policy Council disbanded the working group and convened the New Americans – Citizenship and Integration Initiative, consisting of several members of the working group and led by the Interagency Steering Committee. The steering committee distilled the recommendations into three key immigrant integration policy areas: civic integration (naturalization and civic participation), economic integration (employment and economic advancement opportunities), and linguistic integration (learning English to facilitate daily life and support economic and social advancement). The steering committee also developed a 2011 action plan to guide the development of strategic initiatives in the three areas. However, according to DHS officials, a timeline for implementation of the recommendations has not been finalized, and any associated budget or planning process has not yet started. Because the work of this group has not yet been completed, it is too early to know if, and to what extent, it will provide leadership for a national immigrant integration capability.

Integrating immigrants into American society has economic, social, and security implications. We found numerous examples of how USCIS's integration-related programs are helping immigrant populations. In addition, USCIS has taken action to develop and use mechanisms for collecting information on the outputs and outcomes of its integration-related programs, particularly its Citizenship and Integration Grant Program, which is OoC's largest single budget activity. USCIS has faced inherent limitations in collecting complete data on grantees' performance and has stated its intention to conduct internal and external evaluations of the grant program, contingent on the program receiving future appropriations.
Establishing interim milestones for such evaluations, including milestones for initiating and completing the evaluations, could help USCIS strengthen its planning efforts for the program. To strengthen USCIS's plans for evaluating the Citizenship and Integration Grant Program, we recommend that, to the extent that USCIS receives program funding in fiscal years 2012 and 2013, the Director of USCIS establish interim milestones for conducting the planned internal and external evaluations of the grant program.

We provided a draft of this report to DHS for review and comment on November 23, 2011. On December 9, 2011, DHS provided written comments, which are reprinted in appendix III. In commenting on the draft report, DHS concurred with our recommendation that USCIS establish interim milestones for conducting its planned internal and external evaluations of the grant program and identified actions planned or under way to implement the recommendation. DHS stated that to the extent that USCIS receives appropriated program funding and is allowed to use the funding for evaluation purposes, it would establish interim milestones for conducting an internal evaluation of the grant program in fiscal year 2012 and an external evaluation of the grant program in fiscal year 2013. DHS also provided additional information on the steps that USCIS's OoC and Office of Policy and Strategy will take to jointly determine the scope of the evaluations. We believe that DHS's proposed actions are consistent with the intent of the recommendation and should help strengthen USCIS's planning efforts for the grant program. DHS also provided written technical comments, which we considered and incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Department of Homeland Security, relevant congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

This report examines (1) the steps the U.S. Citizenship and Immigration Services (USCIS) has taken to implement immigrant integration programs and the extent to which it has assessed the effectiveness of its grant program and (2) the federal mechanism to coordinate governmental and nongovernmental immigrant integration efforts. To determine the steps USCIS has taken to implement immigrant integration programs and the extent to which it has assessed the results of its grant program, we focused on program activities within USCIS because immigrant integration ties directly into its mission. We reviewed the Homeland Security Act of 2002, which created USCIS's Office of Citizenship (OoC) to promote instruction and training on citizenship responsibilities.
We also examined documentation on mission objectives and performance measures identified by the Department of Homeland Security (DHS) and USCIS on immigrant integration, including DHS's strategic plan for fiscal years 2008 through 2013, 2010 Quadrennial Homeland Security Review Report, fiscal years 2011 and 2012 Budget in Brief reports, and annual performance report for fiscal years 2010 through 2012; USCIS's strategic plan for fiscal years 2008 through 2012 and fiscal year 2011 strategic goals and select initiatives; and USCIS's Citizenship and Integration Grant Program requirements and guidance. We interviewed senior officials from DHS's Office of Policy Development and from USCIS offices, including OoC, the Office of Public Engagement, the Office of Policy and Strategy, and the Field Operations Directorate, to discuss their roles and responsibilities, how the department defines and designates immigrant integration activities, and how it identifies and addresses immigrant integration needs. We reviewed USCIS documentation and educational materials on U.S. citizenship and the naturalization process and strategies for outreach activities in support of immigrant integration, including the Civics and Citizenship Toolkit and the WelcometoUSA.gov website.

To identify and observe USCIS's immigrant integration activities and community-level efforts by cities and immigrant-serving organizations, we selected a nonprobability sample of 10 locations for site visits and telephone interviews. We visited Baltimore, Maryland; Chicago, Illinois; Washington, D.C.; and Los Angeles, California. We conducted semistructured telephone interviews with officials in New York, New York; Boston, Massachusetts; Richmond, Virginia; Seattle, Washington; Houston, Texas; and Miami, Florida. Using estimates from the U.S. Census Bureau's 2010 American Community Survey and data from DHS's Office of Immigration Statistics, we selected these 10 locations based on various factors, including the total number of foreign-born residents, whether the region had a mixture of nationalities, the number of residents with limited English proficiency, the number of lawful permanent residents eligible for naturalization, and recognition accorded the locations' immigrant integration programs. Additionally, we took into account the proximity of USCIS offices and nongovernmental immigrant-serving organizations, as well as organizations awarded Citizenship and Integration Grant Program funds in fiscal years 2009 or 2010.

During our site visits, we interviewed officials in USCIS's Baltimore, Chicago, Los Angeles, and Washington, D.C., district offices to identify the role played by these offices in implementing USCIS local immigrant integration activities. In these four cities, and via telephone in the other six cities, we interviewed representatives of 10 community-based organizations, including grantees and subgrantees, and officials in government immigrant integration offices about their efforts to foster immigrant integration and how local efforts benefit from federal immigrant integration initiatives. Although our site visit and telephone interview results cannot be generalized to other locations with foreign-born populations, they provided us with valuable insights about actions USCIS has taken to support immigrants' integration, how USCIS grant recipients are using award funds to support integration, and actions taken by city governments to address integration needs.
To identify the steps DHS has taken to assess the results of its immigrant integration efforts, we reviewed DHS documents with stated immigrant integration directives and performance targets, and we interviewed officials and staff at DHS's Office of Policy Development and USCIS offices, including OoC, the Office of Public Engagement, the Office of Policy and Strategy, and the Field Operations Directorate, to obtain clarification on DHS's efforts to delegate responsibilities and capture the results of intra-agency actions to promote immigrant integration. We reviewed immigrant integration accomplishments detailed and summarized in DHS's Budget in Brief reports for fiscal years 2011 and 2012. We reviewed OoC's tracking information for its citizenship promotion and immigrant integration activities. We interviewed officials in USCIS district offices in the four cities we visited to obtain information about their citizenship promotion and immigrant integration outreach activities and the methods for capturing activity results.

To determine how OoC assesses its immigrant integration efforts through its Citizenship and Integration Grant Program, we reviewed grantee performance results during the first funding round in fiscal year 2009, which had a 1-year performance period. We reviewed grantees' reports to OoC on the number of participants registered for direct services citizenship and English instruction courses, naturalization applications filed, and participants naturalized, and we also examined program results reported by grantees and results obtained via OoC staff's on-site grantee performance monitoring during this performance period. We interviewed OoC staff on their performance monitoring methods and how they supported grantees through guidance and technical assistance. We corroborated this information in our interviews with select grantees in the four selected cities. Additionally, we observed some of the citizenship instruction programs, obtained grantee performance reports submitted to OoC, and obtained documentation on the results of OoC's on-site grantee monitoring efforts. Lastly, we interviewed some individual program participants to learn about how the program had affected their progress toward the goal of becoming naturalized, their perceptions on citizenship, and their views on integrating into their communities. We compared USCIS's information on the results of its grant program against Standards for Internal Control in the Federal Government, which states, among other things, that managers should assess the quality of performance over time and determine proper actions in response to findings. We reviewed all of OoC's programs and focused on the grant program because it was OoC's single largest budget activity and the program that collected some data on outcomes, that is, the extent to which lawful permanent residents who were served by OoC passed the naturalization examination and naturalized.

To determine what federal mechanism exists to coordinate governmental and nongovernmental immigrant integration efforts, we reviewed the 2008 report from the presidentially commissioned Task Force on New Americans to identify previous recommendations on providing leadership in immigrant integration and the extent to which the recommendations have been implemented.
We reviewed DHS reports, such as OoC's 2010 report, Improving Federal Coordination on Immigrant Integration, and its 2004 report, Helping Immigrants Become New Americans: Communities Discuss the Issues, to identify DHS's findings on the extent to which federal leadership is needed, and in what areas. We reviewed DHS's Quadrennial Homeland Security Review Report and the annual reports and strategic plans of DHS, USCIS, and OoC to identify DHS's goals for providing federal leadership. We also interviewed key officials from DHS's Office of Policy Development and USCIS offices, including OoC, the Office of Public Engagement, the Office of Policy and Strategy, and the Field Operations Directorate, to identify existing federal leadership and coordination activities for immigrant integration throughout the department. We interviewed officials in the four USCIS field offices mentioned above to identify the federal leadership role played by USCIS field offices at the local level. We met with city officials and community-based organization representatives in each of the cities where the four USCIS field offices are located, and we conducted semistructured telephone interviews with city government officials in the six additional cities mentioned above, to obtain their perspectives on the role of the federal government in supporting immigrant integration. The information from these interviews is not generalizable to all cities or nongovernmental organizations but provided valuable insights into how such organizations view the federal government's support of immigrant integration. We reviewed laws and proposed legislation to identify existing and proposed policies for federal leadership in immigrant integration. We also consulted with outside research organizations, including the Migration Policy Institute, the National League of Cities, and the National Conference of State Legislatures, and reviewed their reports to obtain their perspectives on immigrant integration issues faced by state and local governments and nongovernmental organizations and the extent to which the federal government has provided leadership in this area.

We conducted this performance audit from September 2010 through November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Federal agencies have at times worked together on aspects of immigrant integration, such as civics and English language education for immigrants. For example, in 2011, USCIS and the Smithsonian Institution National Museum of American History amended a 2010 memorandum of understanding to develop a learning tool that supports aspiring citizens' efforts to prepare for the civics portion of the naturalization test. Specifically, the National Museum of American History will provide instructional design and content expertise for the development of lesson plans and multimedia presentations that utilize content from the museum and other Smithsonian collections and will incorporate the civics questions that may be asked during the naturalization test.
Separately, in 2010, USCIS provided funding to the National Institute for Literacy for the expansion of the America's Literacy Directory, a web-based directory of literacy programs, to incorporate citizenship preparation programs and classes. Also, as of July 2009, USCIS and the Department of Education had in place an interagency agreement to support a web-based tool for lessons in civics- and citizenship-oriented English language learning, according to Homeland Security and Education officials.

State and local governments have also taken action to develop policies and plans to foster immigrant integration. For example, governors in Illinois, Maryland, Massachusetts, New York, and Washington have issued executive orders specifically to address immigrant integration issues. These executive orders called for establishing an advisory body (e.g., a committee or a council) made up of key stakeholders, such as state and local government officials and representatives of nongovernmental organizations, to make policy recommendations on immigrant integration. The advisory bodies have made recommendations to, among other things, incorporate immigrant integration into state education, workforce, and financial service programs; promote English language proficiency and civics education; and centralize information available to immigrants upon their arrival in the community. A common objective of these executive orders was to develop policies, plans, or recommendations to provide immigrants with the tools to become self-sufficient and contribute to their communities.

Local governments have also responded to the needs of immigrants in their communities in ways that address immigrant integration issues. In some cases, local officials serve as liaisons between city offices on activities that foster immigrant integration. For example, the liaisons may encourage offices to provide information in multiple languages on workforce training, library services, or other available services or to incorporate activities geared toward immigrants into local social service programs. Other local governments provide information clearinghouses for immigrant communities, referring individuals to local services or community-based organizations that can assist with various aspects of immigrant integration, including English and citizenship classes, legal services, and vocational training.

Across the country, governmental and nongovernmental organizations—including community-based groups, social service organizations, ethnic associations, local public school systems, universities and community colleges, refugee resettlement agencies, health centers and hospitals, religious institutions, unions, and law firms—have joined together to form coalitions to advocate for and serve as resources to immigrants and promote their integration into American society. One example is the Massachusetts Immigrant and Refugee Advocacy Coalition, which consists of more than 130 organizations and seeks to promote the rights and integration of immigrants and refugees through policy analysis and advocacy, institutional organizing, training and leadership development, and strategic communications. In Illinois, more than 120 organizations—including advocacy groups, religious institutions, and neighborhood associations—make up the Illinois Coalition for Immigrant and Refugee Rights, which seeks to promote full and equal participation by immigrants in civic, cultural, social, and political life.
Both the Massachusetts and the Illinois coalitions are members of the National Partnership for New Americans, a nationwide alliance of 12 immigrant rights coalitions seeking to support citizenship and the integration of immigrants into American communities through outreach to immigrant groups, assistance with capacity building for small organizations in remote areas, and policy advocacy, among other things. The National Partnership for New Americans hosts an annual national immigrant integration conference, with past participation from representatives of the White House Domestic Policy Council, USCIS, and other governmental and nongovernmental organizations. In addition, the National League of Cities’ Municipal Action for Immigrant Integration program seeks to promote civic engagement and naturalization in cities and towns across the United States by providing resources and technical assistance and serving as an information clearinghouse for best practices. The Migration Policy Institute’s National Center on Immigrant Integration Policy, in partnership with the J.M. Kaplan Fund, is also providing annual monetary awards over a three-year period to outstanding immigrant integration initiatives led by nonprofit or community organizations, businesses, public agencies, religious groups, or individuals. In addition to these broader initiatives, individual organizations in communities across the nation are involved in efforts to support immigrant integration, for example, through English language instruction, workforce development, and legal services. In addition to the contact named above, Evi Rezmovic, Assistant Director; Yvette Gutierrez-Thomas; Danielle Pakdaman; and Mya Dinh made significant contributions to this report. David Alexander assisted with design and methodology. Linda Miller provided assistance in report preparation, and Frances Cook provided legal support.
In 2009, about 39 million foreign-born people lived in the United States. Immigrant integration is generally described as a process that helps immigrants achieve self-sufficiency, political and civic involvement, and social inclusion. The Department of Homeland Security’s (DHS) U.S. Citizenship and Immigration Services (USCIS) is responsible for a key activity that fosters political and civic involvement—the naturalization and citizenship process. USCIS’s Office of Citizenship (OoC) supports this process mainly through grants to immigrant-serving entities, but also with outreach activities and education materials. Other governmental and nongovernmental entities play a role in immigrant integration as well. GAO was asked to determine (1) the steps USCIS has taken to implement its integration programs and the extent to which it has assessed its grant program in particular, and (2) what federal mechanism exists to coordinate integration efforts. Among other things, GAO examined documentation on mission objectives and performance measures on immigrant integration and conducted interviews with officials in a nongeneralizable sample of cities and community-based organizations as well as senior USCIS officials about their immigrant integration efforts. USCIS has implemented immigrant integration efforts through outreach activities, educational materials, and a grant program, and established various measures for assessing its grant program, but has not yet set interim milestones for planned evaluations of the program. From 2008 to 2011, OoC reported conducting more than 300 significant outreach events to promote citizenship awareness and civic integration. Further, nearly half of OoC’s funding over the past 3 fiscal years—about $19.8 million—was spent on grants aimed at preparing immigrants for the naturalization process. The grants were made to a wide range of governmental and nongovernmental organizations, including public school systems and community and faith-based organizations. OoC has established various measures for assessing grantees’ performance under its grant program. These measures include, for example, the number of participants enrolled in grantees’ citizenship instruction and naturalization preparation programs, the number of participants who passed their naturalization examinations, and the proportion of participants who received grantees’ services and self-reported that they naturalized during the year of the grant program. However, USCIS has identified inherent limitations with these measures, such as incomplete data: outcomes were self-reported by program participants, and not all participants reported to grantees whether they passed the naturalization examination and naturalized. In January 2011, USCIS drafted a statement of work for a contractor to develop an evaluation plan that would allow USCIS to measure the grant program’s performance and long-term impact, and this may help address these limitations. According to USCIS, it did not complete this statement of work or award a contract for an evaluation plan because, at that time, the agency was uncertain whether it would receive appropriations in fiscal year 2011 to continue the grant program, and the program has no authorizing statute. The final fiscal year 2011 law, enacted in April 2011, did allow the use of appropriations to fund the grant program, but USCIS did not proceed with developing an evaluation plan.
In November 2011, USCIS reported that it plans to conduct an internal and external evaluation of the program in fiscal years 2012 and 2013, respectively, contingent on appropriations for the grant program. However, USCIS has not yet set interim milestones for these evaluations. Setting such milestones, contingent on the receipt of funding, could help USCIS strengthen its planning for conducting those evaluations, consistent with program management standards. GAO recommends that USCIS set interim milestones for an internal and external evaluation of its immigrant integration grant program, to the extent that it receives fiscal years 2012 and 2013 appropriations for the program. DHS concurred with GAO’s recommendation.
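To make the measurement limitation described above concrete, the following minimal sketch (with entirely hypothetical data and field names, not drawn from any USCIS system) shows how self-reported outcomes leave a gap in a computed pass rate.

```python
# Hypothetical sketch only: grantee performance measures of the kind described
# above, computed from participant records where outcomes are self-reported.
participants = [
    {"enrolled": True, "exam_result": "pass"},
    {"enrolled": True, "exam_result": "fail"},
    {"enrolled": True, "exam_result": None},  # participant never reported an outcome
]

enrolled = sum(1 for p in participants if p["enrolled"])
reported = [p for p in participants if p["exam_result"] is not None]
passed = sum(1 for p in reported if p["exam_result"] == "pass")

print(f"Enrolled: {enrolled}")
print(f"Pass rate among those who reported: {passed / len(reported):.0%}")
print(f"Outcome unknown for {enrolled - len(reported)} of {enrolled} participants")
```

Because the denominator excludes nonreporters, a rate measured this way can overstate or understate true performance, which is the kind of limitation an evaluation plan would need to address.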
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to discuss the subject of internal control. Its importance cannot be overstated, especially in the large, complex operating environment of the federal government. Internal control is the first line of defense against fraud, waste, and abuse and helps to ensure that an entity’s mission is achieved in the most effective and efficient manner. Although the subject of internal control usually surfaces for discussion after improprieties or inefficiencies are found, good managers are always aware of and seek ways to help improve operations through effective internal control. As you requested, my testimony today will discuss the following questions: (1) What is internal control? (2) Why is it important? and (3) What happens when it breaks down? Internal control has been defined as “the plan of organization and methods and procedures adopted by management to ensure that resource use is consistent with laws, regulations, and policies; that resources are safeguarded against waste, loss, and misuse; and that reliable data are obtained, maintained, and fairly disclosed in reports.” Internal control should not be looked upon as separate, specialized systems within an agency. Rather, internal control should be recognized as an integral part of each system that management uses to regulate and guide its operations. Internal control is synonymous with management control in that the broad objectives of internal control cover all aspects of agency operations. Although ultimate responsibility for good internal control rests with management, all employees have a role in the effective operation of internal control that has been set by management. All internal controls have objectives and techniques: the objective is the goal to be achieved, and the techniques are the mechanisms (policies, procedures, and physical safeguards, to name a few) that achieve the goal. In practice, internal control starts with defining entitywide objectives and then more specific objectives throughout the various levels in the entity. Techniques are then implemented to achieve the objectives. In its simplest form, internal control is practiced by citizens in the daily routine of everyday life. For example, when you leave your home and lock the door or when you lock your car at the mall or on a street, you are practicing a form of internal control. The objective is to protect your assets against undesired access, and your technique is to physically secure your assets by locks. In another routine, when you write a check, you record the check in the ledger or on your personal computer. The objective is to control the money in your checking account by knowing the balance. The technique is to document the check amount and the balance. Periodically, you compare the checking account transactions and balances you have recorded with the bank statement. Your objective is to ensure the accuracy of your records to avoid costly mistakes. Your technique is to perform the reconciliation. These same types of concepts form the basis for internal control in business operations and the operation of government. The nature of their operations is, of course, significantly larger and more complex, as is the inherent risk of ensuring that assets are safeguarded, laws and regulations are complied with, and data used for decision-making and reporting are reliable. Focusing a discussion on objectives and techniques, the acquisition, receipt, use, and disposal of property, such as computer equipment, can illustrate the practice of internal control in the operation of government activities.
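The reconciliation routine described above translates directly into a simple comparison procedure. The sketch below is purely illustrative (all check numbers and amounts are hypothetical) and shows the objective-and-technique pairing: the objective is accurate records, and the technique is a systematic comparison.

```python
# Illustrative sketch of the checkbook reconciliation described above.
# All data are hypothetical; the technique is a record-by-record comparison.
ledger = {101: 45.00, 102: 120.50, 103: 33.25}     # check number -> amount you recorded
statement = {101: 45.00, 102: 120.50, 103: 35.25}  # check number -> amount the bank cleared

def reconcile(ledger, statement):
    """Report checks whose recorded amount disagrees with the bank statement."""
    discrepancies = []
    for check, amount in ledger.items():
        cleared = statement.get(check)
        if cleared is None:
            discrepancies.append((check, amount, "not yet cleared"))
        elif cleared != amount:
            discrepancies.append((check, amount, f"bank shows {cleared:.2f}"))
    return discrepancies

for check, amount, issue in reconcile(ledger, statement):
    print(f"Check {check}: recorded {amount:.2f} -> {issue}")
# Check 103: recorded 33.25 -> bank shows 35.25
```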
Internal control at the activity level, such as procuring equipment, should be preceded, at a higher organizational level, by policy and planning control objectives and control techniques that govern overall agency operations in achieving mission objectives. Examples of high-level control objectives that logically follow a pattern include the following: The mission of the agency should be set in accordance with laws, regulations, and administration and management policy. Agency components should be defined in accordance with the overall mission of the agency. Missions of the agency and components should be documented and communicated to agency personnel. Plans and budgets should be developed in accordance with the missions of the agency and its components. Policies and procedures should be defined and communicated to achieve the objectives defined in plans and budgets. Authorizations should be in accordance with policies and procedures. Systems of monitoring and reporting the results of agency activities should be defined. Transactions should be classified or coded to permit the preparation of reports to meet management’s needs and other reporting requirements. Access to assets should be permitted only in accordance with laws, regulations, and management’s policy. Examples of control techniques to help achieve the objectives include the following: agency and component mission statements approved by management and its legal counsel; training of personnel in mission and objectives; long- and short-range plans developed and related to budgets; monitoring of results against plans and budgets; policies and procedures defined and communicated to all levels of the organization and periodically reviewed and revised based on internal reviews; authorizations defined, controls set to ensure authorizations are made, and classifications of accounts set to permit the capture and reporting of data to prepare required reports; and physical restrictions on access to assets and records, and training in security provided to employees. The policy and planning control objectives and techniques provide a framework to conduct agency operations and to account for resources and results. Without that framework, administration and legislative goals may not be achieved; laws and regulations may be violated; operations may not be effective and efficient and may be misdirected; unauthorized activities may occur; inaccurate reports to management and others may occur; fraud, waste, and abuse are more likely to occur and be concealed; assets may be stolen or lost; and ultimately the agency is in danger of not achieving its mission. Within this framework, controls over specific activities help ensure that agency operations achieve intended results. The procurement and management of computer equipment is an example of such a specific activity. Objectives and techniques should be established for each activity’s specific control. As examples of control objectives, vendors should be approved in accordance with laws, regulations, and management’s policy, as should the types, quantities, and approved purchase prices of computer equipment. As examples of related control techniques, criteria for approving vendors should be established, approved vendor master files should be controlled, and purchases should be governed by criteria such as obtaining competitive bids and setting specifications for the equipment to be procured. Likewise, control objectives should be set for the receiving process.
For example, only equipment that meets contract or purchase order terms should be accepted, and equipment accepted should be accurately and promptly reported. Related control techniques include (1) detailed comparison of equipment received to a copy of the purchase order, (2) prenumbered controlled receiving documents that are accounted for, and (3) maintenance of receiving logs; a simple sketch of such checks appears below. Throughout the purchasing and receiving of equipment there needs to be appropriate separation of duties and interface with the accounting function to achieve funds control, timely payments, and inventorying and control of equipment received. Equipment received should be safeguarded to prevent unauthorized access and use. For example, in addition to physical security, equipment should be tagged with identification numbers and placed into inventory records. Equipment placed into service should only be issued to authorized users, and records of the issuances should be maintained to achieve accountability. Further, physical inventories should be taken periodically and compared with inventory records. Differences in counts and records should be resolved in a timely manner and appropriate corrective actions taken. Also, equipment should be retired from use in accordance with management’s policies, including appropriate safeguards to prevent unauthorized disclosure of information that may be stored in the equipment. No system of internal control is foolproof, however. Controls can break down because of simple errors or mistakes arising from misunderstanding or carelessness. Also, procedures whose effectiveness depends on segregation of duties can be circumvented by collusion. Similarly, management authorizations may be ineffective against errors or fraud perpetrated by management. In addition, the standard of reasonable assurance recognizes that the cost of internal control should not exceed the benefit derived. Reasonable assurance equates to a satisfactory level of confidence under given considerations of costs, benefits, and risks. The cost of fraud, waste, and abuse cannot always be measured in dollars and cents. Such improper activities erode public confidence in the government’s ability to efficiently and effectively manage its programs. Management at a number of federal government agencies is faced with tight budgets and fewer personnel. In such an environment, related operating factors, such as executive and middle management turnover and the diversity and complexity of government operations, can provide a fertile environment for internal control weakness and the resulting undesired consequences. It has been almost 50 years since the Congress formally recognized the importance of internal control. The Accounting and Auditing Act of 1950 required, among other things, that agency heads establish and maintain effective internal controls over all funds, property, and other assets for which an agency is responsible. However, the ensuing years up through the 1970s saw the government experience a crisis of poor controls. To help restore confidence in government and to improve operations, the Congress passed the Federal Managers’ Financial Integrity Act of 1982.
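Before turning to the Integrity Act’s requirements, here is the sketch promised above: a purely hypothetical illustration (purchase order numbers, items, and quantities are invented) of two receiving techniques, comparing a receipt against purchase order terms and logging it under a sequentially numbered receiving document.

```python
# Hypothetical sketch of two receiving controls described earlier: comparison of
# equipment received against purchase order terms, and prenumbered receiving
# documents that are accounted for. All identifiers and data are invented.
purchase_orders = {"PO-1001": {"item": "laptop", "quantity": 10, "received": 0}}
receiving_log = []  # each entry carries the next sequential document number

def accept_receipt(po_number, item, quantity):
    """Accept equipment only if it matches purchase order terms; log it under
    a prenumbered receiving document and return that document number."""
    order = purchase_orders.get(po_number)
    if order is None or order["item"] != item:
        return None  # reject: no matching purchase order terms
    if order["received"] + quantity > order["quantity"]:
        return None  # reject: over-shipment against the order
    doc_number = len(receiving_log) + 1  # prenumbered and accounted for
    receiving_log.append((doc_number, po_number, item, quantity))
    order["received"] += quantity
    return doc_number

print(accept_receipt("PO-1001", "laptop", 6))  # 1 (accepted, document no. 1)
print(accept_receipt("PO-1001", "laptop", 6))  # None (would exceed the order)
```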
The Integrity Act required, among other items, that (1) we establish internal control standards that agencies are required to adhere to, (2) the Office of Management and Budget (OMB) issue guidelines for agencies to follow in annually assessing their internal controls, (3) agencies annually evaluate their internal controls and prepare a statement to the President and the Congress on whether their internal controls comply with the standards issued by GAO, and (4) agency reports include material internal control weaknesses identified and plans for correcting the weaknesses. OMB has issued agency guidance that sets forth the requirements for establishing, periodically assessing, correcting, and reporting on controls required by the Integrity Act. Regarding the identification and reporting of deficiencies, OMB’s guidance states that “a deficiency should be reported if it is or should be of interest to the next level of management. Agency employees and managers generally report deficiencies to the next supervisory level, which allows the chain of command structure to determine the relative importance of each deficiency.” The guidance further states that “a deficiency that the agency head determines to be significant enough to be reported outside the agency (i.e., included in the annual Integrity Act report to the President and the Congress) shall be considered a ‘material weakness.’” The guidance encourages reporting of deficiencies by recognizing that such reporting reflects positively on the agency’s commitment to recognizing and addressing management problems and, conversely, failing to report a known deficiency reflects adversely on the agency. The internal control standards we issued under the Integrity Act include, among other things, separation of duties between authorizing, processing, recording, and reviewing transactions; qualified and continuous supervision to ensure that control objectives are achieved; and limiting access to resources and records to authorized persons to provide accountability for the custody and use of resources. Finally, the audit resolution standard requires managers to promptly evaluate findings, determine proper resolution, and establish corrective action or otherwise resolve audit findings. Attachment I provides a complete definition of the standards, and Standards for Internal Controls in the Federal Government provides additional explanation of the standards. In addition, under the Federal Financial Management Improvement Act of 1996, auditors performing financial statement audits required by the expanded Chief Financial Officers Act report whether each agency is maintaining financial management systems that comply substantially with federal financial management systems requirements, federal accounting standards, and the government’s standard general ledger at the transaction level. Our report, The Statutory Framework for Performance-Based Management and Accountability (GAO/AIMD-98-52, January 28, 1998) provides more detailed information on the purpose, requirements, and implementation status of these acts. In addition, that report refers to a number of other critically important statutes that address debt collection, credit reform, prompt pay, inspectors general, and information resources management. Although these acts address specific problem areas, sound internal controls are an essential factor in the success of these statutes. For example, the Results Act focuses on results through strategic and annual planning and performance reporting. Sound internal control is critical to effectively and efficiently achieving management’s plans and for obtaining accurate data to support performance measures. Weak internal controls pose a significant risk to government agencies.
History has shown that serious neglect will result in losses to the government that can total millions, and even billions, of dollars over time. As previously mentioned, the loss of confidence in government that results can be equally serious. Although examples of poor internal controls could be drawn from many federal programs, three key areas illustrate the extent of the problems—health care, banking, and property. The Department of Health and Human Services Inspector General reported this past year that out of $163.6 billion in processed fee-for-service payments reported by the Health Care Financing Administration (HCFA) during fiscal year 1996—the latest year for which reliable numbers were available—an estimated $23.2 billion, or about 14 percent of the total payments, were improper. Consequently, the Inspector General recommended that HCFA implement internal controls designed to detect and prevent improper payments to correct four weaknesses: (1) insufficient or no documentation supporting claims, (2) medical necessity not established, (3) incorrect classification (called coding) of information, and (4) unsubstantiated or unallowable services paid. During the 1980s, the savings and loan industry experienced severe financial losses. Extremely high interest rates caused institutions to pay high costs for deposits and other funds while earning low yields on their long-term portfolios. Many institutions took inappropriate or risky approaches in attempting to increase their capital. These approaches included accounting methods to artificially inflate the institutions’ capital position and diversifying their investments into potentially more profitable, but riskier, activities. The profitability of many of these investments depended heavily on continued inflation in real estate values to make them economically viable. In many cases, weak internal controls at these institutions and noncompliance with laws and regulations increased the risk of these activities and contributed significantly to the ultimate failure of over 700 institutions. This crisis cost the taxpayers hundreds of billions of dollars. Making profitable loans is the heart of a successful savings and loan institution. Boards of directors and senior management did not actively monitor the loan award and administrative processes to ensure excessive risks in making loans were not taken. In fact, excessive risk-taking in making loans was encouraged, resulting in a lack of effective monitoring of loan performance that allowed poorly performing loans to continue to deteriorate. Also, loan documentation was a frequent problem that further evidenced weak internal supervision of loan officers and created difficulties in valuing and selling loans after the institutions failed. In the property area, we and others have reported that government property was not made available for reuse or effectively controlled against misuse or theft. More recently, we reported that breakdowns exist in the Department of Defense’s (DOD) ability to protect its assets from fraud, waste, and abuse. We disclosed that the Army did not have accurate records for its reported $30 billion in real property or the $8.5 billion reported as government furnished property in the hands of contractors. Further, we reported that pervasive weaknesses in DOD’s general computer controls place it at risk of improper modification; theft; inappropriate disclosure; and destruction of sensitive personnel, payroll, disbursement, or inventory information.
In 1990, we began a special effort to review and report on the federal program areas our work had identified as high risk because of vulnerabilities to waste, fraud, abuse, and mismanagement. This effort brought a much-needed central focus on problems that were costing the government billions of dollars. Our most recent high-risk series focuses on six categories of high risk: (1) providing for accountability and cost-effective management of defense programs, (2) ensuring that all revenues are collected and accounted for, (3) obtaining an adequate return on multibillion dollar investments in information technology, (4) controlling fraud, waste, and abuse in benefit programs, (5) minimizing loan program losses, and (6) improving management of federal contracts at civilian agencies. See attachment II for a listing of the high-risk reports and our most recent reports and testimony on the Year 2000 computing crisis. In conclusion, effective internal controls are essential to achieving agency missions and the results intended by the Congress and the administration and as reasonably expected by the taxpayers. The lack of consistently effective internal controls across government has plagued the government for decades. Legislation has been enacted to provide a framework for performance-based management and accountability. Effective internal controls are an essential component of the success of that legislation. However, no system of internal control is perfect, and the controls may need to be revised as agency missions and service delivery change to meet new expectations. Management and employees should focus not necessarily on more controls, but on more effective controls. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. Internal control standards define the minimum level of quality acceptable for internal control systems to operate and constitute the criteria against which systems are to be evaluated. These internal control standards apply to all operations and administrative functions but are not intended to limit or interfere with duly granted authority related to the development of legislation, rule making, or other discretionary policy-making in an agency. 1. Reasonable Assurance: Internal control systems are to provide reasonable assurance that the objectives of the systems will be accomplished. 2. Supportive Attitude: Managers and employees are to maintain and demonstrate a positive and supportive attitude toward internal controls at all times. 3. Competent Personnel: Managers and employees are to have personal and professional integrity and are to maintain a level of competence that allows them to accomplish their assigned duties, and understand the importance of developing and implementing good internal controls. 4. Control Objectives: Internal control objectives are to be identified or developed for each agency activity and are to be logical, applicable, and reasonably complete. 5. Control Techniques: Internal control techniques are to be effective and efficient in accomplishing their internal control objectives. 1. Documentation: Internal control systems and all transactions and other significant events are to be clearly documented, and the documentation is to be readily available for examination. 2. Recording of Transactions and Events: Transactions and other significant events are to be promptly recorded and properly classified. 3.
Execution of Transactions and Events: Transactions and other significant events are to be authorized and executed only by persons acting within the scope of their authority. 4. Separation of Duties: Key duties and responsibilities in authorizing, processing, recording, and reviewing transactions should be separated among individuals. 5. Supervision: Qualified and continuous supervision is to be provided to ensure that internal control objectives are achieved. 6. Access to and Accountability for Resources: Access to resources and records is to be limited to authorized individuals, and accountability for the custody and use of resources is to be assigned and maintained. Periodic comparison shall be made of the resources with the recorded accountability to determine whether the two agree. The frequency of the comparison shall be a function of the vulnerability of the asset. Prompt Resolution of Audit Findings: Managers are to (1) promptly evaluate findings and recommendations reported by auditors, (2) determine proper actions in response to audit findings and recommendations, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. High-Risk Series: An Overview (GAO/HR-97-1, February 1997). High-Risk Series: Quick Reference Guide (GAO/HR-97-2, February 1997). High-Risk Series: Defense Financial Management (GAO/HR-97-3, February 1997). High-Risk Series: Defense Contract Management (GAO/HR-97-4, February 1997). High-Risk Series: Defense Inventory Management (GAO/HR-97-5, February 1997). High-Risk Series: Defense Weapons Systems Acquisition (GAO/HR-97-6, February 1997). High-Risk Series: Defense Infrastructure (GAO/HR-97-7, February 1997). High-Risk Series: IRS Management (GAO/HR-97-8, February 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997). High-Risk Series: Medicare (GAO/HR-97-10, February 1997). High-Risk Series: Student Financial Aid (GAO/HR-97-11, February 1997). High-Risk Series: Department of Housing and Urban Development (GAO/HR-97-12, February 1997). High-Risk Series: Department of Energy Contract Management (GAO/HR-97-13, February 1997). High-Risk Series: Superfund Program Management (GAO/HR-97-14, February 1997). High-Risk Program Information on Selected High-Risk Areas (GAO/HR-97-30, May 1997). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10-1.19, Exposure Draft, March 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/T-AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997).
Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/T-AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Compliance (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Affairs Computer Systems: Risks of VBA’s Year 2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997).
Pursuant to a congressional request, GAO discussed the subject of internal control, focusing on: (1) what internal control is; (2) its importance; and (3) what happens when it breaks down. GAO noted that: (1) internal control is concerned with stewardship and accountability of resources consumed while striving to accomplish an agency's mission with effective results; (2) although ultimate responsibility for internal controls rests with management, all employees have a role in the effective operation of internal controls established by management; (3) effective internal control provides reasonable, not absolute, assurance that an agency's activities are being accomplished in accordance with its control objectives; (4) internal control helps management achieve the mission of the agency and prevent or detect improper activities; (5) the cost of fraud cannot always be measured in dollars; (6) in 1982, Congress passed the Federal Managers' Financial Integrity Act requiring: (a) agencies to annually evaluate their internal controls; (b) GAO to issue internal controls standards; and (c) the Office of Management and Budget to issue guidelines for agencies to follow in assessing their internal controls; (7) more recently, Congress has enacted a number of statutes to provide a framework for performance-based management and accountability; (8) weak internal controls pose a significant risk to the government--losses in the millions, or even billions, of dollars can and do occur; (9) GAO and others have reported that weak internal controls over safeguarding and accounting for government property are a serious continuing problem; and (10) GAO's 1997 high-risk series identifies major areas of government operations where the risk of losses to the government is high and where achieving program goals is jeopardized.
The 911 emergency call system is intended to give individuals a simple, easy-to-remember, routinely available number that can be used to reach an appropriate public safety provider during any life-threatening situation. Using a landline telephone, a wireless (mobile) telephone, or a voice over Internet protocol (VoIP) system, a caller dials 911 and the call is routed to a communications provider facility that automatically forwards the call to a public safety entity such as a PSAP. Next, the call taker/dispatcher talks with the caller to determine the nature of the emergency and which first responders are needed, while working to send (or dispatch) the appropriate first responders to the location. According to the National Emergency Number Association, there are more than 6,000 PSAPs nationwide, at a county or city level, that answer more than 240 million 911 calls each year. Figure 1 illustrates the public safety communications and dispatch system, including how an emergency call is typically placed, received, and processed. As illustrated in figure 1, once a 911 caller places an emergency call, the communications provider receives and routes the call to the appropriate PSAP. The system used to route the call depends on the type of telephone used to make the 911 call. Specifically, for a call placed from a landline, a router in the provider’s central facility receives the 911 call and accesses the Automatic Number Identification database to associate the caller’s phone number with a subscriber record and determine the caller’s address. Then, based on the location information, the provider’s Master Street Address Guide database identifies the appropriate PSAP to receive the call. When a cell phone is used, the location information is typically provided to the PSAP through either cell tower triangulation technology or Global Positioning System technology. When using VoIP, where calls are carried over digital subscriber lines, cable modems, or other Internet access methods, the caller needs to register the address of the VoIP device in advance. Current telecommunications and PSAP technology associate the voice and data transmission with the identifier and location databases and, based on the caller’s location, route the call to the appropriate PSAP. The call is then automatically delivered to the appropriate PSAP along with the caller’s phone number and location. The trained 911 call taker/dispatcher assists the caller and inputs information into additional IT systems and infrastructure to begin the emergency response. For example, the call taker/dispatcher may enter information into a computer-aided dispatch system. These systems automate the call-taking process, provide questions and responses for various scenarios, and send the first responders. Based on the information put into the system by the call taker/dispatcher, the computer-aided dispatch system interfaces with other systems for identification and address, ascertains the nature of the assistance needed, and transmits the information to the appropriate first responder. To provide assistance to the first responder, the call taker/dispatcher may also be able to use geospatial tools and systems that provide information on utility placement, government facility types and locations, property plats, mapping data, and aerial photographs.
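The landline routing path described above amounts to two table lookups: phone number to service address (the Automatic Number Identification data), then street segment to serving PSAP (the Master Street Address Guide). The sketch below is illustrative only; the data, names, and simplified matching logic are hypothetical and do not reflect any actual carrier implementation.

```python
# Hypothetical sketch of landline 911 call routing as described above.
# Real ANI/MSAG systems are far more complex; all data here are invented.
ani_database = {"555-0100": ("120 Elm St", "Springfield")}      # number -> service address
msag = {("Elm St", "Springfield"): "Springfield County PSAP"}   # street segment -> PSAP

def route_landline_call(number):
    """Resolve a caller's number to an address, then match the street
    segment to the serving PSAP; return (address, psap) or None."""
    record = ani_database.get(number)
    if record is None:
        return None                              # no subscriber record on file
    house_and_street, city = record
    street = house_and_street.split(" ", 1)[1]   # drop the house number
    psap = msag.get((street, city))
    return (record, psap) if psap else None

print(route_landline_call("555-0100"))
# (('120 Elm St', 'Springfield'), 'Springfield County PSAP')
```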
In addition, call takers may use criminal justice information databases, Internet access, automated vehicle locators to select the closest first responder, and radio and telecommunications services to share and receive information as the situation warrants. While a PSAP is to be available on a 24-hour-a-day, 365-day-a-year basis, an emergency operations center, as noted, is typically only activated during an environmental emergency or special event. It provides a single location for key decision makers from state, local, and federal agencies and multiple jurisdictions to gather and to react to events too complex or too large for regular offices, communications centers, or single government agencies or jurisdictions to handle. From a single location, the officials can support on-scene incident commanders (such as fire, police, and emergency medical personnel), prioritize the allocation of resources, collaborate on strategy and tactics, and manage the fiscal and social consequences of an incident. Emergency operations centers require the same variety of information and communications technology used by PSAPs to fulfill their mission: Internet access, telecommunications services, geospatial tools, and radio systems. In addition, these centers have access to and use public alerting and warning systems to disperse information to citizens, such as sending messages to registered devices (mobile telephones, pagers, electronic mail, etc.), sirens, public address systems, and, in some cases, reverse 911 or similar products that can send warnings to entire communities when the need arises. Public safety entities are undergoing the process of implementing the next generation of 911 services (known as NG 911) to, among other things, improve their capabilities to communicate with callers, increase resiliency of their 911 operations, and enhance information sharing among first responders. NG 911 is expected to use Internet protocol-based broadband technology that is capable of carrying voice plus large amounts of varying types of data, such as instant messaging, wireline calls, VoIP calls, photographs, live video feeds from an emergency scene, and “telematics” (such as advanced automatic crash notification data collected from the vehicle’s computer system). Some states have implemented NG 911 functionality in selected PSAPs in order to ascertain technology requirements and cybersecurity implications, with the intention of moving to full NG 911 capability through multiple releases. For example, in 2011, California began conducting multiple pilots to evaluate different technology platforms for its NG 911 operations, such as hosted or cloud-based technology. In 2012, Vermont completed a 6-month pilot accepting text messages in lieu of voice 911 calls from a wireless carrier. Vermont has also implemented a statewide 911 system that transmits 911 calls to PSAPs using VoIP for its emergency services network. In addition, the Middle Class Tax Relief and Job Creation Act of 2012 required the National Telecommunications and Information Administration (NTIA) and Transportation’s National Highway Traffic Safety Administration to create a program to improve emergency communications throughout the country by facilitating coordination and communication among federal, state, and local emergency communications systems, emergency personnel, public safety organizations, telecommunications providers, and telecommunications equipment manufacturers and vendors involved in the implementation of 911 services.
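As a rough illustration of the multimedia payloads NG 911 is expected to carry, the sketch below defines a hypothetical message record. The field names are invented for illustration and are not drawn from any actual NG 911 standard.

```python
# Purely illustrative: a data structure for an IP-based NG 911 message carrying
# more than voice, as described above. Field names are invented and are not
# drawn from any actual NG 911 standard.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class NG911Message:
    call_id: str
    media_type: str                  # e.g., "voice", "text", "photo", "video", "telematics"
    caller_location: Optional[str]   # device- or network-provided location, if available
    body: bytes = b""                # e.g., a text message or crash-notification data
    attachments: List[str] = field(default_factory=list)  # e.g., photo or video references

msg = NG911Message(
    call_id="abc-123",
    media_type="text",
    caller_location="44.26,-72.58",
    body=b"Crash at Main and 3rd, two people injured",
)
print(msg.media_type, msg.caller_location)
```

Carrying the location and media type explicitly with each message is one design choice that would let a PSAP triage text, photo, and telematics traffic alongside voice calls.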
The Middle Class Tax Relief and Job Creation Act of 2012 also established FirstNet as an independent authority within the NTIA to develop a single nationwide, interoperable public safety broadband network. The FirstNet network is intended to give users functionality beyond current radio communications, such as access to video images of a crime in progress, downloaded floor plans of a burning building, and rapid connection with first responders from other communities. The network is expected to be IP-based and interface with commercial networks to transmit voice, text, photographic, video, and other digital data between PSAPs and first responders using interoperable mobile devices and leveraging NG 911 technology. As of October 2013, the FirstNet network requirements had not been developed; however, the Middle Class Tax Relief and Job Creation Act of 2012 required functionality to include public Internet connectivity over commercial wireless networks or the public switched telephone network. The act also required that FirstNet’s network development ensure the safety, security, and resiliency of the network, including requirements for protecting and monitoring the network to defend against cyberattack. The act does not specify time frames for completing implementation of the network. Like threats affecting other critical infrastructures, threats to the public safety IT infrastructure can come from a wide array of sources. For example, advanced persistent threats—where adversaries possess sophisticated levels of expertise and significant resources to pursue their objectives—pose increasing risk. Other sources include corrupt employees, criminal groups, hackers, and terrorists. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary or political gain or mischief, among other things. Table 1 describes the sources of cyber-based threats in more detail. These sources of cyber threats may make use of various cyber techniques, or exploits, to adversely affect communications networks, and could negatively impact Internet protocol-based NG 911 and FirstNet networks. Types of exploits include denial-of-service attacks, phishing, passive wiretapping, Trojan horses, viruses, worms, and attacks on the IT supply chains that support the communications networks. Table 2 describes the types of exploits in more detail. In addition to cyber-based threats, the nation’s public safety entities also face threats from physical sources. Examples of these threats include natural events (e.g., hurricanes or flooding) and man-made disasters (e.g., terrorist attacks), as well as unintentional man-made outages (e.g., a backhoe cutting a communication line). For example, after a major storm in June 2012, several cities and counties in Virginia experienced a total outage of telephone service supporting 911 that continued for 5 days. The loss of commercial power and the subsequent failure of one of the two backup generators in the common carrier’s facilities were the predominant causes of the service outage. In addition, the lack of physical diversity in the telephone circuits supporting 911 and the failure to monitor the telephone circuits contributed to the disruption of PSAP operations during the outage. While not related to public safety entities’ internal IT, these organizations, specifically PSAPs, have been the target of attacks and pranks. For example, in March 2013, the National Emergency Number Association reported that more than 200 telephony-based attacks had been identified.
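Telephony-based attacks of this kind, described further below, typically flood a PSAP's administrative lines with a continuous stream of calls. One common mitigation concept, sketched here with entirely hypothetical thresholds and class names, is to monitor call arrival rates against a baseline and alert when the rate becomes abnormal.

```python
# Hypothetical sketch of rate-based detection for a telephony denial-of-service
# pattern: a continuous stream of calls flooding a PSAP's nonemergency lines.
# The window and threshold values here are invented for illustration.
from collections import deque

class CallRateMonitor:
    def __init__(self, window_seconds=60, threshold=50):
        self.window = window_seconds
        self.threshold = threshold   # calls per window considered abnormal
        self.arrivals = deque()      # timestamps of recent call arrivals

    def record_call(self, timestamp):
        """Record a call arrival; return True if the recent rate is anomalous."""
        self.arrivals.append(timestamp)
        while self.arrivals and self.arrivals[0] < timestamp - self.window:
            self.arrivals.popleft()
        return len(self.arrivals) > self.threshold

monitor = CallRateMonitor()
# A burst of 60 calls within one minute trips the alert:
alerts = [monitor.record_call(t) for t in range(60)]
print(any(alerts))  # True
```

In practice, an alert like this would more likely feed a response such as diverting suspect traffic away from emergency lines rather than blocking calls outright, since a PSAP cannot risk dropping genuine emergencies.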
These telephony-based attacks were part of an extortion scheme demanding payment for an outstanding debt purportedly owed to an individual or organization. When payment was not made, the perpetrator launched an attack that inundated the PSAP’s administrative, nonemergency lines with a continuous stream of calls for a lengthy period of time. In addition, PSAPs have received false emergency calls from pranksters who use Internet protocol-based telephone technology to camouflage the source of the call. In these cases, a caller reported a serious incident in progress, such as an armed robbery or a home invasion, and false address or location information made it appear that the call was coming from a different address. For example, in April 2013, news media reported that multiple celebrities’ homes were swarmed by police after fake 911 calls were made reporting a crime in progress involving people armed with guns or bombs. The incidents occupied public safety resources that otherwise could have been available to receive and respond to actual emergency calls. Although state and local governments are responsible for the operation and cybersecurity of their public safety entities, federal law, policy, and plans specify roles and responsibilities for the Departments of Homeland Security, Commerce, Transportation, and Justice and the Federal Communications Commission to support state and local governments’ cybersecurity efforts. These agencies are responsible for performing one or more of the following cybersecurity-related coordination roles and responsibilities: (1) supporting critical infrastructure protection-related planning, (2) issuing grants, (3) sharing information, (4) providing technical assistance, and (5) regulating and overseeing essential functions. DHS is responsible for leading, integrating, and coordinating the implementation of efforts to protect the nation’s cyber-reliant critical infrastructures. The Homeland Security Act of 2002 created DHS and, among other things, assigned it the following critical infrastructure protection responsibilities: (1) developing a comprehensive national plan for securing the critical infrastructures of the United States, (2) recommending measures to protect those critical infrastructures in coordination with other groups, and (3) disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of, or response to, terrorist attacks. In addition, under the act, DHS is required to provide to state and local government entities analysis and warnings related to threats and vulnerabilities to their critical information systems, crisis management support in response to threats or attacks, and technical assistance with emergency recovery plans for critical information systems. In 2003, Homeland Security Presidential Directive 7 (HSPD-7) established DHS as the principal federal agency to lead, integrate, and coordinate the implementation of efforts to protect cyber-critical infrastructures and key resources. In addition, HSPD-7 identified lead federal agencies, referred to as sector-specific agencies, that are responsible for coordinating critical infrastructure protection efforts with the public and private stakeholders in their respective sectors. In 2009, in accordance with the Homeland Security Act, DHS issued the National Infrastructure Protection Plan (NIPP).
The plan sets forth a risk management framework and details the roles and responsibilities of DHS in protecting the nation’s critical infrastructures; identifies agencies with lead responsibility for coordinating with the sectors (or sector-specific agencies); and specifies how other federal, state, regional, local, tribal, territorial, and private-sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. As the sector-specific agency for the emergency services sector, DHS is to coordinate protective programs and resilience strategies for the sector. The emergency services sector comprises assets, systems, and networks supporting law enforcement, fire and emergency services, emergency management, emergency medical services, and public works functions at the state, local, tribal, and territorial levels of government. Based on the NIPP, DHS is tasked with, among other things, updating the sector-specific plans, coordinating sector training, and maintaining information-sharing mechanisms. DHS is to collaborate with public- and private-sector stakeholders through government and sector coordinating councils to develop and implement the sector-specific plan in order to identify and protect critical infrastructure assets. In addition, DHS is responsible for the state, local, tribal, and territorial cybersecurity engagement program, which was established to build partnerships with nonfederal public stakeholders, including governors, mayors, state homeland security advisors, chief information officers, and chief information security officers, in order to advance the department’s mission of protecting critical network systems and ensuring the use of the Internet as a resource to connect with citizens. In February 2013, Presidential Policy Directive 21 directed the Secretary of Homeland Security to update the NIPP by October 2013. However, it states that all plans remain in effect until specifically revoked or superseded. It also revoked HSPD-7 but continued to identify DHS as the sector-specific agency for the emergency services sector. The directive also identified sector-specific agency roles and responsibilities for their respective sectors to include (1) coordinating with federal agencies and collaborating with state, local, territorial, and tribal entities; (2) serving as a day-to-day federal interface for the prioritization and coordination of activities; (3) carrying out, consistent with law and policy, incident management; and (4) supporting sector identification of vulnerabilities. DHS released an updated NIPP in December 2013. Federal law and policy also establish a role for Commerce in protecting the nation’s communications networks. For example, the Telecommunications Authorization Act of 1992 established Commerce’s National Telecommunications and Information Administration (NTIA) as the principal presidential adviser on telecommunications and information policies. NTIA activities include administering grant programs that further the deployment and use of broadband and other technologies, and developing policy on issues related to the Internet economy, including cybersecurity. In addition, as discussed previously, the Middle Class Tax Relief and Job Creation Act of 2012 required NTIA and Transportation’s National Highway Traffic Safety Administration to create a program to improve emergency communications throughout the country.
The act established FirstNet as an independent authority within the NTIA to develop a single nationwide, interoperable public safety broadband network. FirstNet’s responsibilities include leading network development, including obtaining grants and funds from, and making contracts with, among others, private companies and federal, state, regional, and local agencies. FirstNet is also to ensure the safety, security, and resiliency of the FirstNet network, including requirements for protecting and monitoring the network to protect against cyberattack. Based on provisions in the ENHANCE 911 Act of 2004 and the Middle Class Tax Relief and Job Creation Act of 2012, Transportation, through the National Highway Traffic Safety Administration, coordinates 911 services at the federal, state, and local levels. Specifically, Transportation operates a program to facilitate coordination and communication between federal, state, and local emergency communications systems, emergency personnel, public safety organizations, telecommunications carriers, and telecommunications equipment manufacturers and vendors involved in the implementation of 911 services. Coordination activities include resources and technical assistance provided to state and local 911 authorities, such as grants supporting upgrades to PSAP equipment and operations and education about implementing new 911 technologies. The Federal Bureau of Investigation (FBI), under Justice, leads the nation’s efforts in investigating cyber-based crimes, including computer intrusions and major cyber fraud. The FBI shares cyber-related information with state and local governments that could be law enforcement sensitive or classified. In particular, the Internet Crime Complaint Center, a partnership between the FBI and the National White Collar Crime Center, is to receive Internet-related criminal complaints; research, develop, and refer the criminal complaints to federal, state, local, and international law enforcement; and issue alerts to affected entities. The Federal Communications Commission (FCC) regulates interstate and international communications by radio, television, wire, satellite, and cable throughout the United States. Agency officials stated that FCC is to promote the reliability, resiliency, and availability of the nation’s communications networks at all times, including in times of emergency or natural disaster. Further, it has the authority to adopt, administer, and enforce rules related to cybersecurity, communications reliability, and 911 and emergency alerting. Its regulations include requirements for certain communications providers to report on the reliability and security of communications infrastructures. These include requirements for reporting service disruptions and outages. For example, communications providers are required to report service outages and related issues that meet specific thresholds that affect public safety communications and emergency response. Also, the FCC engages in public-private partnerships through federal advisory committees such as its Communications, Security, Reliability, and Interoperability Council. The Council develops and provides recommendations to the FCC regarding best practices and actions that can be taken to ensure optimal security, reliability, and interoperability of commercial and public safety communications systems.
Among other efforts, the Council’s working group is responsible for assessing and making recommendations concerning technical standards, related technical gaps, and overall readiness of the legacy 911 system for accepting information generated by NG 911 applications. Working group members include representatives from federal, state, and local governments, the telecommunications industry, and industry associations. In addition, the FCC is required by the Middle Class Tax Relief and Job Creation Act of 2012 to reallocate spectrum for use by public safety entities and to grant a license for that spectrum to FirstNet. Under the act, the FCC is required to establish an advisory board to develop recommended technical requirements to ensure a nationwide level of interoperability for the network. The board is to be known as the “Technical Advisory Board for First Responder Interoperability.” FCC is also required, in coordination with DHS and the National Highway Traffic Safety Administration, to make recommendations to Congress regarding the legal and statutory framework for NG 911 services, to include security standards. Presidential Policy Directive 21 requires the FCC to exercise its authority and expertise to partner with DHS, as well as other federal departments and agencies, to: (1) identify and prioritize communications infrastructure; (2) identify communications sector vulnerabilities and work with industry and other stakeholders to address those vulnerabilities; and (3) work with stakeholders, including industry, and engage foreign governments and international organizations to increase the security and resilience of critical infrastructure within the communications sector and facilitate the development and implementation of best practices promoting the security and resilience of critical communications infrastructure. The five identified federal agencies have, to varying degrees, coordinated cybersecurity-related activities with state and local governments. Agencies’ activities include (1) supporting critical infrastructure protection-related planning, (2) issuing grants, (3) sharing information, (4) providing technical assistance, and (5) regulating and overseeing essential functions. However, except for supporting critical infrastructure planning, federal activities were generally not targeted towards or focused on public safety entities. For example, DHS collaborated with state and local governments through the Sector Coordinating Council to complete critical infrastructure planning efforts. Regarding grants to enhance emergency services, sharing cybersecurity-related information, providing technical assistance, and regulating and overseeing essential functions, federal agencies’ coordination activities with state and local governments were generally not targeted to public safety entities’ cybersecurity. However, federal agencies performed some coordination-related activities directed to public safety entities, including issuing alerts about cyber-based attacks to public safety entities, performing risk assessments, providing technical assistance through education and awareness efforts, and administering grants that allowed for expenditures for IT equipment and cybersecurity tools. In accordance with the NIPP, DHS coordinated with state and local governments through the Emergency Services Sector Coordinating Council to develop a draft plan to address the protection of emergency services sector critical infrastructure and key resources.
During the process, DHS solicited and obtained input from federal and nonfederal stakeholders through the established government and sector coordinating councils. For example, FCC officials from the Public Safety and Homeland Security Bureau stated that they had coordinated with DHS and other federal entities in the development of the plan. In 2010, DHS issued the Emergency Services Sector-Specific Plan, which addresses, among other things, the cybersecurity of public safety entities such as PSAPs, emergency operations centers, and first responder agencies. The Emergency Services Sector Coordinating Council acknowledged within the plan that it had provided input during the development process and would work with the various partners to support implementation of the plan.

The Emergency Services Sector-Specific Plan identifies activities that the sector can take to mitigate the overall risk to key assets, systems, networks, or functions, and to mitigate vulnerabilities or minimize the consequences associated with a terrorist attack or other incident. The plan lists protective programs and resilience strategies for the human, physical, and cyber-critical infrastructure supporting the sector that are available to members of the emergency services sector to assist them in protecting their critical assets. The cyber-related protective programs include homeland security grants, the cross-sector cybersecurity working group, and cyber exercises. The plan is intended to serve as a guide for the sector, including the public safety entities, to set protective program goals and objectives, identify assets, assess risks, prioritize infrastructure components and programs to enhance risk mitigation, implement protective programs, measure program effectiveness, and incorporate research and development of technology initiatives into sector planning efforts. For example, it states that the sector must be able to determine the hardware and software components critical to supporting the sector's mission, including the computers, databases, and other IT assets. Further, DHS, through the plan, recognized the risk of cyber attacks on PSAP systems, such as attacks on computer-aided dispatch systems, and noted that such attacks would seriously impede the sector's ability to react and respond swiftly to incidents.

In addition, the NIPP and the Emergency Services Sector-Specific Plan identified the need to assess risks to the sector. In 2012, DHS, based on a collaborative effort with state and local entities and the private sector, issued the Emergency Services Sector Cyber Risk Assessment, which documents DHS and sector subject matter experts' evaluation of the threats, vulnerabilities, and consequences to the sector's cyber infrastructure. The assessment identified intentional and unintentional threats, including cyber-related threats that could disrupt or degrade a PSAP's 911 service capabilities. For example, a cyber-related threat could target a computer-aided dispatch system or geospatial database, compromising the availability of geographical information and other technical support and reducing the effectiveness of the emergency response. The risk assessment also stated that vulnerabilities to the common carriers' address and location databases are the responsibility of the common carriers and are not within the control of a PSAP.
It further stated that the next step is to determine how identified risks should be addressed, an effort that will require continued public- and private-sector collaboration. Also, according to the risk assessment, the sector is to develop a strategy for mitigating risks throughout the sector. According to DHS officials responsible for emergency services sector activities, the strategy is scheduled for completion and approval by the end of the second quarter of fiscal year 2014.

While DHS and the Emergency Services Sector Coordinating Council addressed aspects of cybersecurity of the current environment in the 2010 Emergency Services Sector-Specific Plan, they did not address the development and implementation of NG 911 and the FirstNet network at public safety entities. According to the NIPP, sector-specific plans are to identify activities to mitigate overall risk to the key assets, systems, networks, or functions, and to mitigate vulnerabilities or minimize the consequences associated with a terrorist attack or other incident. As the sector-specific agency, DHS is tasked with developing and updating the sector-specific plans in coordination with public and private sector stakeholders through government and sector coordinating councils. However, DHS and the coordinating councils had not yet incorporated cybersecurity protections for NG 911 and the FirstNet network into the sector plan, in part because the revision cycle had not occurred and FirstNet was not established until 2012. According to DHS officials, the process for updating the sector-specific plans will begin after the revised NIPP has been released. A revised NIPP was released in December 2013, and, according to DHS, a new sector-specific plan is estimated to be completed in December 2014. Until DHS, in collaboration with stakeholders, develops the next iteration of the sector-specific plan, it is unclear whether the cybersecurity implications of implementing these technologies will be considered. Comprehensive planning based on effective coordination between federal and nonfederal emergency services sector stakeholders could better position the sector to identify and mitigate the increased cyber-based risks of the NG 911 and FirstNet network technologies. Without such planning, information systems are at an increased risk of failure or being unavailable at critical moments.

Federal grant programs have been used to fund technology enhancements at state and local public safety entities to address the evolution in communications technology; these grants could include allocations for cybersecurity enhancements at the grantee's option. The National Highway Traffic Safety Administration and NTIA allocated $43.5 million in grants to states over a 3-year period, starting in September 2009, to help implement enhancements to 911 system functionality to address the increase in 911 calls from cell phones and the future plans for PSAPs to handle text and other message formats. Eligible expenses under the grant requirements fell into four categories: administrative expenses, training, consulting, and hardware and software. The grant period concluded at the end of fiscal year 2012. In all, the National Highway Traffic Safety Administration and NTIA awarded grants ranging from $200,000 to $5.4 million to 30 states and territories to help implement 911 system enhancements.
While cybersecurity was not specified as a requirement in the grant program's eligible uses of funds, it was not precluded from the allowed uses of the funds. In March 2013, the National Highway Traffic Safety Administration and NTIA reported that state governments used the majority of the funds to procure hardware and software to develop the IP-based infrastructure in preparation for their eventual migration to the NG 911 environment.

DHS's Federal Emergency Management Agency (FEMA) offered preparedness program funds to state and local governments in order to enhance their emergency response capabilities. Although FEMA does not have a specific grant program for cyber-related purchases, cybersecurity and IT equipment (e.g., personal and network firewalls, authentication devices, and intrusion detection systems) are among the allowable equipment listed under these grants. The grants also allow for purchases of PSAP-related IT, such as computer-aided dispatch systems, global positioning systems, and automatic vehicle locating systems. According to FEMA officials, the grant money is typically distributed to state governments that, in turn, allocate the funds to local governments for their public safety entities and for other local government operations.

Justice, through its Office of Justice Programs, provided grants to local governments to support cyber forensics, cyber crime investigations, and related training. Justice reported that fiscal year 2012 grants funded computer equipment purchases, training for law enforcement personnel, and cyber crime awareness and prevention programs. However, based on our analysis of cyber-related grant information provided by Justice officials, the grants were not used for cybersecurity within the local public safety organizations; rather, local governments used the grants to enhance their capabilities to perform cyber forensics and cyber crime investigations.

At the time of our review, NTIA officials involved in the FirstNet network's implementation stated that NTIA had distributed grants totaling $122 million to state and local governments for planning and conducting studies to determine the infrastructure, equipment, and architecture requirements for FirstNet's network development. FCC officials did not identify grant programs that directly or indirectly target improving the security of the networks and computer systems at state and local public safety entities.

DHS shared cybersecurity-related information, such as threats and hazards, with state and local governments through various entities. While the information was not uniquely targeted to public safety entities, it may be of benefit to them. Specifically, DHS collected, analyzed, and disseminated cyber threat and cybersecurity-related information to state and local governments through its National Cybersecurity and Communications Integration Center and through its relationship with the Multi-State Information Sharing and Analysis Center. DHS's State, Local, Tribal, and Territorial Engagement Office's Security Clearance Initiative facilitates the granting of security clearances to state chief information officers and chief information security officers. The clearances allow these personnel to receive information about current and recent cyber attacks and threats. For example, according to DHS officials, they have issued secret clearances to 48 percent of state chief information officers and 84 percent of state chief information security officers.
DHS provides intelligence information to fusion centers, which then share the information on possible terrorism and other threats and issue alerts to state and local governments. For example, in March 2013, a fusion center issued a situational awareness bulletin specific to public safety entities. The alert was about possible telephony denial-of-service attacks targeting PSAPs' administrative (non-911) telephone lines. The FBI's Internet Crime Complaint Center has also provided alerts to PSAPs. For example, in April 2013, the FBI's Internet Crime Complaint Center warned PSAPs about telephony denial-of-service attacks targeting them and advised victims to report incidents to law enforcement. The advisory noted that dozens of such attacks had targeted administrative PSAP lines and that the attacks were part of an extortion scheme demanding payment of an outstanding debt to an individual or organization. The perpetrator launched an attack that inundated the PSAP with a continuous stream of calls for a lengthy period of time.

DHS, Transportation, Commerce, and the FCC had coordinated with state and local governments to provide technical assistance, including awareness training on cybersecurity threats and available resources, guidance to strengthen their cybersecurity posture, and cyber exercises and cybersecurity assessments to help them identify cyber vulnerabilities. The technical assistance was provided to public safety entities in a few instances, but was generally not targeted to them. DHS's state and local government-focused activities included:

- Performing outreach to state governors, chief information officers, and chief information security officers to build awareness of cybersecurity threats and DHS technical resources available to them.

- Conducting, since 2010, 114 cyber resilience reviews, including at least 1 that was focused on a local government's 911 and emergency management cyber operations, in order to enhance the cybersecurity posture of state and local government partners. These reviews were free, voluntary, and covered the entities' cybersecurity practices regarding the management of assets, controls, incidents, service continuity, and risk.

- Leading 33 cyber-related exercises since 2006 with state, local, and territorial government partners to test and evaluate plans and policies to handle cyber incidents.

- Providing financial support to the Multi-State Information Sharing and Analysis Center (e.g., $6.7 million in 2012), whose members represent the 50 states, 4 U.S. territories, 4 tribal nations, and hundreds of municipalities. Its security operations center provides intrusion prevention support services for state and local government systems by actively monitoring their networks. Currently the monitoring covers 22 states, 7 local governments, and 1 territory.

- Developing and administering, in coordination with the Multi-State Information Sharing and Analysis Center and the National Association of State Chief Information Officers, a national cybersecurity questionnaire, which was distributed to state and local governments to identify weaknesses and strengths in their cybersecurity processes. In March 2012, DHS and the Multi-State Information Sharing and Analysis Center jointly issued the survey report, which identified key challenges faced by state and local governments, including a low overall awareness of risks to their systems, a lack of information security and disaster recovery plans, and a less mature cybersecurity capability among local governments. According to DHS officials, DHS has partnered with the Multi-State Information Sharing and Analysis Center to complete a second iteration of the survey and plans to report the results in March 2014.

- Performing outreach activities, including presentations at professional conferences on cybersecurity and related available federal resources, with organizations such as the National Emergency Management Association and the International Association of Chiefs of Police that represent public safety professionals.

- Providing, through FEMA, technical assistance to 12 state-level emergency operations centers in 2010 through 2012 that was not cybersecurity specific but could benefit public safety entities' IT infrastructure. For example, FEMA assisted states with their information sharing and coordination capability and with emergency operations center design and management functions.

FCC, Transportation, and Commerce have also provided technical assistance to state and local governments that was not targeted to the cybersecurity of public safety entities, but could benefit their operations. FCC provided technical assistance via its website to state and local governments by issuing guidelines for planning for the continuity of their PSAP operations and managing the security and operability of the PSAP communications systems and networks during emergencies. PSAPs may choose to implement the FCC guidelines, which are voluntary, to further develop, enhance, and expand their current emergency and disaster preparedness, response and recovery plans, and strategic approach to their overall emergency communications plans. According to FCC officials responsible for making the guidance available, FCC does not track the use of the guidance by PSAPs. In addition, according to public safety officials from one county, FCC provided technical assistance to resolve communications problems that arose due to issues with radio frequencies and signals. FCC also worked with state and local governments to incorporate federal access control standards into FirstNet development efforts.

Transportation and Commerce also provided technical assistance activities that, while not cybersecurity related, were intended to help enhance the technology infrastructure for PSAPs and to improve coordination and communications among federal, state, and local emergency communications systems and others involved in the implementation of enhancements to 911 services. Specifically, the National Highway Traffic Safety Administration and NTIA jointly offer educational services and technical and operational information to state and local governments on implementing new technology such as IP-enabled 911 services.

FCC's regulatory oversight of the reliability and availability of telecommunications services does not directly impact state and local public safety entities. However, public safety entities may benefit from FCC actions because the telecommunications providers' services are essential to the public safety entities' ability to receive emergency calls and dispatch first responders to the correct location.
For example, the FCC requires reporting on telecommunications service outages, and its outage reporting guidelines include requirements to report extended service interruptions (exceeding 30 minutes) potentially affecting 911 call centers. In addition, in June 2013, FCC established an e-mail address for PSAPs to voluntarily report communications provider outages they experience directly to the FCC. Further, in December 2013, FCC released an order adopting new rules requiring providers to take reasonable measures to provide reliable 911 service with respect to circuit diversity, central office backup power, and diverse network monitoring.

The identified federal agencies coordinate to varying degrees with state and local governments about the cybersecurity of the IT relied on to receive and respond to 911 communications by PSAPs, first responder agencies, and emergency operations centers. Federal cybersecurity efforts may indirectly benefit public safety entities, but these efforts are generally not targeted to them. While DHS collaborated with state and local governments for critical infrastructure planning for the emergency services sector, the current plan does not incorporate cybersecurity protections for NG 911 and the FirstNet network. Until DHS, in collaboration with stakeholders, develops the next iteration of the sector-specific plan, it is unclear whether the cybersecurity implications of implementing NG 911 and the FirstNet network will be considered. As these new technologies are adopted to enhance the capabilities of public safety entities, cyber risks will increase. Thus, effective federal cybersecurity coordination, including critical infrastructure protection planning with state and local governments concerning their public safety entities, could better position the sector to identify and mitigate these risks.

We recommend that the Secretary of Homeland Security, in collaboration with emergency services sector stakeholders, address the cybersecurity implications of implementing NG 911 and the FirstNet network in the next iteration of sector plans.

We provided a draft of this report to the Departments of Homeland Security, Commerce, Justice, and Transportation, and the Federal Communications Commission for their review and comment. DHS provided written comments on our report (see app. II), signed by DHS's Director of the Departmental GAO-OIG Liaison Office. In its comments, DHS concurred with our recommendation. In addition, DHS stated that the revised NIPP was released in December 2013 and that it will work with sector partners to develop an updated Emergency Services Sector-Specific Plan that will include consideration of both NG 911 and the FirstNet network. DHS estimated completing the updated sector plan by December 31, 2014. FCC also provided written comments on a draft of our report (see app. III), signed by the Chief, Public Safety and Homeland Security Bureau. FCC stated that without coordination on public safety cybersecurity matters among federal, state, and local governments, the problems outlined in this report will not be properly addressed. Further, FCC agreed that the current Emergency Services Sector-Specific Plan does not provide the detail necessary to address the threat. Audit liaisons from DHS, FCC, and Justice also provided technical comments via e-mail, which we incorporated where appropriate.
We are sending copies of this report to interested congressional committees; the Secretaries of the Departments of Commerce, Homeland Security, and Transportation; the Attorney General of the United States; the Chairman of the Federal Communications Commission; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

Our objective was to determine the extent to which federal agencies coordinated with state and local governments regarding cybersecurity efforts at emergency operations centers, public safety answering points, and first responder agencies involved in handling emergency calls. The scope of our audit focused on identified federal agencies that have roles and responsibilities for coordinating cybersecurity efforts with state and local governments. We also included state and local governments and related public safety entities and key industry associations that are involved in handling emergency calls or work closely with or represent those in the emergency communications industry.

To identify the roles of federal agencies and select the organizations responsible for coordinating cybersecurity efforts with state and local governments for public safety entities, we reviewed relevant federal law, policy, regulation, and critical infrastructure protection-related strategies, including the following:

- Homeland Security Act of 2002;
- Middle Class Tax Relief and Job Creation Act of 2012;
- Implementing Recommendations of the 9/11 Commission Act of 2007;
- 2009 National Infrastructure Protection Plan;
- 2010 Emergency Services Sector-Specific Plan;
- 2012 Emergency Services Sector Cyber Risk Assessment;
- 2003 National Strategy to Secure Cyberspace;
- Department of Homeland Security's Information Sharing Strategy;
- Presidential Policy Directive 21—Critical Infrastructure Security and Resilience, February 12, 2013;
- Executive Order 13618—Assignment of National Security and Emergency Preparedness Communications Functions, July 6, 2012;
- Executive Order 13636—Improving Critical Infrastructure Cybersecurity, February 19, 2013; and
- Title 47, Code of Federal Regulations, sections 4.5, 4.9, and 12.3, and Part 400.

We analyzed these documents to identify federal agencies responsible for coordinating with state and local governments regarding cybersecurity-related activities, including partnering with state and local government emergency services organizations to fulfill planning and assessment efforts, providing technical assistance, and sharing relevant information about threats, vulnerabilities, and mitigation techniques. In addition, we analyzed these documents to determine other methods that could support the cybersecurity of emergency services, to include administering grants related to improving 911 services or regulating essential functions such as communications.
Based on our analysis, we determined that the Departments of Homeland Security, Commerce, Justice, and Transportation and the Federal Communications Commission were the key federal entities relevant to our objective, and we identified five key activities related to cybersecurity coordination against which to evaluate the federal entities: (1) supporting critical infrastructure protection-related planning, (2) issuing grants, (3) sharing information, (4) providing technical assistance, and (5) regulating and overseeing essential functions.

To determine the identified federal entities' coordination efforts related to these activities, we collected and analyzed relevant plans and reports dated from 2009 to 2013. For example, we analyzed DHS's State, Local, Tribal and Territorial Cybersecurity Engagement Program efforts to build partnerships with non-federal partners to advance DHS's mission of protecting critical network systems. To gain a better understanding of grants and how they are issued, we analyzed the Federal Emergency Management Agency's guidance on public safety grant funds, Transportation's administration of the E911 grant program, and Justice's reports on Office of Justice Programs grantees. To determine the responsibilities of various agencies in regulating and overseeing functions, we analyzed various laws to determine DHS's responsibilities to state and local entities and the Federal Communications Commission's outage reporting requirements. In addition, we interviewed officials from the Department of Homeland Security's Office of Cybersecurity and Communications, Office of Infrastructure Protection, and Federal Emergency Management Agency; Commerce's National Telecommunications and Information Administration and First Responder Network Authority; Justice's Federal Bureau of Investigation, Justice Management Division, and Office of Community Oriented Policing Services; Transportation's National Highway Traffic Safety Administration; and the Federal Communications Commission's Public Safety and Homeland Security Bureau.

To confirm federal efforts and gain an understanding of how public safety entities operate, we analyzed relevant policies, plans, and reports, such as the National Emergency Number Association's Primer on the 911 Call Process, Recommended Best Practices Checklist Against TDoS Attacks, and Emergency Number Professional's Reference Manual; the California 911 Emergency Communications Office's explanation of E911 Call Flow; the Metropolitan Washington Council of Governments' Final Report on 911 Service Gaps During and Following the Derecho Storm on June 29, 2012; the Federal Communications Commission's report on the Impact of the June 2012 Derecho on Communications Networks and Services; and How 911 Works by Julia Layton. In addition, we interviewed officials familiar with emergency operations and/or cybersecurity aspects of state and local governments from the National Association of State Chief Information Officers, National Emergency Number Association, National Emergency Managers Association, National Association of State 911 Administrators, Multi-State Information Sharing and Analysis Center, International Association of Fire Chiefs, and National Governors Association. We also interviewed state and local government officials familiar with emergency communication operations, selected based on proximity of location, leadership in national associations, and/or involvement in ongoing technology enhancements.
For example, we interviewed public safety officials from the Alabama 911 Board; Arlington County, Virginia; the California Public Safety Communications Office; Fairfax County, Virginia; Orange County, California; Overland Park, Kansas; and Wake County, North Carolina. In addition, to gain a better understanding of cybersecurity and the activities performed at public safety answering points and emergency operations centers and their interaction with first responders, we reviewed and analyzed the operations and responsibilities of the California Technology Agency Public Safety Communications Office and observed the operations of the McConnell Public Safety and Transportation Operations Center in Fairfax County, Virginia, and the Arlington County Emergency Communications Center in Arlington, Virginia. We determined that the information provided by the federal, state, and local agencies, such as plans, guidelines, and manuals, was sufficiently reliable for the purposes of our review. To arrive at this assessment, we corroborated the information by comparing it with statements from relevant agency officials.

We conducted this performance audit from November 2012 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.

GAO staff who made significant contributions to this report include Michael W. Gilmore, Assistant Director; Nancy Glover; Barbarol James; Kenneth A. Johnson; David Plocher; and Adam Vodraska.
Individuals can contact fire, medical, and police first responders in an emergency by dialing 911. To provide effective emergency services, public safety entities such as 911 call centers use technology, including databases that identify the phone number and location data of callers. Because these critical systems are becoming more interconnected, they are also increasingly susceptible to the cyber-based threats that accompany the use of Internet-based services. This, in turn, could impact the availability of 911 services.

GAO was asked to review federal coordination with state and local governments regarding cybersecurity at public safety entities. The objective was to determine the extent to which federal agencies coordinated with state and local governments regarding cybersecurity efforts at emergency operations centers, public safety answering points, and first responder organizations involved in handling 911 emergency calls. To do so, GAO analyzed relevant plans and reports and interviewed officials at (1) five agencies that were identified based on their roles and responsibilities established in federal law, policy, and plans and (2) selected industry associations and state and local governments.

The five identified federal agencies (the Departments of Homeland Security, Commerce, Justice, and Transportation and the Federal Communications Commission (FCC)) have, to varying degrees, coordinated cybersecurity-related activities with state and local governments. These activities included (1) supporting critical infrastructure protection-related planning, (2) issuing grants, (3) sharing information, (4) providing technical assistance, and (5) regulating and overseeing essential functions. However, except for supporting critical infrastructure planning, federal coordination of these activities was generally not targeted towards or focused on the cybersecurity of state and local public safety entities involved in handling 911 emergency calls.

Under the critical infrastructure protection planning activity, the Department of Homeland Security (DHS) coordinated with state and local governments and other federal stakeholders to complete the Emergency Services Sector-Specific Plan. The plan is to guide the sector, including the public safety entities, in setting protective program goals and objectives, identifying assets, assessing risks, prioritizing infrastructure components and programs to enhance risk mitigation, implementing protective programs, measuring program effectiveness, and incorporating research and development of technology initiatives into sector planning efforts. It also addressed aspects of cybersecurity of the current environment. However, the plan did not address the development and implementation of more interconnected, Internet-based planned information technologies, such as the next generation of 911 services. According to DHS officials, the plan did not address these technologies in part because the process for updating the sector-specific plan will begin after the release of the revised National Infrastructure Protection Plan, a unifying framework to enhance the safety of the nation's critical infrastructure. A revised plan was released in December 2013, and, according to DHS, a new sector-specific plan is estimated to be completed in December 2014. Until DHS, in collaboration with stakeholders, addresses the cybersecurity implications of the emerging technologies in planning activities, information systems are at an increased risk of failure or being unavailable at critical moments.
Under the other four activities, federal agencies performed some coordination-related activities for public safety entities, including administering grants for information technology enhancements, sharing information about cyber-based attacks, and providing technical assistance through education and awareness efforts. For example, the Departments of Transportation and Commerce allocated $43.5 million in grants to states over a 3-year period, starting in September 2009, to help implement enhancements to 911 system functionality. While these grants were not targeted towards the cybersecurity of these systems, cybersecurity was not precluded from the allowed use of the funds.

GAO recommends that the Secretary of Homeland Security collaborate with emergency services sector stakeholders to address the cybersecurity implications of implementing technology initiatives in related plans. DHS concurred with GAO's recommendation.
Currently, DOD has five major unmanned aircraft systems in use: the Air Force's Predator A and Global Hawk, the Marine Corps' Pioneer, and the Army's Hunter and Shadow. The services also have developmental efforts underway, for example, the Air Force's Predator B, the Army and Navy's vertical take-off and landing system, and the Army's Warrior. Overall, DOD now has about 250 unmanned aircraft in inventory and plans to increase its inventory to 675 by 2010 and 1,400 by 2015. The 2006 Quadrennial Defense Review reached a number of decisions that would further expand investments in unmanned systems, including accelerating production of Predator and Global Hawk. It also established a plan to develop a new land-based, long-range strike capability by 2018 and set a goal that about 45 percent of the future long-range strike force be unmanned.

DOD expects unmanned aircraft systems to transform the battlespace with innovative tactics, techniques, and procedures, as well as take on the so-called "dull, dirty, and dangerous" missions without putting pilots in harm's way. Potential missions for unmanned systems have expanded from the original focus on intelligence, surveillance, and reconnaissance to limited tactical strike capabilities. Projected plans call for unmanned aircraft systems to perform persistent ground attack, electronic warfare, and suppression of enemy air defenses.

Unmanned aircraft fly at altitudes ranging from below 10,000 feet up to 50,000 feet and are typically characterized by approximate altitude—"low altitude" if operating at 10,000 feet or less, "medium altitude" if flying above 10,000 but below 35,000 feet, and "high altitude" if operating above 35,000 feet. The Army classifies Warrior as a medium-altitude system, in the same category as its Hunter system, its Warrior prototype known as I-GNAT, and the Air Force's Predator A. The Air Force's Predator B is expected to operate at both medium and high altitudes.

The Warrior as envisioned by the Army shares some similarities with the Air Force's Predator A and B models. First, all three systems share the same contractor, General Atomics. Second, Predator A and Warrior are expected to be somewhat similar in physical characteristics; in particular, the build of the main fuselage, the location of the fuel bays, and the design of the tailspar are alike. According to Army program officials, the Predator B and Warrior are expected to share the same flight controls and avionics. Predator A and Warrior are also anticipated to perform some similar missions, including reconnaissance, surveillance, and target acquisition and attack.

The development of the Warrior program began in late 2001, when the Army started defining requirements for a successor to its Hunter system. In September 2004, the Army released a request for a "systems capabilities demonstration" so that companies could demonstrate the capabilities of their existing aircraft. In December 2004, the Army awarded demonstration contracts worth $250,000 each to two contractors, Northrop Grumman and General Atomics. Subsequently, the Army evaluated, among other things, the demonstrated capabilities of the competitors' existing aircraft in relation to Warrior technical requirements.
The Army did not perform a formal analysis of alternatives comparing the expected capabilities of Warrior with the current capabilities offered by existing systems; rather, its rationale was that Warrior is needed in the near term for commanders' missions, and it considered this competition to be a rigorous analysis of available alternatives. Based on the competition, the Army concluded that General Atomics' proposal (based on Warrior) provided the best value solution. In August 2005, the Army awarded the system development and demonstration (SDD) contract to General Atomics. The contract is a cost-plus-incentive-fee contract with an award fee feature. It has a base value of about $194 million, with approximately another $15 million available to the contractor in the form of incentive fees and about an additional $12 million available as award fees. The time line in figure 1 illustrates the sequence of past and planned events for the Warrior program.

The Army plans for a full Warrior system to entail 12 aircraft as well as 5 ground control stations, 5 ground data terminals, 1 satellite communication ground data terminal, 12 air data terminals/air data relays, 6 airborne satellite communication terminals, 2 tactical automatic take-off and landing systems, 2 portable ground control stations, 2 portable ground data terminals, and associated ground support equipment. The Army expects to buy 1 developmental system with 17 aircraft and 11 complete production systems with a total of 132 production aircraft through 2015. However, the Army has not yet decided on the number of systems it might buy beyond that date. The Army is employing an evolutionary acquisition strategy to produce Warrior. The Army expects the current Warrior program of record to provide for immediate warfighting needs and plans to build on the capabilities of this increment as evolving technology allows.

The Army has an operational requirement, approved by the Joint Requirements Oversight Council, for an unmanned aircraft system dedicated to direct operational control by Army field commanders. The Army has determined that the Warrior was the best option available to meet this operational requirement. Army program officials believe that the Predator is operationally and technically mismatched with Army needs, and the Army expects Warrior to offer key technical features that will better meet Army operational needs than Predator A.

According to the Army, the Predator is operationally mismatched with its division-level needs. Army program officials noted that one of the Army's current operational difficulties with Predator is that frontline commanders cannot directly task the system for support during tactical engagements. Rather, Predator control is allocated to Theater and Joint Task Force Commands, and the system's mission is to satisfy strategic intelligence, reconnaissance, and surveillance needs as well as joint needs. Army programmatic and requirements documents maintain that Army division commanders in the field need direct control of a tactical unmanned aircraft asset capable of satisfying operational requirements for dedicated intelligence, surveillance, and reconnaissance, communications relay, teaming with other Army assets, and target acquisition and attack. Army program officials also indicated that Predator's time is apportioned among various users and that the Army typically does not receive a large portion of that time.
According to Warrior program documents, the Army has historically been able to draw only limited operational support from theater assets such as Predator. For example, a program office briefing noted that overall Iraq theater-level support was neither consistent nor responsive to Army needs and that division-level support was often denied or cancelled entirely. The briefing also said that the shortfall was expected to continue, even with the addition of more Predators and Global Hawks.

Army program officials also told us that they expect Warrior to enhance overall force capability in ways that Predator cannot. Specifically, the Army expects Warrior to support teaming with Army aviation assets and to aid these assets in conducting missions that commanders were previously reluctant to task to manned platforms. Under this teaming concept, manned assets, including the Apache helicopter, Army Airspace Command and Control system, and Aerial Common Sensor, would work jointly with Warrior to enhance target acquisition and attack capabilities. The Army plans for the manned platforms not only to receive data and video communications from Warrior but also to control its payloads and flight. The Army also plans to configure Warrior for interoperability with the Army One System Ground Control Station, an Army-wide common ground control network for unmanned aircraft systems. According to Army documents, Warrior's incorporation into this network will better support the Army ground commander by allowing control of Warrior aircraft to be handed off among ground stations, provide better battlefield coverage for Joint Forces, and ensure common operator training among unmanned aircraft systems, including the Army's Warrior, Shadow, and Hunter and the Marine Corps' unmanned aircraft systems. Additionally, Army program officials pointed out that Warrior will be physically controlled by an enlisted soldier deployed in the theater where Warrior is being used. They contrast this with Predator, which is typically flown from a location within the continental United States by a pilot trained to fly manned aircraft.

The Army believes that the Warrior design will offer key technical features to address Army operational requirements and maintains that these features will better meet its operational needs than those found on Predator A. The technical features include: multi-role tactical common data link, ethernet, heavy fuel engine, automatic take-off and landing system, more weapons, interoperability with the Army One System Ground Control Station, and dual-redundant avionics. Table 1 shows the respective purpose of each technical feature, describes whether a particular feature is planned for Warrior and exists now on Predator A, and provides the Army's assessment of the operational impact provided by each feature.

A February 2006 Warrior program office comparison of costs for Warrior and Predator A projects that Warrior's unit cost will be $4.4 million for each aircraft, including its sensors, satellite communications, and Hellfire launchers and associated electronics. The cost comparison indicates that Predator A's unit cost for the same elements is $4.8 million. Although the Air Force's Predator B is planned to be more capable than Warrior in such areas as physical size and payload and weapons capacity, the Warrior program office estimates that it will have a unit cost of $9.0 million—about double the anticipated cost for Warrior.
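As a quick check of the "about double" characterization, the program office's unit-cost figures work out to roughly a factor of two:

$$ \frac{\$9.0\ \text{million (Predator B)}}{\$4.4\ \text{million (Warrior)}} \approx 2.0 $$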
The Army’s cost estimates for the Warrior are, of course, predicated on Army plans for successful development and testing. In terms of technology maturity, design stability, and a realistic schedule, the Army has not yet established a sound, knowledge-based acquisition strategy for Warrior that is consistent with best practices for successful acquisition. Warrior is expected to rely on critical technologies that were not mature at the time of the system development and demonstration contract award in August 2005 and were still not mature in March 2006. Furthermore, it appears that the Army may be unable to complete development of these technologies and achieve overall design stability by the time of the design readiness review scheduled for July 2006. Moreover, the Warrior schedule is very aggressive and overlaps technology development, product development, testing, and production. For example, the Army plans to consider awarding a contract for procurement of long- lead items at a time when it is still unclear if Warrior will be technologically mature and have a stable design. Such concurrency adds more risk, including the potential for costly design changes after production begins, to the already compressed schedule. In the last several years, we have undertaken a best practices body of work on how leading developers in industry and government use a knowledge-based approach to develop high-quality products on time and within budget. A knowledge-based approach to product development employs a process wherein a high level of knowledge about critical facets of a product is achieved at key junctures known as “knowledge points.” This event-driven approach, where each point builds on knowledge attained in the previous point, enables developers to be reasonably certain that their products are more likely to meet established cost, schedule, and performance baselines. A key to such successful product development is an acquisition strategy that matches requirements to resources and includes, among other elements, a high level of technology maturity in the product at the start of system development and demonstration, design maturity at the system’s design readiness review usually held about half- way through the system’s development phase, and adequate time to deliver the product. Achieving a high level of technology maturity at the start of system development is an important indicator that a match has been made between the customer’s requirements and the product developer’s resources in term of knowledge, money, and time. This means that the technologies needed to meet essential requirements—known as “critical technologies”—have been demonstrated to work in their intended environment. Our best practices work has shown that technology readiness levels (TRL) can be used to assess the maturity of individual technologies and that a TRL of 7—demonstration of a technology in an operational environment—is the level that constitutes a low risk for starting a product development program. As identified by the Army, the Warrior program contains four critical technologies: (1) ethernet, (2) multi-role tactical common data link, (3) heavy fuel engine, and (4) automatic take-off and landing system. Two of the four critical technologies—ethernet and data link—were not mature at the time the Army awarded the Warrior system development and demonstration contract in August 2005, and in early 2006 remain immature at TRLs of 4. 
Army program officials told us that they project the ethernet to be at TRL 6 and the data link at TRL 5 or 6 by the time of the design readiness review scheduled for July 2006. However, it is not certain that these two technologies will be as mature at the design readiness review as the Army anticipates. Army program officials indicated that the data link hardware is still in development, and they expect its integration with other Warrior components to be a challenge. As such, they rated the data link integration status as a moderate risk to the Warrior program. While they stated that use of the ethernet has been demonstrated on Army helicopters and should not be a technical integration challenge, the officials also said that neither the ethernet nor the specific data link technologies to be used on Warrior have been integrated previously onto an unmanned aircraft platform. Further, if the technologies are demonstrated at TRL 6 by the design readiness review, they will meet DOD's standard for maturity (demonstration in a relevant environment) but not the best practices maturity standard of TRL 7 (demonstration in an operational environment).

The Army has technologies in place as backups for the data link and ethernet, but these technologies would result in a less capable system than the Army originally planned. According to Army program officials, there are several potential backups for the data link that could be used on the Warrior aircraft. Among the backups they cited is the same data link used on Predator A: analog C-band. However, as we noted in a report last year, C-band is congested and suffers from resulting delays in data transmission and relay, and the Department of Defense has established a goal of moving Predator payloads off this data link. Similarly, the other data link backups cited by the officials either had slower data transmission rates or were also not yet mature. Program officials indicated that the backup for the ethernet is normal ground station control of the on-board communication among such components as the payloads, avionics, and weapons. While they stated that there would be no major performance penalty if the backup were used, they did note that the ethernet would significantly improve the ease of integrating payloads and of integrating with other Army assets that might need control of a Warrior payload to support missions.

The other two critical technologies, the automatic take-off and landing system and the heavy fuel engine, are mature at respective TRLs of 7 and 9. Nevertheless, some program risk is associated with these technologies as well. The contractor has never fielded an automatic take-off and landing component on an unmanned aircraft system. Army program officials told us that they are confident about the take-off and landing system because a similar landing system has been fielded on the Shadow unmanned aircraft, but they also indicated that the take-off component has not been fielded on an unmanned aircraft. The officials also expressed confidence in the heavy fuel engine because it is certified by the U.S. Federal Aviation Administration and is in use on civilian manned aircraft. However, like the complete take-off and landing system, it has not previously been integrated onto an unmanned aircraft.

Best practices for successful acquisition call for a program's design stability to be demonstrated by having at least 90 percent of engineering drawings completed and released to manufacturing at the time of the design readiness review.
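Expressed the same way, this drawing-release criterion reduces to a simple share calculation. The sketch below is illustrative only; the names are our own, and the 85 percent figure is the Warrior program office projection discussed next.

```python
# Illustrative sketch only: the best-practices criterion that at least
# 90 percent of engineering drawings be completed and released to
# manufacturing by the design readiness review.

DESIGN_STABILITY_SHARE = 0.90

def design_is_stable(drawings_released, drawings_total):
    """True if the released share of engineering drawings meets the best practice."""
    return drawings_released / drawings_total >= DESIGN_STABILITY_SHARE

# The Warrior program office's own projection of 85 percent at the
# July 2006 design readiness review would fall short of the criterion:
print(design_is_stable(85, 100))  # -> False
```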
If a product’s design is not stable as demonstrated by meeting this best practice, the product may not meet customer requirements and cost and schedule targets. For example, as we reported previously, the Army’s Shadow unmanned aircraft system did not meet best practices criteria because it had only 67 percent of its design drawings completed when the system entered low-rate production. Subsequent testing revealed examples of design immaturity, especially relating to system reliability, and ultimately the Army delayed Shadow’s full-rate production by about 6 months. The Warrior program also faces increased risk if design drawings do not meet standards for best acquisition practices. The Warrior program office projects that Warrior’s design will be stable and that 85 percent of drawings will have been completed and released to manufacturing by the time of the design readiness review in July 2006. However, it seems uncertain whether the Warrior program will meet this projection because percentages of drawings complete for some sub-components were still quite low in early 2006 and, in some cases, have declined since the system development and demonstration contract award. For example, according to an Army program official, the percentage of completed design drawings for the aircraft and ground control equipment dropped after contract award because the Army made modifications to the planned aircraft and also decided that it needed a larger transport vehicle for the Warrior’s ground control equipment. The Warrior program appears driven largely by schedule rather than the attainment of event-driven knowledge points that would separate technology development from product development. The latter approach is characteristic of both best practices and DOD’s own acquisition policy. Warrior’s schedule is compressed and aggressive and includes concurrency among technology development, product development, testing, and production. Concurrency—the overlapping of technology and product development, testing, and production schedules—is risky because it can lead to design changes that can be costly and delay delivery of a useable capability to the warfighter if testing shows design changes are necessary to achieve expected system performance. As shown in figure 2, the Warrior schedule overlaps technology development, product development, testing, and production. The following examples highlight some of the concurrency issues within the Warrior program: Thirty-two months have been allotted from the system development and demonstration contract award in August 2005 to the low-rate production decision in April 2008. Out of that, 10 months—from July 2006 to May 2007—are set aside for integrating system components (including all four critical technologies) into the aircraft. Two of these technologies are not yet mature (as of early 2006); none of the specific technologies as planned to be used on Warrior have previously been fully integrated onto an unmanned aircraft. The Army plans to continue integration through May 2007 would seem to undermine the design stability expected to be achieved at the July 2006 design readiness review. Ideally, system integration is complete by that time. Delivery of 17 developmental aircraft is to take place within a 12-month period from April 2007 to April 2008, and the Army plans for them to undergo developmental testing as they are delivered. 
It is unclear whether all components will be fully integrated for this testing, but the results of some tests should be available when the Army considers approval of long-lead items for the first lot of low-rate initial production in August 2007. The Army is requesting about $31 million in fiscal 2007 to procure long-lead items, including items associated with the automatic take-off and landing system, heavy fuel engine assembly, and ground control. Prior to the planned approval of the first lot in fiscal 2008, the developmental aircraft will be evaluated in a limited user test. The Warrior program office acknowledges that the schedule is high-risk. Additionally, according to Army program officials, both the program office and contractor recognize that there are areas of moderate to high risk within the program, including integration of the tactical common data link as well as timely availability of a modified Hellfire missile and synthetic aperture radar used for visibility in poor atmospheric conditions. Army program officials told us that they are trying to manage Warrior as more of a knowledge-based, event-driven rather than schedule-driven program. As an example, they stated that the contractor is currently building two off- contract aircraft to help mitigate risk by proving out design, development, and manufacturing. However, they also told us that these two aircraft would not include the tactical common data link, Hellfire missile, synthetic aperture radar, or satellite communications used for relay purposes. They noted that some of these items are still in development so are not expected to be available, but they do plan for the two aircraft to have the ethernet, heavy fuel engine, and automatic take-off and landing system. In concept, the Army has determined that the Warrior will meet its operational requirements better than available alternatives such as the Predator. In practice, however, the Warrior might very well encounter cost, schedule, and performance problems that would hinder it from attaining the Army’s goals. Half of its critical technologies are not yet mature, and its design is not yet stable. Compounding this, its aggressive schedule features extensive concurrency among technology development and demonstration, design integration, system demonstration and test, and production, leaving little time to resolve technology maturity and design stability issues by testing. If the Warrior program continues forward prior to attaining adequate technology and design, it may well produce under- performing Warrior aircraft that will not meet program specifications. The program may then experience delays in schedule and increased costs. The next key program event with significant financial implications is the scheduled approval of long-lead items for the initial lot of Warrior low-rate initial production in August 2007. That will be the first use of procurement funding for Warrior. We believe that is a key point at which the Army needs to demonstrate that the Warrior program is knowledge-based and better aligned to meet program goals within available resources than it currently appears. We recommend that the Army not approve long-lead items for Warrior low-rate initial production until it can clearly demonstrate that the program is proceeding based on accumulated knowledge and not a predetermined schedule. 
In particular, we recommend that, prior to approving the Warrior long-lead items for low-rate initial production, the Secretary of the Army require that

- critical Warrior technologies be fully mature and demonstrated;
- Warrior design integration be complete and at least 90 percent of design drawings be completed and released to manufacturing; and
- fully integrated Warrior developmental aircraft be fabricated and involved in developmental testing.

DOD provided us with written comments on a draft of this report; the comments are reprinted in appendix I. DOD concurred with one part of our recommendation but not with the other two parts. DOD also provided technical comments, which we incorporated where appropriate.

DOD concurred with the part of our recommendation that it should seek to have at least 90 percent of design drawings completed and released to manufacturing prior to procuring long-lead items for Warrior's low-rate initial production. However, DOD also said that the decision to procure long-lead items will not be based solely on the percentage of drawings completed but also on the schedule impact of unreleased drawings.

DOD did not concur with the rest of our recommendation that, prior to approval of long-lead items for Warrior's low-rate initial production, the Secretary of the Army needed to ensure that (a) critical Warrior technologies are fully mature and demonstrated and (b) fully integrated Warrior developmental aircraft are fabricated and involved in developmental testing. Although DOD agreed that two critical technologies are less mature than the others within the Warrior system, it also stated that these technologies are at the correct levels to proceed with integration. However, the Warrior program is nearing the end of integration and is about to begin system demonstration, signified by the July 2006 design readiness review. At that review, the design is to be set to guide the building of developmental aircraft for testing; these developmental aircraft will be used to demonstrate the design in the latter half of system development and demonstration. While DOD stated that risk mitigation steps are in place, including possible use of back-up technologies if either of the two critical technologies is not ready for integration, the decisions on whether to use back-up technologies in the design would ideally be made by the design readiness review. Even if the two critical technologies mature by that point, they would still have to be integrated into the design, as would the back-up technologies if DOD chose to use those instead. To the extent that technology maturation and integration extend beyond the design readiness review, the program will incur the risk of integrating the design at the same time it is attempting to build developmental aircraft to demonstrate the design. Our recommendation to make the technology decision before committing to long-lead items provides a reasonable precaution against letting the technology risks proceed further into the demonstration of the developmental aircraft and into the purchase of production items. Making the technology decision as early as possible is particularly important given that the program schedule allows no more than a year to demonstrate the design with the developmental aircraft before committing to production.
Our past work has shown that increased costs and schedule slippages may accrue to programs that are still maturing technologies well into system development, when they should be focused on stabilizing system design and preparing for production.

With regard to the part of our recommendation that fully integrated developmental aircraft be fabricated and involved in developmental testing prior to approval of long-lead items, DOD indicated that modeling and simulation, block upgrades, early operational deployments, and early testing will enable the department to mitigate design and performance risks while remaining on schedule. While we agree that these activities help reduce risk, the most effective way to reduce risk is to verify the design through testing of fully integrated developmental aircraft before committing to production. Our recommendation underscores the value of conducting such testing, which can still be done if technology decisions are made early. Our work over the past several years has shown that a knowledge-based acquisition strategy consistent with best practices can lead to successful outcomes. Conversely, proceeding without mature technologies and a stable design can lead to costly design changes after production is underway and can negatively affect funding for other DOD programs, ultimately affecting DOD's ability to respond to other warfighter needs.

To address the first objective, identifying the requirements that led to the Army's decision to acquire Warrior, we reviewed Army operational requirements, acquisition strategy, and other programmatic documents and briefings. We did not assess the validity of the Army's requirements for Warrior. We also reviewed the process the Army used in selecting Warrior. In comparing Warrior to existing unmanned systems in the inventory, we limited our review to comparable medium-altitude systems within the military services. To assess differences in operational capabilities for Warrior and Predator, we reviewed operations-related documents for Predator A and B. We also reviewed critical technologies as well as other key technical features of the respective systems that highlighted differences in Warrior and Predator A capabilities.

To address the second objective, assessing whether the Army established a sound acquisition strategy for Warrior, we reviewed planning, budget, and programmatic documents. We also used GAO's "Methodology for Assessing Risks on Major Weapon System Programs" to assess the Army's acquisition strategy against best practices criteria. The methodology is derived from the best practices and experiences of leading commercial firms and successful defense acquisition programs. We also used this methodology to review risks within the Warrior program, but we did not assess all of the risk areas the Army and the Warrior contractor identified within the program. Instead, we focused on those risk areas that seemed most critical to the overall soundness of the Army's acquisition strategy.

To achieve both objectives, we interviewed Army officials and obtained their views on the Army's requirements and the soundness of the Army's acquisition strategy. We also incorporated information on Warrior from GAO's recent Assessments of Major Weapon Programs. We performed our review from September 2005 to April 2006 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, the Secretary of the Air Force, and interested congressional committees. We will also make copies available to others upon request. Additionally, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-7773. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were William R. Graveline, Tana Davis, and Beverly Breen.
Through 2011, the Department of Defense (DOD) plans to spend $20 billion on unmanned aircraft systems, including the Army's "Warrior." Because of congressional concerns that some systems have been more costly and taken more time to produce than predicted, GAO reviewed the Warrior program. This report (1) describes the Army's requirements underlying its decision to acquire Warrior instead of existing systems such as the Air Force's Predator and (2) assesses whether the Army has established a sound acquisition strategy for the Warrior program.

The Army determined that the Warrior is its best option for an unmanned aircraft system directly controlled by field commanders, compared with existing systems such as the Air Force's Predator A. The Army believes that using the Warrior will improve force capability through teaming with other Army assets, using common ground control equipment, and allowing soldiers in the field to operate it. Warrior's key technical features include a heavy fuel engine; an automatic take-off and landing system; a faster tactical common data link; ethernet; greater carrying capacity for weapons; and avionics with enhanced reliability. The Army projects that Warrior will offer some cost savings over Predator A.

In terms of technology maturity, design stability, and a realistic schedule, the Army has not yet established a sound, knowledge-based acquisition strategy for Warrior. Two of the Warrior's four critical technologies were immature at the contract award for system development and demonstration and remained so in early 2006, and the mature technologies still carry some risk because neither has previously been fully integrated onto an unmanned aircraft. The Warrior schedule allows 32 months from award of the development and demonstration contract to the initial production decision. Achieving this schedule will require concurrency of technology and product development, testing, and production. The Army plans to fund procurement of long-lead items in August 2007, soon after developmental aircraft first become available for testing. Experience shows that such concurrencies can result in design changes during production that can prevent delivery of a system within projected cost and schedule. The Warrior program faces these same risks.
Congress passed the Trafficking Victims Protection Act of 2000 to combat trafficking in persons. As the centerpiece of U.S. antitrafficking efforts, the TVPA advanced a three-pronged, victim-centered approach—prevention of trafficking, protection and assistance for victims of trafficking, and prosecution and punishment of traffickers. Among its provisions, the TVPA addressed identified gaps in existing law and enhanced the tools available to pursue these crimes. Specifically, the act criminalized the obtaining or maintaining of persons for commercial sexual activity through force, fraud, or coercion for those 18 or over (no such showing is required for those under 18) and the use of certain kinds of force or coercion to provide or obtain persons for any labor or services (e.g., work in farms, factories, and households). It also included nonviolent coercion and threats of harm to third persons in federal involuntary servitude laws; made attempted trafficking crimes punishable; criminalized the holding of actual or purported identity documents in the course of committing, or with the intent to commit, any trafficking crime; and increased the maximum penalty for slavery and involuntary servitude offenses from 10 to 20 years, or to a life sentence if the offense involved factors such as death, kidnapping, or aggravated sexual abuse. In addition, the TVPA required restitution for victims of trafficking and forfeiture of traffickers' assets and provided legal status and special benefits to aliens certified as trafficking victims in the United States who are willing to assist law enforcement efforts against traffickers. (App. II identifies specific statutory provisions relevant to investigating and prosecuting trafficking in persons crimes.)

Responsibilities for pursuing trafficking crimes fall to multiple federal agencies, including the FBI and ICE, which investigate these crimes; CRT/CS, CEOS, and U.S. Attorneys' Offices, which prosecute traffickers; and other agencies within DHS and DOJ and components of DOL and DOS that support U.S. efforts to investigate and prosecute trafficking in persons. Figure 1 depicts these key agencies and their respective responsibilities related to the investigation and prosecution of trafficking in persons crimes.

In addition, to coordinate the implementation of the TVPA, the act directed the President to establish an Interagency Task Force to Monitor and Combat Trafficking in Persons and authorized the Secretary of State to create the Office to Monitor and Combat Trafficking in Persons to assist the task force. In February 2002, the President issued an executive order creating this cabinet-level task force and then in December issued National Security Presidential Directive 22, which identified trafficking in persons as an important national security issue and directed federal agencies to strengthen their collective efforts, capabilities, and coordination to support the goal of abolishing human trafficking. Subsequently, the 2003 TVPA reauthorization statutorily established the Senior Policy Operating Group (SPOG) to address interagency policy, program, and planning issues regarding the TVPA's implementation. In addition, the Human Smuggling and Trafficking Center (HSTC), which is staffed by detailees from DHS, DOJ, DOS, and the intelligence community, among others, collects and disseminates intelligence information to build a comprehensive picture of human trafficking.
Pursuing trafficking investigations and prosecutions also requires the support of state and local law enforcement, who may be in the best position to find trafficking victims because of their familiarity with their respective jurisdictions, and of nongovernmental organizations, from whom victims may more readily seek assistance. To leverage these resources in support of federal efforts to investigate and prosecute trafficking in persons, DOJ designed, developed, and instituted a task force approach that it presented during the first National Training Conference on Human Trafficking: Rescuing Women and Children from Slavery, held in Tampa, Florida, in July 2004. DOJ invited 21 teams of 20 federal, state, and local law enforcement officials and nongovernmental service providers from communities that it believed to have potential trafficking problems to attend the conference. After the conference, the teams were expected to work together on human trafficking in their respective communities. To implement the approach, BJA, the DOJ component responsible for supporting local, state, and tribal efforts to achieve safer communities, developed and implemented a competitive grant program for human trafficking law enforcement task forces. These grants were to be awarded to state or local police agencies that work with the local U.S. Attorney's Office, federal law enforcement entities, and nongovernmental organizations that may come into contact with victims of trafficking.

In addition, in spring 2003, the FBI's Crimes Against Children Unit, DOJ's Child Exploitation and Obscenity Section, and the National Center for Missing and Exploited Children launched the Innocence Lost National Initiative in 14 U.S. cities where FBI field offices had identified a high incidence of trafficking of U.S. children for commercial sex.

As trafficking in persons is a transnational crime, federal agencies may need to obtain information and assistance directly from individual foreign governments and through international law enforcement organizations in order to investigate and prosecute trafficking in persons cases in the United States. Multilateral and extradition treaties provide the authority for U.S. investigative and prosecutorial agencies to request information and assistance on criminal cases, including trafficking in persons, from approximately 175 individual foreign governments. Working through ICE and FBI personnel stationed at U.S. embassies, U.S. investigative and prosecutorial agencies have obtained a broad spectrum of assistance from individual foreign governments and, with such assistance, have successfully prosecuted traffickers. This assistance has included obtaining documentary evidence and corroborating witness testimony, protecting U.S. trafficking victims' family members in a foreign country, apprehending fugitive traffickers, and extraditing defendants. In addition, U.S. agencies may obtain information through the International Criminal Police Organization (Interpol), which serves as a conduit for a cooperative exchange of information on criminal activities among its 186 member countries.

Subsequent to the enactment of the TVPA, federal agencies reported 139 prosecutions and hundreds of investigations of trafficking for commercial sex or labor as of June 2007.
To support federal efforts to identify victims and investigate and prosecute these crimes, agencies (1) provided training to agency personnel to raise awareness and increase the skills needed to identify victims and pursue trafficking investigations and prosecutions, (2) carried out outreach and training to raise public awareness of and skills in identifying trafficking victims, and (3) engaged state and local knowledge and resources by funding state and local trafficking in persons task forces and developing and disseminating a model state law. In addition, to address their responsibilities related to trafficking in persons crimes, some agencies have established special units, agency-level goals, or plans or strategies. Federal investigative and prosecutorial agencies have generally drawn from existing resources to carry out these efforts (app. III provides information on resources).

With the enhanced tools available to federal investigators and prosecutors as a result of the enactment of the TVPA, federal agencies reported a general increase in the number of prosecutions and investigations of trafficking in persons crimes. These data are an indicator of the level of agency effort in pursuit of these crimes since fiscal year 2001, although they are limited by a number of factors. Trafficking crimes and their victims are hidden and not readily identifiable. Traffickers may be charged with or convicted of crimes other than trafficking (e.g., kidnapping, immigration violations, or money laundering) for strategic or technical reasons. Also, agency data systems, which are primarily case management systems, may not allow for the extraction of trafficking data per se. In addition, the availability of individual agencies' data may be limited by factors specific to that agency; for example, ICE was only established in 2003. Moreover, agency data are not comparable across agencies, nor can data on investigations be linked to data on prosecutions. As a result of these limitations, the actual number of investigations and prosecutions that have led to the incapacitation of traffickers may be greater than the numbers reported by federal agencies.

CRT/CS reported 139 prosecutions from fiscal year 2001 to June 14, 2007, as compared with 19 cases for fiscal years 1995 to 2000. These cases included 39 defined by CRT/CS as labor trafficking and 100 as trafficking for commercial sexual activity. According to CRT/CS officials, the number of prosecutions varies in any given year because of differences in the complexity of the cases. (See app. IV for illustrations of the complexity of cases.)

FBI and ICE provided data on the numbers of trafficking cases opened. The FBI's Civil Rights Unit reported opening a total of 751 trafficking in persons cases between fiscal year 2001 and April 5, 2007. However, these data do not include investigations involving trafficking that are classified as other types of crime, for example, alien smuggling cases that also involve trafficking in persons. ICE reported opening a total of 899 trafficking in persons cases for fiscal year 2005 through May 31, 2007. Both FBI and ICE data may include cases involving investigations handled jointly by the two agencies. In addition, as part of the Innocence Lost National Initiative, the FBI's Crimes Against Children Unit reported 327 cases opened on trafficking of U.S. children for commercial sex between fiscal year 2004 and June 5, 2007.
Appendix III presents additional data related to trafficking in persons investigations and prosecutions, including arrests, indictments, convictions, and, where appropriate, restitution to the victims as required under the TVPA.

National Security Presidential Directive 22 directed federal departments and agencies to ensure that all appropriate offices within their jurisdiction were fully trained to carry out their specific responsibilities to combat trafficking, including interagency cooperation and coordination on the investigation and prosecution of trafficking. FBI, ICE, CRT/CS, CEOS, and DOL all reported taking steps to ensure that their personnel received appropriate training, using a variety of means to do so, including the following:

- training for new agents through the ICE and FBI training academies;
- a Web-based training module, available to ICE agents through ICE's intranet;
- guidance to ICE domestic and international field offices about conducting outreach, training, and coalition building;
- training conference sessions by the FBI Civil Rights Unit and information on trafficking in the FBI's civil rights reference guide for FBI agents;
- training of U.S. Attorneys and other prosecutors, at the National Advocacy Center, on trafficking in persons and trafficking of U.S. children for commercial sex;
- guidance to all U.S. Attorneys' Offices about prosecuting under the TVPA, a tool kit for prosecutors, and a law guide, developed by CRT/CS;
- training for victim-witness coordinators, who are the federal government's liaisons to victims of federal crimes, and updating of the Attorney General's victim/witness guidelines to include trafficking in persons;
- a nationwide televised human trafficking training initiative on the Justice Television Network (JTN), initiated by CRT/CS in 2006 and continuing in 2007, transmitted from the National Advocacy Center to all 94 U.S. Attorneys' Offices (these offices and BJA-funded state and local human trafficking task forces hosted members of the law enforcement and nongovernmental organization communities to view these programs); and
- a week-long seminar on investigating and prosecuting cases involving child sex trafficking, developed by CEOS, the FBI, and the National Center for Missing and Exploited Children for the joint training of state and federal law enforcement agencies, prosecutors, and social service providers in targeted cities. This seminar is given multiple times each year.

In addition, to help identify victims of trafficking and support federal efforts to pursue trafficking investigations, agencies have used a variety of means to extend outreach and training to state and local law enforcement, nongovernmental organizations, and the general public. ICE developed laminated wallet-size cards, in five languages, identifying the differences between human smuggling and human trafficking as well as red-flag indicators of human trafficking, and also developed a police roll call/muster DVD describing human trafficking. CRT/CS publishes a newsletter on trafficking, available on the DOJ Web site, and, in collaboration with other federal agencies and DOJ components, prepared and published the Report on Activities to Combat Trafficking: Fiscal Years 2001-2005. DOJ's Office of Legal Policy prepares the Attorney General's annual report to Congress on U.S. efforts to combat trafficking, as required by the TVPA of 2003, and the annual assessment of those efforts.
DOJ also established, and subsequently permanently funded, a toll-free Trafficking in Persons and Worker Exploitation Complaint Line in February 2000 to provide a means for victims, witnesses, and others to report potential trafficking matters to law enforcement, get information, and obtain referrals to services in their area. In 2004 and 2006, federal agencies sponsored and participated in national conferences on human trafficking in Tampa, Florida, and New Orleans, Louisiana, respectively. In 2006, CRT/CS, with the Attorney General, produced the film Give Us Freedom: Liberty and Justice for Victims of Modern Day Slavery.

To further U.S. investigations and prosecutions of trafficking in persons crimes, federal agencies have also fostered antitrafficking efforts at the state and local levels. For example, federal agencies have sought to engage state and local law enforcement and nongovernmental organizations by funding the establishment of state and local trafficking in persons task forces that bring together local law enforcement, federal law enforcement, a U.S. Attorney, and nongovernmental victim service providers. In addition, to expand antitrafficking law enforcement authority and promote a uniform national legal strategy to combat human trafficking, DOJ developed a model state law, available on the DOJ Web site. According to DOJ, at the time of the model law's initial dissemination in 2004, 4 states—Texas, Florida, Missouri, and Washington—had laws against trafficking in persons. As of June 2007, 31 states had enacted antitrafficking in persons legislation.

National Security Presidential Directive 22 directed all federal agencies to develop and promulgate plans to implement the directive by March 2003. The plans for DOJ, DHS, DOL, and DOS enumerate activities relevant to the investigation and prosecution of trafficking in persons. Additionally, some agencies have taken various steps to address their respective responsibilities related to the investigation and prosecution of trafficking in persons, including establishing special units that focus on trafficking in persons, agency-level goals, or plans or strategies. In doing so, each of these agencies has defined its responsibilities for pursuing trafficking crimes in accordance with its broader agency mission.

Both ICE and CRT/CS have established specialized units focused on trafficking in persons. The ICE Office of Investigations' Human Smuggling and Trafficking Unit, which is led by a unit chief and staffed by program managers who oversee programmatic and operational issues globally and by victim-witness coordinators, oversees ICE's efforts to identify criminal smuggling and trafficking organizations, prioritizes investigations based on risk factors, coordinates field office investigations into those targeted organizations, and coordinates victim assistance through approximately 300 of ICE's collateral-duty victim-witness coordinators. On January 31, 2007, the Attorney General and the Assistant Attorney General for the Civil Rights Division announced the formation of a special Human Trafficking Prosecution (HTP) Unit within CRT/CS.
According to CRT/CS officials, the unit is to continue to play a role in coordinating intra-DOJ and interagency trafficking efforts (e.g., with ICE); develop new strategies to increase human trafficking investigations and prosecutions throughout the nation; enhance DOJ's investigations and prosecutions of trafficking crimes by pursuing cases that are multijurisdictional or involve financial crimes; and continue to engage in training, technical assistance, and outreach initiatives to federal, state, and local law enforcement and nongovernmental organizations.

The primary investigative agencies for trafficking in persons have laid out goals and activities for combating this crime. The top goal of ICE's trafficking in persons efforts—to disrupt and dismantle criminal organizations involved in trafficking, including by gathering intelligence on these organizations—is aligned with the DHS strategic goals of assessing vulnerabilities and mitigating threats to the homeland. ICE's other trafficking goals, seizing the assets of criminal organizations and rescuing and protecting victims of trafficking, follow this top goal. The FBI's Strategic Plan 2004-2009 identifies investigations of trafficking in persons crimes as a rising priority under its responsibility to enforce civil rights protections. In addition, the FBI Civil Rights Unit specifies the strengthening of its intelligence base on trafficking activity as a top priority among its programmatic goals and emphasizes coordination with other law enforcement entities and partnerships with nongovernmental organizations in pursuing trafficking investigations.

Furthermore, both ICE and the FBI have disseminated guidance on handling trafficking cases to their agents in the field. In December 2006, the ICE Director of Investigations disseminated to Special Agents in Charge (SACs) and ICE personnel assigned to U.S. embassies the ICE Office of Investigations' new strategy document for combating trafficking in persons, entitled ICE Trafficking In Persons Strategy, or ICE TIPS. ICE TIPS emphasizes outreach and education on ICE's role in trafficking investigations and its ability to issue Continued Presence, a mechanism for authorizing victims without legal immigration status to remain in the United States; collaboration with other law enforcement entities and nongovernmental service providers, including task force participation; and performance evaluation to focus and refine ICE's efforts. In May 2007, additional guidance from the ICE Office of Investigations and the ICE Office of International Affairs was sent to SACs and ICE personnel assigned to U.S. embassies overseas. The guidance provided direction on outreach, training, coordination, and coalition building and mandated periodic reporting of efforts to ICE headquarters. The FBI's guidance is contained in its Civil Rights Program Reference Guide, its annual Civil Rights Program Plan, and memorandums to the field. The fiscal year 2007 Civil Rights Program Plan provides information similar to that contained in ICE TIPS and encourages working partnerships with other law enforcement entities and nongovernmental service providers, including providing training to these groups.

As the lead prosecutorial agency for trafficking in persons, CRT/CS identified three levels of strategic planning for its trafficking efforts. DOJ's Strategic Plan (Fiscal Years 2003-2008) lays out broad goals and performance measures.
Specifically, CRT/CS's efforts on trafficking in persons fall under goal two (enforce federal laws and represent the rights and interests of the American people) and strategic objective 2.4 (uphold the civil and constitutional rights of all Americans and protect vulnerable members of society). According to the strategy, the Civil Rights Division intends to protect new immigrants to America by, among other things, vigorously prosecuting those who exploit their vulnerability through trafficking in persons, including increasing efforts to combat the criminal trafficking of children and other vulnerable victims through more intensive efforts and interagency coordination. To achieve DOJ's strategic goals and objectives, CRT/CS's fiscal year 2007 internal priorities document lays out activities to be undertaken in three areas: investigation and prosecution; outreach and training; and policy development, including intergovernmental coordination. In addition, DOJ communicates direction and guidance on handling trafficking in persons cases through internal DOJ memorandums between CRT/CS and U.S. Attorneys, including guidance to U.S. Attorneys on how to prosecute trafficking cases; memorandums between CRT/CS and the FBI; and memorandums between DOJ and other federal agencies.

In addition, DOL's Wage and Hour Division has an internal plan that addresses its role in federal interagency trafficking efforts. The plan presents current goals and measures for the division's involvement, as appropriate to its mission, with human trafficking task forces in investigations and in assisting trafficking victims in securing restitution, as well as long-term goals and measures for increasing these efforts.

Recognizing that investigating and prosecuting trafficking cases can be complex and multifaceted activities, federal agencies have taken steps to coordinate their efforts to leverage the expertise and resources required to resolve these crimes. Coordination of investigations and prosecutions has usually occurred as determined by the needs of individual cases and the personal relationships established between law enforcement officials across agencies. However, DOJ and DHS officials acknowledged the need to expand the scope of their efforts to investigate and prosecute trafficking crimes by, for example, undertaking proactive measures to identify trafficking victims and pursuing multijurisdictional and international trafficking in persons investigations and prosecutions. Pursuing such efforts requires more strategic collaboration among agencies, since no one agency can carry out these efforts alone. Our prior work has shown that a strategic framework that includes, at a minimum, a common outcome and mutually reinforcing strategies; agreed-on roles and responsibilities; and compatible policies, procedures, and other means to operate across agency boundaries can help agencies enhance and expand collaboration on issues that are national in scope and cross agency jurisdictions. However, the mechanisms currently in place to facilitate interagency cooperation on human trafficking do not address the greater collaboration needed for this expanded level of effort to investigate and prosecute trafficking crimes.
Establishing such a strategic framework to investigate and prosecute trafficking in persons crimes, developed by federal agencies to address the unique challenges posed by these crimes, could help federal agencies enhance and sustain the collaboration needed to expand their efforts to combat trafficking crimes.

According to DOJ and DHS officials, in practice, agency coordination of investigations and prosecutions of trafficking in persons has occurred on a case-by-case basis. CRT/CS, CEOS, ICE, and FBI officials acknowledged that investigating and prosecuting trafficking in persons crimes made it necessary for federal agencies to work with one another, with state and local law enforcement, who were often the first to discover possible evidence of trafficking, and with nongovernmental organizations that provided assistance to the victims. Federal officials emphasized that they knew whom to call; for example, the victim-witness coordinators in ICE and CRT/CS knew each other, and ICE and FBI investigators knew the names of prosecutors in CRT/CS. ICE and FBI officials explained that they sometimes worked joint investigations or investigated different aspects of a case. For example, in one case, while ICE agents rescued the victims in one location, the FBI was investigating related brothel operations in other cities. Through their detailees to the HSTC, ICE and the FBI may determine whether the two agencies are working on a related case. Agents in the field may also contact their counterparts at other agencies to ascertain whether they are working on a similar case. DOL Wage and Hour Division officials told us that if they identified a potential trafficking situation, they would notify the FBI and the respective U.S. Attorney, and the FBI might take over responsibility for the case, as DOL's Wage and Hour Division does not carry out criminal investigations related to trafficking in persons. In addition, victim-witness coordinators across DOJ and DHS are in regular contact with each other to ensure victim care and services from the point of victim identification through investigation and prosecution.

Investigative and prosecutorial agencies also work with nongovernmental organizations. For example, ICE officials said that they shared information with nongovernmental organization interviewers who helped the investigators determine which potential trafficking victims were actual victims and which were "victim enforcers" who were swept up in the raid but worked for the traffickers. CRT/CS and U.S. Attorneys' Offices prosecute the cases developed by the investigative agencies. In addition, under the auspices of the Innocence Lost National Initiative, FBI investigators from its Crimes Against Children Unit, the National Center for Missing and Exploited Children, and CEOS prosecutors have joined forces with state and local law enforcement through the establishment of formal or ad hoc task forces in 23 cities across the country, a grassroots operation to work on cases of trafficking of U.S. children for commercial sex.

Two noteworthy trafficking cases illustrate the breadth and diversity of coordination and cooperation that occur in pursuit of these crimes. For example, the prosecution of Kil Soo Lee brought together FBI investigators, DOL investigators from the Wage and Hour Division and the Occupational Safety and Health Administration, CRT/CS prosecutors, and some nongovernmental organizations and resulted in the largest trafficking case brought to date.
In a separate case, Gerardo Flores Carreto and Josue Flores Carreto were each sentenced to 50 years in prison; the case involved coordination among ICE, DOJ, international nongovernmental organizations, and the Mexican government. (See app. IV.)

Officials told us that, in addition to such interaction as needs emerge, various law enforcement procedures and protocols are in place to foster coordination. Upon initiating a trafficking in persons investigation, ICE and the FBI notify the local U.S. Attorney to determine if enough evidence exists to pursue a federal trafficking in persons prosecution. Moreover, U.S. Attorneys are required to report civil rights cases, including trafficking in persons cases, to CRT/CS, which then determines whether to accept the U.S. Attorney's staffing recommendation. In addition, DHS, DOJ, and the Department of Health and Human Services signed a memorandum of understanding that lays out the basic responsibilities and functions of the departments as they relate to the certification of victims' eligibility for certain federal benefits. Federal agencies have also developed tools to facilitate interagency coordination, as well as coordination with state and local law enforcement and nongovernmental organizations, in trafficking cases, usually on a case-by-case basis. According to DOJ and DHS officials, training is provided to these stakeholders prior to raids, and operations manuals are prepared for both law enforcement and victim-witness coordinators.

Although federal agencies have successfully coordinated on a case-by-case basis to investigate and prosecute trafficking crimes, officials described their approach to trafficking investigations and prosecutions as usually reactive and acknowledged the need for additional proactive approaches to enhance interagency efforts to investigate and prosecute trafficking crimes. DOJ and DHS senior officials identified the need to expand the scope of efforts, including taking proactive measures to identify trafficking victims (e.g., expanding outreach to additional law enforcement agencies and nongovernmental organizations) and pursuing multijurisdictional and international trafficking in persons investigations and prosecutions. These efforts require more strategic collaboration among agencies, since no one agency has the authority to carry out these efforts alone. However, the current coordinating mechanisms and National Security Presidential Directive 22 do not address the greater collaboration needed for this expanded level of effort, and individual agency plans address only individual agency efforts—none of which is linked to a common governmentwide outcome for the investigation and prosecution of trafficking crimes. Additionally, differing perceptions exist among agencies on leadership and on roles and responsibilities surrounding some of these expanded efforts. As our previous work has shown, a strategic framework that includes agencies working together toward a common outcome with mutually reinforcing strategies, agreed-on roles and responsibilities, and compatible policies and procedures can help enhance and sustain collaboration among federal agencies dealing with issues, such as trafficking in persons, that are national in scope and cross agency jurisdictions.
In light of the unique challenges posed by trafficking in persons investigations and prosecutions, we acknowledge that a framework to address the investigation and prosecution of trafficking crimes needs to be flexible and to incorporate different types of collaborative mechanisms. The agencies involved would determine the specifics of the elements enumerated above, any additional elements to be included in the framework, and the structures for developing and implementing such a framework.

DOJ and DHS officials acknowledged the need to expand the scope of U.S. efforts to combat trafficking crimes by developing proactive approaches to identify trafficking victims (e.g., expanding outreach to non-law enforcement agencies, nongovernmental organizations, and other law enforcement agencies), pursuing multijurisdictional and even international trafficking in persons investigations and prosecutions, and establishing mechanisms for consistent communication and information sharing among agencies.

Because trafficking victims are hidden and difficult to find but are also the primary source of evidence of trafficking crimes, agency officials underscored the need to develop proactive approaches to identify trafficking victims in order to increase investigations and prosecutions. While current efforts to pursue trafficking crimes have drawn on the support of other federal agencies that do not have specific law enforcement functions and have benefited from collaboration between law enforcement and nongovernmental organizations, officials expressed the desire to expand these efforts. For example, CRT/CS officials told us that they would like to prosecute more labor trafficking cases, but these situations were difficult to identify. While CRT/CS has worked with DOL's Wage and Hour Division on trafficking cases, such as the Kil Soo Lee prosecution, CRT/CS officials hoped to work with DOL to proactively identify potential trafficking situations, possibly during Wage and Hour's self-initiated investigations of low-wage work sites. However, DOL officials said that to do so, the agencies would need to develop an approach that included regional planning and further training of DOL's program managers. This level of collaboration and planning could benefit from mutually reinforcing strategies or a joint strategy to identify additional victims of labor trafficking.

In addition, the main goal of ICE's outreach efforts to state and local law enforcement, nongovernmental organizations, and foreign partners is identifying victims. While several jurisdictions across the country are currently combating trafficking crimes in their communities, DOJ and DHS officials recognized the need to expand their outreach and training efforts to other law enforcement and non-law enforcement entities to identify victims and increase the number of investigations and prosecutions. Currently, coordination among agencies on training and outreach is largely episodic. However, developing collaborative outreach and training strategies that incorporate state and local law enforcement, nongovernmental organizations, and foreign partners, among others, could allow agencies to expand their efforts while making the best use of agencies' resources.

DOJ officials also told us that they hoped to expand federal antitrafficking efforts by pursuing multijurisdictional and international investigations and prosecutions.
For example, CRT/CS officials told us that they were striving to enhance investigations and prosecutions of significant trafficking in persons and slavery cases, such as multijurisdictional cases and those involving financial crimes. To do so, CRT/CS has engaged in training activities for federal prosecutors across the country to institutionalize ways to combat trafficking and to allow CRT/CS attorneys to focus on multijurisdictional cases. However, CRT/CS has also been actively involved in the training of investigators, task forces, and foreign officials, as well as in carrying out its responsibilities to prosecute trafficking cases. Folding CRT/CS's training and outreach efforts into a broader and more collaborative training and outreach strategy could disperse responsibility for training to other federal partners that are also engaging in training and outreach efforts.

DOJ officials also identified the need to establish mechanisms for consistent communication and information sharing. While FBI officials said that case-by-case coordination between some field offices on individual trafficking cases was good, they also noted a lack of consistency in information sharing and communication among field offices. DOJ officials also cited the need to maintain information in a central repository to enhance tracking of the movements of traffickers and victims. For example, CEOS identified the lack of such a repository of information on trafficking and of an institutionalized policy on information sharing as factors that can inhibit trafficking investigations. Working collaboratively with counterparts in the field and across agencies at the national level to establish mechanisms for consistent communication and information sharing could be incorporated into a strategic framework.

Additionally, FBI and ICE officials pointed to the need to trace information about trafficking organizations back to their countries of origin and to identify trafficking patterns in order to enhance efforts to dismantle trafficking organizations. However, HSTC officials told us that the intelligence community is not collecting as much information on trafficking as it is on other issues, such as human smuggling. HSTC officials also said that if HSTC could increase its analytical capability, it would be able to expand its current collection and dissemination of intelligence information on trafficking, develop more products, and in so doing provide a more valuable resource to law enforcement and the intelligence community, among others. CRT/CS officials told us they are working with the FBI to obtain information that would help identify trafficking networks. With intelligence information from traditional intelligence sources limited, agencies could work toward achieving their goal of tracking trafficking patterns and dismantling trafficking organizations by establishing collaborative practices to obtain the information needed to support proactive investigations of trafficking crimes.

A strategic framework could promote a collaborative effort to define and articulate a common federal outcome for investigations and prosecutions of trafficking crimes. Agencies have identified agency-level goals and proactive approaches to expand their current efforts to combat trafficking crimes, but none of these approaches is linked to a governmentwide outcome defined by the key federal agencies that investigate and prosecute trafficking crimes.
Our previous work on effective interagency collaboration has demonstrated that a clearly defined governmentwide outcome could help align specific goals across agencies. While National Security Presidential Directive 22 instructed federal agencies to develop and promulgate plans to implement the directive, agencies primarily developed lists of activities reflecting individual agency efforts, and the plans, taken together, did not cut across agency boundaries and lead toward a common governmentwide outcome. As we have illustrated in our work related to national strategies to combat terrorism, a governmentwide outcome could hinge on an ideal "end state" followed by a logical hierarchy of major goals, subordinate objectives, and specific activities to achieve results. Gathering intelligence on traffickers, dismantling trafficking rings, increasing prosecutions, and rescuing victims can be activities linked to broader objectives to achieve a common outcome for investigations and prosecutions of trafficking crimes, but at this time agencies have not collectively articulated what that outcome might be. The scope of U.S. governmentwide efforts to investigate and prosecute trafficking crimes can be linked to a common outcome to provide an accountability framework.

Our prior work has shown that without a clearly defined outcome, it may be difficult to overcome significant differences in agency missions, cultures, and established ways of doing business. For example, pursuing trafficking investigations and prosecutions involves collaboration between law enforcement and nongovernmental organizations that typically do not work together. Identifying a unified federal outcome for investigations and prosecutions of trafficking crimes could help align the goals and sustain the support of these agencies and organizations, thereby enhancing investigations and prosecutions.

Our work has shown that after identifying a common outcome, collaborating agencies need to establish strategies that work in concert with those of their partners or are joint in nature. Such strategies help align the partner agencies' activities, core processes, and resources to accomplish the common outcome. Some individual agencies have developed their own strategies to combat trafficking and to implement proactive approaches that would expand current activities, but these strategies have not been linked to a common governmentwide outcome for investigations and prosecutions of trafficking crimes. Since no single agency is undertaking these initiatives alone, mutually reinforcing strategies could help agencies better align their activities and resources to accomplish a common outcome.

As federal agencies expand their approaches to investigating and prosecuting trafficking crimes, a strategic framework could assist in clarifying respective roles and responsibilities. Such a framework could be important to ensure that agencies understand who will do what and to help reconcile the differing perceptions of leadership that exist among the agencies on combating trafficking crimes. Our prior work has shown that agencies can generally enhance their collaboration by working together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. Nonetheless, existing interagency collaborative mechanisms are not positioned to support the greater collaboration needed to coordinate expanded U.S. efforts to investigate and prosecute trafficking in persons.
The Interagency Task Force to Monitor and Combat Trafficking, the SPOG, and SPOG working groups facilitate governmentwide policy on human trafficking. However, operational coordination on investigations and prosecutions of trafficking in persons rests with criminal justice personnel and currently occurs on a case-by-case basis. HSTC is an information clearinghouse and facilitates information sharing among investigative and prosecutorial agencies working on trafficking. HSTC is also available to help agencies avoid duplication of effort by querying an array of participating agency databases to determine whether more than one agency has an ongoing interest or open investigation on a specific target. The Trafficking in Persons and Worker Exploitation Task Force was involved in both policy and operations, but at the time of our review, DOL told us it understood that the task force was no longer functioning, and CRT/CS officials said they were in the process of reinvigorating DOJ's relationship with DOL on this issue.

Furthermore, developing a strategic framework could help reconcile differing perceptions of who is in charge of coordinating antitrafficking investigations and prosecutions. Specifically, CRT/CS and the investigative agencies perceived the interagency leadership role in pursuing trafficking crimes differently. CRT/CS officials told us that its newly formed Human Trafficking Prosecution Unit was positioned to take the leadership role in coordinating trafficking efforts across the federal agencies because investigative agencies historically work with CRT/CS prosecutors to complete cases. While FBI officials acknowledged CRT/CS as the leader on trafficking in persons, they also said that leadership needs to cut across agencies, since no one agency carries out trafficking cases alone. ICE officials said that agencies are all equal partners in the effort to combat trafficking and that while CRT/CS may take the lead on prosecutions, the investigative agencies each take the lead on their own investigations, or work together on joint investigations, until cases are handed to prosecutors. ICE officials also did not perceive the need for leadership beyond the SPOG for U.S. policy on trafficking but acknowledged that the SPOG did not have oversight of investigations and prosecutions because of law enforcement sensitive matters. ICE officials suggested that should a problem with investigations and prosecutions arise, the SPOG could create a subcommittee to deal with these issues. However, according to DOJ officials, because investigative and prosecutorial agencies are governed at the operational level by confidentiality rules (e.g., grand jury secrecy) and limitations on sharing law enforcement sensitive information, the SPOG and its working groups were not appropriate vehicles for leading collaborative operational efforts to investigate and prosecute trafficking in persons. Since no one agency will be able on its own to accomplish the steps identified to further U.S. efforts to combat trafficking, collaboration among agencies will need to go beyond the current case-by-case coordination and differing views on leadership.

Our prior work has shown that a strategic framework could also foster efforts to devise compatible standards, policies, procedures, and information systems to be used in collaborative efforts on a range of topics across federal agencies.
As agencies move forward in their efforts to expand current activities to investigate and prosecute trafficking crimes, such as tracking trafficking cases or addressing the limitations posed on current coordinating mechanisms, agencies could work jointly and consult with other stakeholders to determine what information on trafficking could be collected and shared as policies and procedures for developing information systems are being planned and created. Additionally, agencies working together to establish policies and procedures to provide guidance on how to achieve maximum coordination and cooperation across agencies to investigate and prosecute trafficking crimes, including the exchange of information, would address current inconsistencies that exist among the field offices of federal investigators. To help coordinate U.S. efforts to identify trafficking victims, get needed services to victims of trafficking, and investigate and prosecute trafficking in persons crimes in communities across the country, BJA established a program to fund state and local law enforcement human trafficking task forces. Each task force was to develop a strategy to raise public awareness, identify more victims, and establish protocols among government agencies and service providers and to meet related performance measures. Since 2004, BJA has awarded grants of up to $450,000 for a 3-year period to each of 42 task forces. BJA reported using its general funds to support some technical assistance to the task forces (e.g., sponsoring the development of a train-the-trainer curriculum on human trafficking and funding a national conference) and taking further steps to help respond to task force technical assistance needs. However, task force members we contacted and DOJ officials pointed to continued and additional technical assistance needs. BJA does not have a technical assistance plan for its human trafficking grant program. Our previous work has shown the need for agencies that administer grants or funding to state and local entities to implement a plan that focuses technical assistance and training efforts on areas of greatest need. BJA officials told us that they recognized the need for a technical assistance plan for its human trafficking initiative and had begun to prepare a plan to provide additional and proactive technical assistance to the task forces. In 2004, BJA established a program to fund state and local law enforcement human trafficking task forces to help support U.S. efforts to identify trafficking victims and investigate and prosecute trafficking in persons crimes in communities across the country. Working with OVC, which was already providing assistance to victim service providers serving trafficking victims, BJA solicited applications from state and local law enforcement for fiscal year 2004, and then again for fiscal years 2005 and 2006. Each task force was to develop a strategy that included the following: (1) a memorandum of agreement outlining the respective roles and responsibilities of the participating agencies and ensuring coordination and involvement of the local U.S. Attorney; (2) training materials for first responding officers and investigators, including written protocols and resource manuals to enhance coordination and information/resource sharing among law enforcement and victim service providers to identify and assist human trafficking victims; (3) distinct protocols for resource referral and service provisions for U.S. 
versus alien victims of human trafficking; and (4) definition of the role of law enforcement and service provider partners in training others in the community. The task forces were to meet specific program goals and performance measures focused on identification of and assistance to victims, training of law enforcement in the identification of victims, public awareness and outreach, and identification of and collaboration with community stakeholders. Grantees were required to collect and report data on performance measures, including the number of potential and assisted trafficking victims; applications made to DHS to obtain trafficking victims' benefits; law enforcement personnel and others trained; presentations given to law enforcement and the general public; service providers, community support groups, and community education groups identified; and memorandums of agreement signed. Under its human trafficking task force initiative, BJA has funded a total of 42 law enforcement task forces on human trafficking—22 in fiscal year 2004 and 10 in each of fiscal years 2005 and 2006. Each task force grant award was for up to $450,000 for a period of 3 years. BJA reported awarding a total of $17,324,182 to the 42 task forces. The core membership of each task force includes federal, state, and local law enforcement; the U.S. Attorney's Office; and nongovernmental organizations. However, the task forces vary, as evidenced by those we contacted, with respect to which federal agencies participate—FBI, ICE, DOL, or others; the number of state or local law enforcement agencies involved—a single police department or multiple police departments and sheriff's offices; and the number of nongovernmental groups. As shown in figure 3, the 42 task forces are located in 25 states, two territories, and the District of Columbia. To support its grant programs, BJA can provide technical assistance to any justice-related state, tribal, or local agency or organization through on-site and off-site technical assistance; peer-to-peer information exchange and mentoring; publication drafting and dissemination; information sharing; aid in developing conferences, workshops, and training events; and curriculum development. According to BJA officials, technical assistance is available to the human trafficking task forces, but BJA did not receive any specific funds to support its technical assistance to the human trafficking law enforcement task forces. BJA reported using $1,433,000 of its general funds to finance the development of a train-the-trainer curriculum on human trafficking, deliver training sessions using the curriculum, and fund the national conference on human trafficking held in New Orleans in October 2006. The train-the-trainer curriculum, prepared by the Institute for Intergovernmental Research to promote law enforcement awareness of human trafficking in the United States, was completed in October 2004. The curriculum included CD-ROMs with PowerPoint slides, instructor notes, and lists of additional resources. It addressed the following topics: introduction to human trafficking; legal overview; investigative considerations, including investigative techniques for trafficking cases; the roles of victim service providers in trafficking cases; immigration issues; interagency cooperation; and engaging the community.
The curriculum was used to train trainers, including task force members, at BJA-sponsored train-the-trainer sessions held in California, Florida, and Illinois between November 2004 and April 2005, and at a Human Trafficking Conference in Houston, Texas, in February 2005. According to BJA, some task force members attended the sessions, and all 22 task forces funded at that time were represented at the Houston conference. The trainers were to use the curriculum to train law enforcement in their respective communities. BJA worked with other DOJ components, DHS, and DOL, among others, to hold the national trafficking conference in New Orleans. The plenary and breakout sessions provided information on various aspects of trafficking—investigative strategies, victim services, and interviewing witnesses, among others. According to DOJ officials, sessions were specifically held for the task forces in addition to the public conference program. During these sessions, task force participants discussed such issues as collaboration and reporting progress using BJA's performance measures. In addition, BJA reported further steps taken to respond to the technical assistance needs of the task forces. According to BJA officials, task force grantees could request technical assistance by submitting the form found on the BJA Web site. BJA also reported that if the data submitted by a task force in its semiannual report indicated performance problems, BJA would make routine calls to the task force to help resolve the issues or obtain additional information so that BJA could work with CRT/CS, OVC, or the appropriate U.S. Attorney's Office on these matters. Also, having recognized that some task forces were experiencing difficulties in collecting and reporting data on BJA's performance measures (e.g., identifying the number of trafficking victims), BJA sponsored a special session on this topic during the New Orleans conference. According to BJA officials, after the conference BJA distributed the session materials to the task forces. Furthermore, between 2006 and 2007, BJA, sometimes working in conjunction with OVC, conducted site visits to 8 of the 42 task forces. The site visits provided the opportunity for BJA to identify challenges task forces were having, such as developing or implementing training for law enforcement, that might be addressed through training or technical assistance. In addition, CRT/CS reported that, in coordination with BJA, its attorneys had provided technical assistance and training to all but 8 of the task forces. DOJ officials and task force members we interviewed identified continuing and additional task force technical assistance and training needs. BJA said that it was aware of this need from weekly phone conversations with task force members; site visits to task force jurisdictions; and conversations with U.S. Attorneys, CRT/CS, and OVC. Continuing and additional technical assistance needs identified by DOJ officials and task forces we contacted included (1) substantive training about trafficking crimes and trafficking victims and (2) technical assistance and training to help task forces develop the components of the strategies required under their grants. DOJ officials and members of task forces we contacted suggested a range of training on substantive topics related to human trafficking.
They acknowledged that there would always be a need for basic training on trafficking issues, as new task forces were formed, existing task forces reached out to new participants, and individuals participating in the task forces changed over time. In addition, to enhance the capacity of the task forces to support investigations and prosecutions of trafficking crimes, they identified the need for advanced training on such topics as seizing and forfeiting traffickers' assets, techniques to facilitate law enforcement and nongovernmental organizations working together to interview trafficking victims, and techniques for interviewing child victims of sex trafficking. To expand their ability to identify more trafficking victims, DOJ officials and some task forces we contacted pointed to the need for training of other agency personnel, such as other law enforcement, hospital workers, and social services personnel, who in the course of their jobs might come into contact with trafficking victims. They also indicated that it was sometimes necessary to tailor training and technical assistance to specific populations. For example, training could focus on potentially vulnerable populations within the community where a task force was located (e.g., farm laborers, restaurant workers, domestic service workers, alien victims, and U.S. children trafficked for commercial sex) or on trafficking populations that have typically been more difficult to find, specifically victims of labor trafficking. By requiring each task force grantee to lay out a strategy to raise public awareness of trafficking, identify more victims, and establish protocols among government agencies and service providers, BJA demonstrated its awareness of the need for the task forces to have a mechanism to coordinate activities and operations in order to achieve program goals. Task force members we interviewed provided examples of the challenges they had confronted in addressing the various elements of their task force strategy. For example, some task force members said that after 2 years of BJA funding they were still trying to iron out protocols covering roles and responsibilities; experiencing tensions among key players on the task force, including nongovernmental organizations; or relying on informal contacts based on who knew whom or on pre-established relationships among task force members (e.g., local law enforcement and FBI) rather than on positions or protocols. Members from one task force we interviewed even held differing opinions regarding its protocols. The task force leader attributed the task force's success to its informal protocols. By contrast, another task force member told us that the protocols, which had not been developed in consultation with task force members, were merely guidelines and led to victims falling through the cracks because of the lack of standard services. Examples of possible technical assistance needs suggested by the task forces we contacted included (1) ways to improve communication and information/intelligence sharing between and among entities, including e-mail lists, a secure Web site, and training bulletins; (2) standardized protocols that outline the roles and responsibilities of each member agency, which the task forces can adapt for their own jurisdictions; (3) help in strategizing; (4) regional and national meetings that bring smaller groups of task forces together; (5) interpreters/cultural assistance; and (6) safe and secure housing for victims.
BJA does not have a technical assistance plan for its human trafficking grant program. Our previous work on federal agencies' administration of grants or funding to state and local entities has shown the need for such agencies to implement a plan that focuses technical assistance efforts on areas of greatest need. BJA told us that it was developing a plan to provide additional and proactive technical assistance to the task forces. It said the plan would address developing BJA's capability to provide technical assistance as needed, identifying model task force leaders who could provide some technical assistance to other task forces, and establishing a means to ensure communication among the task forces. Officials said that they were working with OVC to develop an approach that would meet the needs of BJA and OVC human trafficking grantees. However, BJA reported that the development and review of the plan had been delayed pending final decisions on DOJ's funding for fiscal year 2007. As part of its plan, BJA might address outreach needs to ensure that task forces are aware of BJA's capacity to provide or help obtain technical assistance and training. DOJ and DHS officials emphasized the importance of the task forces to the overall U.S. effort to investigate and prosecute trafficking in persons. Working within communities, task force members are usually best situated to identify trafficking victims and crimes. Representatives of some of the task forces we contacted were not aware of BJA's capacity to respond to technical assistance needs. Accordingly, identifying steps needed to disseminate information on the types of assistance and training available is a necessary component of a technical assistance plan for these task forces. Also, BJA might incorporate into its plan a systematic assessment of its performance measures for the task forces. BJA reported that it collated and analyzed the performance data it received and would make routine calls to a particular task force to help resolve performance issues or obtain additional information to assist the task force in addressing a problem. However, systematically assessing task force reports on BJA performance measures could help BJA identify common problem areas in collecting and reporting performance data. It could also provide BJA with the means to determine which measures might need to be modified or how BJA might enhance its measures to enable it to assess the impact of task force efforts. Such an approach should help BJA facilitate the task forces' efforts to meet the program's overall goals and objectives of identifying victims and supporting investigations and prosecutions. In addition, through its technical assistance plan, BJA might identify steps to obtain information from the task forces on areas for continuous improvement. This information could be used to determine common and emerging technical assistance or training needs, approaches for meeting those needs, and how best to provide that assistance. As part of its plan, BJA could also develop other means and mechanisms for providing technical assistance to the task forces effectively and efficiently. For example, as suggested by some of the task force members we contacted, a secure Web site could provide a means for task forces to share best practices, readily obtain samples of protocols or other documents, or ask for peer-to-peer assistance from other task forces.
BJA could also use the Web site to disseminate information to the task forces. BJA's plan might also include a component for assessing the quality of its technical assistance. To ensure that the technical assistance and training provided to the task forces meet their needs, BJA might request information from the task forces on the technical assistance and training provided to them, including evaluations of that assistance. Such information could help BJA demonstrate what it has done to support the task forces and the effectiveness of those efforts in meeting task force needs. This information could also be used to ascertain necessary modifications to technical assistance to better meet task force needs. To facilitate BJA's technical assistance to the task forces, the plan might identify available technical assistance and training resources from a variety of sources. BJA could then match a particular task force's needs with technical assistance and training that might be provided by other federal agencies, such as CRT/CS, or by other task forces. While such training and technical assistance are currently provided on a case-by-case basis, within the context of a plan, BJA could more systematically marshal these resources, incorporate them into its overall approach to meeting the task forces' needs, and assess their impact on task force efforts. Information on task force training needs could also be used to help BJA, working with other federal agencies, plan the content and format of the legislatively mandated 2007 and 2008 national trafficking conferences so that they meet the range of training and technical assistance needs of experienced as well as new task forces. Federal agencies have made strides in several areas to combat trafficking crimes and to coordinate their efforts on a case-by-case basis. This approach has generally led to an increase in the number of investigations and prosecutions since the passage of the TVPA in 2000. However, as agencies look ahead to broadening their efforts while still maintaining coordination on individual cases, strategic planning will be necessary to ensure that agency resources are expended with the greatest return on investment. Defining a common governmentwide outcome for investigations and prosecutions of trafficking crimes, reconciling roles and responsibilities, and ensuring consistent communication and information sharing are vital to the investigation and prosecution of trafficking crimes. Yet no such outcome has been collaboratively defined by the agencies, perceptions of leadership differ among agencies, and policies are not in place to ensure consistent communication and information sharing. Furthermore, to sustain a coordinated victim-centered approach to combating trafficking, agencies must continue to educate and engage their own personnel, as well as the partners supporting the effort to combat this crime, such as state and local law enforcement, nongovernmental organizations, non-law enforcement agencies, and citizens. As our prior work on multi-agency collaboration has shown, a strategic framework that includes elements such as defining a common outcome, establishing mutually reinforcing or joint strategies, and agreeing on roles and responsibilities, among others, is particularly useful in addressing problems that are national in scope and involve multiple agencies with varying jurisdictions.
Such an approach allows for the necessary flexibility and the incorporation of different types of collaborative mechanisms to address the complexities of and unique challenges posed by such problems. Working in a more strategic fashion, agencies could build on their current cooperative relationships to establish a strategic focal point, ensure consistency of communication and partnerships, and sustain and expand a coordinated effort to investigate and prosecute trafficking in persons crimes. BJA's competitive grant program has funded state and local law enforcement human trafficking task forces to support U.S. efforts to identify trafficking victims and investigate and prosecute trafficking crimes. Given its mission to support state and local law enforcement, BJA has provided some training and technical assistance to the human trafficking task forces, sometimes through coordinated efforts with other agencies. However, the task forces we interviewed identified challenges they faced in implementing BJA's strategic planning requirements and carrying out their responsibilities, especially in identifying potential victims and establishing partnerships with key players. Our previous work on federal agencies' administration of grants or funding to state and local entities shows the importance of implementing a plan that focuses training and technical assistance efforts on areas of greatest need. In the absence of such a plan, BJA may find it difficult to target technical assistance to the task forces most in need and to ensure that task forces receive the technical assistance needed to meet the strategic planning requirements and performance measures outlined in the human trafficking task force grant solicitation. Implementing such a plan will help BJA focus its efforts, enabling it to better ensure that those efforts meet the needs of the task forces, achieve the objectives of the program, enhance collaboration across levels of government and between government and nongovernmental entities, and ultimately support U.S. efforts to investigate and prosecute trafficking in persons. To help ensure that the U.S. government maximizes its ability to enforce laws governing trafficking in persons, we recommend that the Attorney General and the Secretary of Homeland Security, in conjunction with the Secretaries of Labor and State and other agency heads deemed appropriate, develop and implement a strategic framework to coordinate U.S. efforts to investigate and prosecute trafficking in persons. At a minimum, this framework should a. define and articulate a common outcome; b. establish mutually reinforcing or joint strategies; c. agree on roles and responsibilities; and d. establish compatible policies, procedures, and other means to operate across agency boundaries. To better support the federally funded state and local human trafficking task forces, we recommend that the Attorney General direct the Director of the Bureau of Justice Assistance to develop and implement a plan to help focus technical assistance on areas of greatest need. We requested comments on a draft of this report from the Attorney General, the Secretary of Homeland Security, the Secretary of State, and the Secretary of Labor. DOJ and DHS provided written comments, which are summarized below and included in their entirety in appendixes V and VI, respectively. In addition, these agencies and DOS and DOL provided technical comments, which we incorporated as appropriate.
DOJ agreed with the contents of the report. Regarding our recommendation that the Attorney General and the Secretary of Homeland Security develop a strategic framework to coordinate U.S. efforts to investigate and prosecute trafficking crimes, DOJ acknowledged that continued and increased collaboration could further efforts to investigate and prosecute trafficking in persons crimes. DOJ further noted that it is already pursuing a variety of such methods, including establishing the Human Trafficking Prosecution Unit and holding collaborative meetings and training sessions with its partners. As a result, DOJ proposed that the report identify the need for continued collaboration but not mandate one particular collaborative model. It was not our intent to prescribe a particular structure or collaborative model. We recognize that because of the unique challenges posed by trafficking in persons investigations and prosecutions, the proposed framework needs to be flexible. Our previous work has shown that the four elements outlined in our recommendation—a common outcome; mutually reinforcing or joint strategies; agreed-on roles and responsibilities; and compatible policies, procedures, and other means to operate across agency boundaries—are key to an effective strategic framework. However, the specifics of each of these elements, any additional elements to be included in a strategic framework for the investigation and prosecution of trafficking crimes, and the structures for developing and implementing this framework would be determined by the agencies involved. In response to DOJ's comments, we have included language in our report that reinforces the need for flexibility in developing and implementing a strategic framework for investigations and prosecutions of trafficking in persons. Commenting on our recommendation that the Attorney General direct the Director of BJA to develop and implement a plan to help focus technical assistance for the human trafficking task forces, DOJ stated that to address the task force technical assistance needs raised in our report, BJA and OVC planned to collaboratively develop and lead a facilitated working group, including representatives from these agencies, ICE, HSTC, DOL, and other DOJ components, by October 1, 2007. The working group is to provide input into BJA's collaborative outreach and help improve training and technical assistance strategies to address issues raised in the report. DOJ enumerated the elements that its training and technical assistance plan was expected to include, such as a strategy for informing task force members, on a continuous basis, of the availability of training and technical assistance resources; a systematic assessment of performance measures; and methods to assess the quality of training and technical assistance. DHS generally agreed with the contents of the report. Specifically, DHS said that the report reflected an overall understanding of the complexities of the antitrafficking response and of ICE's efforts in leading investigations, conducting outreach, and responding to trafficking victims, and that it properly characterized ICE's compliance with National Security Presidential Directive 22. In response to the report's discussion of interagency coordination and strategizing, DHS noted that ICE regularly conducted strategic planning with its partners, particularly in the field; worked with federally funded state and local trafficking task forces; and contributed to annual trafficking reports prepared by DOJ.
Moreover, DHS maintained that interagency coordination through the SPOG ensured that trafficking policies and guidelines were carried out, and that ICE therefore believed a governmentwide framework or strategy was not needed. Our report acknowledged that the SPOG and its working groups help to facilitate coordination of governmentwide policy on human trafficking. However, the focus of our work was U.S. efforts to investigate and prosecute trafficking in persons crimes, the coordination of which rests with criminal justice personnel, primarily at DHS and DOJ. Given DOJ and DHS senior officials' acknowledgment of the need to expand the scope of U.S. efforts to investigate and prosecute trafficking in persons and our finding that existing mechanisms and individual agency plans did not address the interagency collaboration needed to support this expanded level of effort, we recommended the development of a strategic framework for coordinating U.S. efforts to investigate and prosecute trafficking cases. Commenting on this recommendation, DHS said that ICE would support such a framework if certain considerations were taken into account. For example, DHS noted that mutual goal setting might be possible so long as the goals contained objectives that specifically addressed unique agency capabilities in combating trafficking. DHS also noted that any framework would need to recognize that agencies' roles in a particular case would vary by available resources, local priorities, and the nature of the case and investigation. Agency resources for policy efforts and for implementing any recommendations arising from the framework would also be critical. GAO would expect that in developing and implementing such a framework for investigations and prosecutions of trafficking crimes, the agencies involved would determine how to address varying authorities, respective resources, and other relevant factors. We will send copies of this report to the Attorney General, the Secretary of Homeland Security, the Secretary of State, and the Secretary of Labor, as well as to interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII. To ascertain the status of U.S. efforts to investigate and prosecute trafficking crimes, this report discusses (1) key activities federal agencies have undertaken to combat trafficking in persons crimes, (2) federal efforts to coordinate on investigations and prosecutions of trafficking in persons crimes and whether these efforts might be enhanced, and (3) how the Bureau of Justice Assistance (BJA) supported federally funded state and local human trafficking task forces and whether these efforts might be improved. This review is part of a larger body of work that you requested on U.S. efforts to combat trafficking in persons, here and abroad.
To determine key activities federal agencies have undertaken to combat trafficking in persons crimes, we reviewed pertinent documents and interviewed officials from the Department of Justice (DOJ), including the Federal Bureau of Investigation (FBI), the Civil Rights Division/Criminal Section (CRT/CS), the Criminal Division/Child Exploitation and Obscenity Section (CEOS), and the Executive Office for U.S. Attorneys; the Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement (ICE) and U.S. Citizenship and Immigration Services; the Department of Labor's (DOL) Wage and Hour Division; the Department of State's (DOS) Bureau of Diplomatic Security and Office to Monitor and Combat Trafficking in Persons; and the Human Smuggling and Trafficking Center (HSTC). We obtained and analyzed written responses to questions we provided, departmentwide strategic planning documents, agency plans and strategies, and memorandums and guidance on efforts to combat human trafficking. We obtained examples of training materials used to train investigative agents and to conduct outreach, and we attended the national human trafficking conference in New Orleans in October 2006. From the FBI, ICE, CRT/CS, and CEOS, we obtained and analyzed relevant data on the cases investigated and prosecuted, including numbers of cases, defendants charged, and convictions, as well as, where possible, estimates of the resources used to do so. We discussed the sources of these data with federal agency officials and determined that the data were sufficiently reliable to show trends in agencies' activities undertaken to investigate and prosecute trafficking crimes. For the investigative agencies, we did not seek data from before the passage of the Trafficking Victims Protection Act in 2000 because the establishment of the Department of Homeland Security transferred some human trafficking investigative duties from DOJ's legacy Immigration and Naturalization Service to DHS's Immigration and Customs Enforcement. To determine what efforts federal agencies have undertaken to coordinate investigations and prosecutions of trafficking in persons crimes and whether these efforts might be enhanced, we reviewed pertinent documents, such as agency reports, strategies, and memorandums to field offices. We interviewed officials from DOJ headquarters, including the FBI, CRT/CS, CEOS, and the Executive Office for U.S. Attorneys; DHS's ICE and U.S. Citizenship and Immigration Services; DOL's Wage and Hour Division; DOS's Bureau of Diplomatic Security and Office to Monitor and Combat Trafficking in Persons; and the HSTC. We gathered and analyzed information from selected field personnel representing the FBI, ICE, and local U.S. Attorney's Offices. To ascertain how efforts to combat trafficking might be enhanced and to identify applicable criteria for our analysis, we consulted our prior work on agency collaboration, international crime, terrorism, organized crime, and the illegal importation of prescription and illegal drugs. We also interviewed agency officials to identify challenges they face in investigating and prosecuting trafficking crimes and to identify what elements might enhance their efforts. To assess how BJA has supported federally funded state and local human trafficking task forces and whether these efforts might be enhanced, we obtained and analyzed relevant documents from BJA, including task force grant proposals, grant reports written by program managers, performance measurement data, and information from its Web site.
We interviewed BJA, Office for Victims of Crime (OVC), and CRT/CS officials about the origin of the task forces, when the task forces were funded, and the types of assistance provided to them. We also interviewed field personnel from the FBI, ICE, and DOL to determine what federal support had been provided to the task forces we selected for either site visits or telephone interviews. We developed case studies of seven task forces to provide us with in-depth knowledge about how the task forces are functioning, how they are working together, and what support and technical assistance they have been provided. Gathering information for the case studies included site visits to task forces in Collier County, Florida; Los Angeles, California; and Washington, D.C.; and telephone interviews with key participants from task forces in Houston, Texas; Hawaii; and Nassau and Suffolk Counties in New York. In selecting the seven task forces we contacted, we limited our selection to the longest-running task forces (i.e., the 22 founded in fiscal year 2004)—those that had had time to become established. From this group, we tried to include task forces located in various U.S. geographic regions and with a primary focus on sex trafficking, labor trafficking, or both. To ensure that we included task forces of varying performance levels, we asked officials from BJA and CRT/CS for recommendations on task forces that were performing well. In addition to these recommendations, we also used BJA performance measures, such as the number of victims found and the number of continued presence visas provided, to make our selections. As part of our site visits or through telephone interviews, we interviewed the task force leader, the Assistant U.S. Attorney (who may or may not be the task force leader), the primary local law enforcement contact, the dominant nongovernmental organization participant, and the FBI and ICE representatives on the task force. BJA was not able to provide us with a list of task force participants, but Polaris Project, a nongovernmental organization affiliated with the Washington, D.C., human trafficking task force, had networked with all the BJA-funded task forces at the federally sponsored human trafficking conference in New Orleans in October 2006 and provided us with a list identifying the "key players." From this list, we developed our list of interviewees based on our inclusion criteria. The FBI and ICE participant names were provided to us through the liaisons in each agency. Overall, through site visits and telephone interviews, we interviewed a total of 50 task force members, including 13 Assistant U.S. Attorneys, 7 local law enforcement officials, 6 FBI participants, 5 ICE agents, 1 DOL participant, 13 nongovernmental organization participants, and 3 task force leaders from an Attorney General's office. In addition, we interviewed a U.S. Attorney who had recently set up a human trafficking task force to obtain his perspective on the challenges faced in putting a task force into operation. This approach does not allow us to generalize our findings to all task forces. In addition, we reviewed relevant GAO reports on federal agencies' administration of grants or funding to state and local entities. We conducted our work from June 2006 through June 2007 in accordance with generally accepted government auditing standards. During the 1990s, the United States began to take steps to address trafficking in persons at home and abroad.
DOJ prosecuted trafficking cases under several federal criminal statutes, including the involuntary servitude statutes, the Mann Act, and labor laws on workplace conditions and compensation. However, various U.S. policymakers determined that existing U.S. statutes did not take into account some characteristics of contemporary trafficking in persons and, therefore, did not adequately protect trafficking victims, deter trafficking, or bring traffickers to justice. These statutes did not always treat trafficked persons as victims. Involuntary servitude was restricted to cases of physical abuse—force, threats of force, or threats of legal coercion—as opposed to the psychological coercion often used by today's traffickers. While the modern concept of trafficking in persons focused on compelled service, under the Mann Act trafficking was perceived as interstate transportation for prostitution. Moreover, these statutes scattered enforcement authority across the government and resulted in different case outcomes, depending on the charges brought or which agency learned of the allegations of abuse. The TVPA addressed limitations in existing law that made it difficult to prosecute traffickers, added new crimes, and enhanced penalties. Federal agencies continue to rely on a number of statutes to prosecute traffickers and halt their operations. Table 1 lays out the primary statutes that support the investigation and prosecution of trafficking in persons crimes. Traffickers may also be charged with other offenses. Examples of these statutes are shown in table 2. This appendix provides additional data on federal agencies' efforts to investigate and prosecute trafficking in persons crimes. It also presents available information on federal agency resources used to support these efforts. The FBI, ICE, and CRT/CS reported data on the investigations, prosecutions, indictments, and arrests related to trafficking crimes since the passage of the TVPA. These data are a general indicator of the level of agency effort on trafficking in persons, although they are limited by a number of factors. Because trafficking in persons is a hidden crime and victims are hesitant to come forward, it is difficult to estimate the extent of trafficking in persons crimes. Moreover, because prosecutors may charge traffickers with other crimes (e.g., kidnapping, the Mann Act, immigration violations, or money laundering) for strategic or tactical reasons, data on the number of trafficking in persons investigations and prosecutions do not provide a complete picture of the number of traffickers who have been thwarted. The data systems agencies use are primarily case management systems, which may not be able to extract trafficking data if trafficking was not listed as a charge. Additionally, if an investigation of smuggling later reveals a trafficking violation, some data systems will continue to store investigative data under the smuggling classification. The complexity of the investigations and the limitations of data systems make providing data on human trafficking a labor-intensive effort for agencies. Therefore, these data are not comparable across agencies, and because of differences in agency data systems, it is not possible to associate arrest and indictment data with a particular case.
Moreover, agency officials noted that investigations do not always lead to prosecutions because situations that appear to be trafficking may prove to be alien smuggling or prostitution accompanied by abuse and, therefore, do not meet the criteria to be prosecuted as trafficking cases. In addition, ICE officials said that in situations involving children, the agency's priority was to rescue the victim whether or not the investigation led to the prosecution of the trafficker. Since fiscal year 2001, CRT/CS has reported an overall increase in the number of prosecutions for cases involving sex and labor trafficking, as defined by CRT/CS based on the facts of the case. Table 3 shows the increase in the number of prosecutions after the implementation of the TVPA in 2001, compared to those in the years leading up to the law's passage. CRT/CS officials noted that cases varied in the number of defendants, the number of victims, and their complexity (app. IV provides summaries of several cases to illustrate this variation). The data on trafficking in persons investigations since fiscal year 2001 provided by the FBI and ICE, shown in tables 4 and 5, also show a general increase. As with the prosecutions of human trafficking cases, variation in numbers from year to year may be due to the complexity of cases. For example, factors such as a case with many victims, multiple defendants, a long period of victimization, and multiple jurisdictions from which to collect evidence may affect how many cases can be investigated from year to year. Additionally, the FBI's Crimes Against Children Unit reported data on cases of trafficking of U.S. children for commercial sex from its Innocence Lost National Initiative, as shown in table 6. DOL's Wage and Hour Division reported participating in four cases involving criminal or potentially criminal allegations of trafficking in persons that were concluded in fiscal year 2007. The division reported seven cases currently under investigation; seven cases at some stage of litigation or case development by the FBI, an Assistant U.S. Attorney, or others; and one additional case in which it will be providing technical assistance following direct law enforcement action. According to the division, its involvement may have resulted from a referral from another agency (e.g., the FBI, an Assistant U.S. Attorney, or local law enforcement), a referral from an advocacy organization, or a situation in which the division was the initial investigating agency. In addition to participating in cases involving violations of the trafficking statutes, the division has also assisted other law enforcement agencies in developing investigations or prosecutions of criminal violations of other statutes and may pursue criminal penalties under its own statutes. For example, according to CRT/CS, DOL has been involved in calculating back wages and overtime pay for victims, as in United States v. Calimlim. According to DOL, it provided technical advisory assistance to the prosecuting U.S. Attorney, furnishing sample back wage computations that would have been due had the victim fallen under the provisions of the Fair Labor Standards Act (FLSA) and had the case events occurred within the FLSA statute of limitations. In the subsequent prosecution, CRT/CS successfully secured a $940,000 restitution order.
To implement their respective plans and carry out activities related to the investigation and prosecution of trafficking in persons, agencies have generally drawn from existing resources. Therefore, according to DHS and DOJ officials, resource information for trafficking activities may not be distinguishable from that for other activities and is generally an estimate. Information is also not consistent across agencies. Although the 2005 TVPA amendments authorized appropriations of $18,000,000 in fiscal years 2006 and 2007 to ICE and $15,000,000 in fiscal year 2006 to the FBI for trafficking investigations, these amendments were enacted after fiscal year 2006 had already begun, and the amounts were not appropriated. ICE reported 53 full-time equivalents for trafficking activities in fiscal year 2005, 68 in fiscal year 2006, and 32 through the first half of fiscal year 2007. In midyear 2003, ICE received $3.7 million in supplemental funding, which mostly funded law enforcement operations to enforce the TVPA and domestic and overseas training activities. FBI officials told us they had not received a separate appropriation specifically for trafficking in persons. The FBI Civil Rights Unit reported that, as of April 2007, 141 Special Agents were allocated to its Civil Rights Program across 56 field offices, and one Unit Chief, six Supervisory Special Agents, and eight support staff were assigned to headquarters. In fiscal year 2006, approximately 24 percent of these resources were directed toward human trafficking matters. In fiscal year 2006, the FBI's Crimes Against Children Unit received $500,000 from the Assets Forfeiture Fund to support task forces and working groups investigating trafficking of U.S. children for commercial sex. The funds were used for overtime pay for state and local officers, equipment, and training. Additionally, to support the Innocence Lost National Initiative, the FBI received 16 positions (10 agents and 6 analysts) in fiscal year 2005 and 10 agent positions in fiscal year 2006. The FBI said it requested 30 investigative, clerical, and analytical personnel to support the Crimes Against Children program initiatives for fiscal year 2008, including combating trafficking of U.S. children for commercial sex. In addition, the conference agreement for the fiscal year 2007 DHS appropriation designated $1 million to ICE for its contribution to the Human Smuggling and Trafficking Center (HSTC). HSTC officials said that although these funds were not designated specifically for trafficking in persons, they would assist HSTC's trafficking efforts. Furthermore, because ICE was the only agency with funds specifically designated for HSTC, it would henceforth take on responsibility for up-front administrative expenses at HSTC, for which other agencies, including DOS and DOJ, would then reimburse ICE. CRT/CS also reported that it had not received funds specifically designated for human trafficking prosecutions, but it provided us with estimates of the numbers of positions, attorneys, and full-time equivalents devoted to trafficking in persons. CRT/CS further noted that the actual number of positions is very difficult to track because, as is true for all enforcement areas within the Criminal Section, most attorneys do not work exclusively on trafficking in persons but carry other criminal enforcement cases as well. CRT/CS training, outreach, and technical assistance on trafficking in persons have also been provided from its operating funds.
However, CRT/CS developed and provided us with estimates of the various types of resources it used to address trafficking in persons, as presented in table 7. DOJ's fiscal year 2008 budget submission included a request for a CRT/CS program increase of $1,713,000, 13 agent/attorney positions, and 7 full-time equivalents for its trafficking efforts. According to CEOS, prosecuting sex trafficking and sex tourism cases can be enormously resource intensive, especially if foreign victims or investigators will be needed to testify at trial. Because trafficking crimes were not a line item in the appropriation legislation, CEOS could not provide actual data on the resources used to prosecute these crimes. However, CEOS estimated that it has devoted approximately 15 to 25 percent of its attorney time to trafficking crimes since 2003. FBI and CEOS officials noted the lack of facilities for trafficking victims, who need specialized treatment. The TVPA authorized the Attorney General to make grants to develop, expand, or strengthen victim service programs for victims of trafficking. DOJ received approximately $10 million per year in fiscal years 2002 through 2006 for victim services programs for victims of trafficking, as authorized by section 107(b)(2) of the TVPA. In fiscal year 2002, OVC awarded these funds to nonprofit, nongovernmental victim services providers to develop, expand, or strengthen services for victims of trafficking. According to DOJ officials, in fiscal year 2003, DOJ decided to use a portion of these funds for BJA's human trafficking task force grants, with the goal of expanding services for victims by identifying more victims and connecting them with needed services. In subsequent fiscal years, both OVC and BJA awarded grants with these funds. In addition, the FBI, ICE, the Executive Office for U.S. Attorneys, and CRT/CS have emergency funds that may be used to provide immediate services to victims when services cannot be provided through other programs that support trafficking victims. According to OVC, the agencies coordinate these efforts through OVC to ensure that any use of emergency funds is appropriate, maximizes the use of trafficking appropriation dollars when they are available, and occurs when no other funds are available. The following case studies illustrate several of the characteristics of human trafficking described in this report, including (1) the diverse purposes for which people are trafficked and the circumstances in which they work, both legally and illegally; (2) the variation in the number of victims; (3) case complexity; and (4) coordination among law enforcement and nongovernmental organizations in caring for the victims and prosecuting the perpetrators. United States v. Kil Soo Lee—the largest trafficking prosecution before a federal court—resulted from an investigation involving five languages, several countries and states, and numerous federal agencies and nongovernmental organizations. Between September 1998 and December 2000, Lee recruited 250 skilled garment workers from China and Vietnam—mostly young women who had paid recruitment fees of $5,000 to $8,000—locating his garment factory, named Daewoosa Samoa, in American Samoa to use the "Made in America" label and avoid drawing attention to his operation. The workers believed the fees to be legitimate payment in exchange for new jobs possibly leading to a better life. Instead, they lived, ate, and slept in barracks on the factory compound, surrounded by fences that remained locked and guarded during working hours.
Lee and his associates seized passports—threatening the workers with deportation, bankruptcy, severe financial hardship to family members back home, and false arrest—and withheld food and pay. In March 1999, workers asked to be paid for several months' labor. Kil Soo Lee refused to pay them, and when the workers protested, he locked them inside the Daewoosa compound and refused to provide them with food. Several workers climbed over the fence at night and contacted local residents to complain and seek food. Upon finding out that workers had left the compound, Kil Soo Lee notified the American Samoan police that the workers were causing a disturbance and had the police arrest three of the female workers who tried to leave the company grounds. The workers were unable to speak English or Samoan and thus were unable to communicate the true version of events to the police. Attempting to communicate with the outside world, another worker threw a handwritten note from the window of the company car after visiting jailed coworkers. This note was found and passed on to the U.S. Department of Labor, which investigated allegations that Kil Soo Lee had withheld the workers' pay. As a result of the investigation, DOL required Lee to make restitution to the affected employees. Following additional complaints and allegations that Lee was requiring workers to kick back the back wage payments, DOL investigated again. The garment manufacturers for which Lee was producing goods provided the back wage restitution for the underpaid employees in this second investigation. In November 2000, workers protested again by slowing production. At Lee's direction, guards entered the factory and conducted a mass beating of the Vietnamese workers, inflicting severe injuries on several. Local police investigated the uprising but dismissed the case, believing the guards' accounts that the Vietnamese workers had attacked the Samoans. The Occupational Safety and Health Administration of DOL then arrived to conduct inspections of the Daewoosa facility from November 2000 to February 2001, citing workplace safety violations noted in earlier investigations concluded in June 1999. In March 2001, FBI agents and CRT/CS prosecutors traveled to American Samoa to investigate. They conducted interviews, surveyed the factory, and seized records, computers, and other evidence. Kil Soo Lee was arrested on March 23, 2001. He and four other defendants were indicted in August of that year on 22 charges of subjecting workers to involuntary servitude. The trial began in October 2002 and lasted 4 months. During the prosecution, the nature of the crime and the cultural and linguistic backgrounds of the workers posed challenges for the Civil Rights Division. Attorneys had to prove that the workers—now witnesses in the trial—were victims rather than simply violators of labor and immigration laws. Lee had already had some of them deported, while others had scattered to 20 states around the country after being given temporary immigration status to testify. During the pretrial preparations and the trial, more than 200 victims had to be housed and fed, while the sick and injured required medical care. Because the victims had limited or no English proficiency (languages spoken included Chinese, Vietnamese, Korean, and Samoan), interpreters had to be provided. Agents and attorneys also had to gain the victims' trust, overcoming their fears of law enforcement and authority, which Kil Soo Lee and the other defendants had earlier exploited.
Finally, the victims needed to be assured that no harm would come from the proceedings, either to them or to their families back home, and that they had done nothing to draw shame or fear of exposure upon themselves. In August 2001, two of the American Samoan guards entered guilty pleas to participating in the conspiracy to violate the civil rights of the garment workers and were later sentenced to 70 and 51 months in prison. Two codefendants were acquitted on all charges. In February 2003, following the 4-month trial, Kil Soo Lee was convicted of conspiracy to violate the civil rights of the workers, 11 counts of involuntary servitude, 1 count of extortion, and 1 count of money laundering. Lee, who was in his mid-50s, was sentenced in June 2005 to 40 years in prison—at that time the highest sentence handed down in a trafficking/slavery case that did not result in death—and ordered to pay restitution of $1,826,087.94. On April 16, 2002, the High Court of American Samoa, in a separate consolidated civil case, also ordered Daewoosa Samoa, Ltd., to pay $3.5 million in back wages to the workers. The Carreto case came to the U.S. government's attention through a tip from Mexican authorities that a victim was believed to have been held and forced into prostitution. An investigation led agents to locations where a number of young women and their traffickers were arrested. The defendants were members or associates of an extended family whose principal business was reaping the profits from compelling young Mexican women into prostitution through force, fraud, and coercion. The defendants, who often lured the women into romantic relationships, used deception, psychological manipulation, and false promises, along with physical beatings and rapes, to overcome the will of the victims, compel them into prostitution, and force them to turn over virtually all the proceeds to the defendants. During the investigation of this case, ICE and DOJ coordinated with international nongovernmental organizations, the Mexican government, and Mexican attorneys to remove the victims' children from the custody of the Carreto family, thereby removing one of the last means of control the Carreto family had exerted over the victims. The investigation revealed extensive sex trafficking activity between Mexico and the United States, prompting initiatives to coordinate multijurisdictional, multi-agency investigations. On November 16, 2004, a federal grand jury returned a 27-count superseding indictment charging Josue Flores Carreto, Gerardo Flores Carreto, Daniel Perez Alonso, Eliu Carreto Fernandez, Consuelo Carreto Valencia, and Maria de los Angeles Velasquez Reyes with victimizing nine young Mexican women. The indictment charged the six defendants with counts of conspiracy to commit sex trafficking, sex trafficking, attempted sex trafficking, forced labor, violation of the Mann Act, conspiracy to import aliens for immoral purposes, and alien smuggling. Two additional defendants, Edith Mosquera de Flores and Eloy Carreto Reyes, were charged separately by complaint. On April 5, 2005, the morning the trial in this case was to begin, Gerardo Flores Carreto, Josue Flores Carreto, and Daniel Perez Alonso pled guilty to all charges in the 27-count indictment. On April 27, 2006, Gerardo Flores Carreto and Josue Flores Carreto were each sentenced to 50 years in prison. Daniel Perez Alonso was sentenced to 25 years in prison. Edith Mosquera de Flores had previously been sentenced to 16 months in prison. On June 1, 2006, Eliu Carreto Fernandez was sentenced to 80 months in prison.
Eloy Carreto Reyes is still awaiting sentencing. On January 19, 2007, the Mexican government extradited defendant Consuelo Carreto Valencia to the United States, along with 14 other criminal defendants, in an extradition that Attorney General Gonzales lauded as unprecedented in its scope and importance. Consuelo Carreto Valencia, the mother of two of the lead defendants, is charged with conspiring with the other defendants to compel the victims into forced prostitution. An additional defendant, Maria de los Angeles Velasquez Reyes, remains in Mexico, where she was previously arrested on related charges. CRT/CS is seeking her extradition. Robert N. Goldenkoff, (202) 512-2757. In addition to the individual named above, Glenn G. Davis, Barbara A. Stolz, Susanna R. Kuebler, Richard Ascarate, Kelly Bradley, Erin Claussen, Frances Cook, Stuart Kaufman, and Elizabeth Curda made significant contributions to the report. Human Trafficking: Better Data, Strategy, and Reporting Needed to Enhance U.S. Anti-Trafficking Efforts Abroad, GAO-06-825 (Washington, D.C.: July 18, 2006). Human Trafficking: Monitoring and Evaluation of International Projects Are Limited, but Experts Suggest Improvements, GAO-07-1034 (Washington, D.C.: July 26, 2007). Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies, GAO-06-15 (Washington, D.C.: Oct. 21, 2005). International Crime Control: Sustained Executive-Level Coordination of Federal Response Needed, GAO-01-629 (Washington, D.C.: Aug. 13, 2001). Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism, GAO-04-408T (Washington, D.C.: Feb. 3, 2004). Organized Crime: Issues Concerning Strike Forces, GAO/GGD-89-67 (Washington, D.C.: Apr. 11, 1989). Prescription Drugs: Strategic Framework Would Promote Accountability and Enhance Efforts to Enforce the Prohibitions on Personal Importation, GAO-05-372 (Washington, D.C.: Sept. 8, 2005). Community Services Block Grant Program: HHS Should Improve Oversight by Focusing Monitoring and Assistance Efforts on Areas of High Risk, GAO-06-627 (Washington, D.C.: June 29, 2006).
Human trafficking is a transnational crime whose victims include men, women, and children and may involve violations of labor, immigration, antislavery, and other criminal laws. To ensure punishment of traffickers and protection of victims, Congress passed the Trafficking Victims Protection Act of 2000 (TVPA), which is subject to reauthorization in 2007. The Departments of Justice (DOJ) and Homeland Security (DHS) lead federal investigations and prosecutions of trafficking crimes. As requested, this report discusses (1) key activities federal agencies have undertaken to combat human trafficking crimes, (2) federal efforts to coordinate investigations and prosecutions of these crimes, and (3) how the Bureau of Justice Assistance (BJA) supported federally funded state and local human trafficking task forces. GAO reviewed strategies, reports, and other agency documents; analyzed trafficking data; and interviewed agency officials and task force members.

Since the enactment of the TVPA in 2000, federal agencies have (1) investigated allegations of trafficking crimes, leading to 139 prosecutions; (2) provided training and implemented state and local initiatives to support investigations and prosecutions; and (3) established organizational structures and agency-level goals, plans, or strategies. For example, agencies have trained new and current personnel on investigating and prosecuting trafficking-in-persons crimes through their agency training academies and centers, provided Web-based training, and developed and disseminated guidance on pursuing cases. Agencies have also sponsored outreach and training to state and local law enforcement, nongovernmental organizations, and the general public through a toll-free complaint line, newsletters, national conferences, and model legislation. Finally, some agencies have established special units or plans for carrying out their antitrafficking duties.

Federal agencies have coordinated across agencies on investigations and prosecutions of trafficking crimes on a case-by-case basis, determined by individual case needs and by established relationships among law enforcement officials across agencies. For example, several federal agencies worked together to resolve a landmark trafficking case involving over 250 victims. However, DOJ and DHS officials have identified the need to advance and expand U.S. efforts to combat trafficking through more collaborative and proactive strategies to identify trafficking victims. Prior GAO work on interagency collaboration has shown that a strategic framework that includes, among other things, a common outcome, mutually reinforcing strategies, and compatible policies and procedures to operate across agency boundaries can help enhance and sustain collaboration among federal agencies dealing with issues that are national in scope and cross agency jurisdictions.

To support U.S. efforts to investigate trafficking in persons, BJA has awarded grants of up to $450,000 to establish 42 state and local human trafficking law enforcement task forces. BJA has funded the development of a train-the-trainer curriculum and a national conference on human trafficking and taken further steps to respond to task force technical assistance needs. Nevertheless, task force members from the seven task forces we contacted and DOJ officials identified continued and additional assistance needs. BJA does not have a technical assistance plan for its human trafficking task force grant program.
Prior GAO work has shown the need for agencies that administer grants or funding to state and local entities to implement a plan to focus technical assistance on areas of greatest need. BJA officials said they were preparing a plan to provide additional and proactive technical assistance to the task forces, but as of June 2007 had not received the necessary approvals.
Before enactment of the Employee Retirement Income Security Act of 1974 (ERISA), few rules governed the funding of defined benefit pension plans, and participants had no guarantees that they would receive the benefits promised. Among other things, ERISA established rules for funding defined benefit pension plans and created PBGC to protect the benefits of plan participants in the event that plan sponsors could not meet the benefit obligations under their plans. More than 34 million workers and retirees in about 30,000 single-employer defined benefit plans rely on PBGC to protect their pension benefits. PBGC finances the liabilities of underfunded terminated plans partially through premiums paid by plan sponsors. Currently, plan sponsors pay a flat-rate premium of $19 per participant per year; in addition, some plan sponsors pay a variable-rate premium, which was added in 1987 to provide an incentive for sponsors to better fund their plans. For each $1,000 of unfunded vested benefits, plan sponsors pay a premium of $9. In fiscal year 2004, PBGC received nearly $1.5 billion in premiums, including more than $800 million in variable-rate premiums, but paid out more than $3 billion in benefits to plan participants or their beneficiaries.

The single-employer program has had an accumulated deficit—that is, program assets have been less than the present value of benefits and other obligations—for much of its existence. (See fig. 1.) In fiscal year 1996, the program had its first accumulated surplus, and by fiscal year 2000, the accumulated surplus had increased to about $10 billion, in 2002 dollars. However, the program's finances reversed direction in 2001, and at the end of fiscal year 2002, its accumulated deficit was about $3.6 billion. In July 2003, we designated the single-employer insurance program as "high risk," given its deteriorating financial condition and the long-term vulnerabilities of the program. In fiscal year 2004, PBGC's single-employer pension insurance program incurred a net loss of $12.1 billion, and its accumulated deficit increased to $23.3 billion, up from $11.2 billion a year earlier. Furthermore, PBGC estimated that total underfunding in single-employer plans exceeded $450 billion as of the end of fiscal year 2004.

Existing laws governing pension funding and premiums have not protected PBGC from accumulating a significant long-term deficit and have not limited PBGC's exposure to moral hazard from the companies whose pension plans it insures. The pension funding rules, under ERISA and the Internal Revenue Code (IRC), were not designed to ensure that plans have the means to meet their benefit obligations in the event that plan sponsors run into financial distress. Meanwhile, in the aggregate, premiums paid by plan sponsors under the pension insurance system have not adequately reflected the financial risk to which PBGC is exposed. Accordingly, defined benefit plan sponsors, acting rationally and within the rules, have been able to turn significantly underfunded plans over to PBGC, thus creating PBGC's current deficit. Earlier this year, the Administration released a proposal that aims to address many of the structural problems that PBGC faces by calling for changes in the funding rules and premium structure, among other things. Meanwhile, employers who responsibly manage their defined benefit pension plans are concerned about their exposure to additional funding and premium uncertainties.
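To make the premium arithmetic concrete, the following is a minimal sketch of the two-part premium structure described above; the statutory rates are those cited in this testimony, and the plan figures are hypothetical.

```python
def annual_pbgc_premium(participants: int, unfunded_vested_benefits: float) -> float:
    """Single-employer premium under the rates cited above: a $19
    flat-rate premium per participant, plus a $9 variable-rate
    premium per $1,000 of unfunded vested benefits."""
    flat = 19 * participants
    variable = 9 * (unfunded_vested_benefits / 1_000)
    return flat + variable

# Hypothetical plan: 10,000 participants and $50 million in unfunded
# vested benefits yields $190,000 flat + $450,000 variable = $640,000.
print(annual_pbgc_premium(10_000, 50_000_000))
```

Because only the second term responds to underfunding, and neither term responds to the sponsor's financial strength or the riskiness of plan assets, two plans paying identical premiums can pose very different risks to the insurance program.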
As the PBGC takeovers of severely underfunded plans suggest, the IRC minimum funding rules have not been designed to ensure that plan sponsors contribute enough to their plans to pay all the retirement benefits promised to date. The amount of contributions required under the IRC minimum funding rules is generally the amount needed to fund that year's "normal cost," that is, benefits earned during that year plus that year's portion of other liabilities that are amortized over a period of years. Also, the rules require the sponsor to make an additional contribution if the plan is underfunded to a specified extent as defined in the law. However, sponsors of underfunded plans may sometimes avoid or reduce minimum funding contributions if they have earned funding credits as a result of favorable experience, such as contributing more than the minimum in the past. For example, contributions beyond the minimum may be recognized as a funding credit. These credits are not measured at their market value and accrue interest each year at the plan's long-term expected rate of return on assets. If the market value of the assets falls below the credited amount and the plan is terminated, the assets in the plan will not suffice to pay the plan's promised benefits. Thus, some very large and significantly underfunded plans have been able to remain in compliance with the current funding rules while making little or no contribution in the years prior to termination (e.g., Bethlehem Steel).

Further, under current funding rules, plan sponsors can increase plan benefits for underfunded plans, even in some cases where the plans are less than 60 percent funded. This may create an incentive for financially troubled sponsors to increase pension benefits, possibly in lieu of wage increases, even if their plans have insufficient funding to pay current benefit levels. Thus, plan sponsors and employees that agree to benefit increases from underfunded plans as a sponsor is approaching bankruptcy can essentially transfer this additional liability to PBGC, potentially exacerbating the agency's financial condition.

In addition, many defined benefit plans offer employees "shutdown benefits," which provide employees additional benefits, such as significant early retirement benefit subsidies, in the event of a plant shutdown or permanent layoff. Plant shutdowns are inherently unpredictable, so it is difficult to recognize the costs of shutdown benefits in advance, and current law does not require sponsors to fund the cost of benefits arising from such unpredictable contingent events before those events occur. Under current law, PBGC is responsible for at least a portion of any benefit increases, including shutdown benefits, even if the benefit was added to the plan within 5 years of plan termination. However, many of these provisions were included in plans years ago. As a result, shutdown benefits pose a problem for PBGC not only because they can dramatically and suddenly increase plan liabilities without adequate funding safeguards, but also because the related additional benefit payments drain plan assets.

Finally, because many plans allow lump sum distributions, plan participants in an underfunded plan may have incentives to request such distributions. For example, where participants believe that the PBGC guarantee may not cover their full benefits, many eligible participants may elect to retire and take all or part of their benefits in a lump sum rather than as lifetime annuity payments, in order to maximize the value of their accrued benefits.
In some cases, this may create a "run on the bank," exacerbating the possibility of the plan's insolvency as assets are liquidated more quickly than expected, potentially leaving fewer assets to pay benefits for other participants.

PBGC's current premium structure does not properly reflect risks to the insurance program. The current premium structure relies heavily on flat-rate premiums that, since they are unrelated to risk, result in a large shifting of costs from financially troubled companies with underfunded plans to healthy companies with well-funded plans. PBGC also charges plan sponsors a variable-rate premium based on the plan's level of underfunding. However, these premiums do not consider other relevant risk factors, such as the economic strength of the sponsor, plan asset investment strategies, the plan's benefit structure, or the plan's demographic profile. PBGC currently operates more like a social insurance program, since it must cover all eligible plans regardless of their financial condition or the risks they pose to the solvency of the insurance program. In addition to facing firm-specific risk that an individual underfunded plan may terminate, PBGC faces market risk that a poor economy may lead to widespread underfunded terminations during the same period, potentially causing very large losses for PBGC. Similarly, PBGC may face risk from insuring plans concentrated in vulnerable industries affected by certain macroeconomic forces, such as deregulation and globalization, that have played a role in multiple bankruptcies over a short time period, as happened in the airline and steel industries. One study estimates that the overall premiums collected by PBGC amount to about 50 percent of what a private insurer would charge because its premiums do not adequately account for these market risks. Others note that it would be hard to determine the market-rate premium for insuring private pension plans because private insurers would probably refuse to insure poorly funded plans sponsored by weak companies.

Despite a series of reforms over the years, current pension funding and insurance laws create incentives for financially troubled firms to use PBGC in ways that Congress did not intend when it formed the agency in 1974. PBGC was established to pay the pension benefits of participants in the event that an employer could not. As pension policy has developed, however, firms with underfunded pension plans may come to view PBGC coverage as a fallback, or "put option," for financial assistance. The very presence of PBGC insurance may create certain perverse incentives that represent moral hazard—struggling plan sponsors may place other financial priorities above "funding up" their pension plans because they know PBGC will pay guaranteed benefits. Firms may even have an incentive to seek Chapter 11 bankruptcy in order to escape their pension obligations. As a result, once a plan sponsor with an underfunded pension plan experiences financial difficulty, existing incentives may exacerbate the funding shortfall for PBGC while also affecting the competitive balance within an industry. This should not be the role of the pension insurance system. This moral hazard has the potential to escalate, with the initial bankruptcy of firms with underfunded plans creating a vicious cycle of bankruptcies and terminations.
Firms with onerous pension obligations and strained finances could see PBGC as a means of shedding these liabilities, thereby gaining a competitive advantage over firms that deliver on their pension commitments. This would also potentially subject PBGC to a series of terminations of underfunded plans in the same industry, as we have already seen with the steel and airline industries in the past 20 years.

In addition, current pension funding and pension accounting rules may encourage plans to invest in riskier assets to benefit from higher expected long-term rates of return. For funding purposes, a higher expected rate of return on pension assets means that a plan needs to hold fewer assets to meet its future benefit obligations. And under current accounting rules, the greater the expected rate of return on plan assets, the greater the plan sponsor's operating earnings and net income. However, with higher expected rates of return comes greater risk of investment loss, which is not reflected in the pension insurance program's premium structure. Investments in riskier assets with higher expected rates of return may allow financially weak plan sponsors and their plan participants to benefit from the upside of large positive returns on pension plan assets without being truly exposed to the risk of losses. The benefits of plan participants are guaranteed by PBGC, and weak plan sponsors that enter bankruptcy can often have their plans taken over by PBGC.

Earlier this year, the Administration released a proposal for strengthening the funding of single-employer pension plans. The Administration's proposal focuses on three areas: reforming the funding rules to ensure that pension promises are kept by improving incentives for funding plans adequately; improving disclosure to workers, investors, and regulators about pension plan status; and adjusting premiums to better reflect a plan's risk and ensure the pension insurance system's financial solvency. Among other things, the proposal would require all underfunded plans to pay risk-based premiums, and it would empower PBGC's board to adjust the risk-based premium rate periodically so that premium revenue is sufficient to cover expected losses and to improve PBGC's financial condition. Employer groups have expressed concern about their exposure to additional funding and premium uncertainties and have claimed that the Administration's proposal may strengthen PBGC's financial condition at the expense of defined benefit plan sponsors. For example, one organization has stated that, in its view, the current proposal would result in fewer defined benefit plans, lower benefits, and more pressure on troubled companies.

PBGC has proactively attempted to forecast and mitigate the risks that it faces. The Pension Insurance Modeling System (PIMS), created by PBGC to forecast claim risk, has projected a high probability of future deficits for the agency. However, the accuracy of the projections produced by the model is unclear. Also, through its Early Warning Program, PBGC negotiates with companies that have underfunded pension plans and that engage in business transactions that could adversely affect their pensions. Over the years, these negotiations have directly led to billions of dollars of pension plan contributions and other protections by the plan sponsors.
Moreover, PBGC has begun an initiative, called the Office of Risk Assessment, that combines aspects of both PIMS and the Early Warning Program and will enable the agency to better quantitatively analyze the claim risks associated with individual plan sponsors. PBGC has also changed its investment strategy and decreased its equity exposure to better shield itself from market risks. However, despite these efforts, the agency, unlike other federal insurance programs, ultimately lacks the authority to effectively protect itself, such as by adjusting premiums according to the risks it faces.

Over the long term, many variables, such as interest rates and equity returns, affect the level of PBGC claims. Moreover, large claims from a small number of bankruptcies constitute a majority of the risk that PBGC faces. Consequently, PBGC created the Pension Insurance Modeling System—a stochastic simulation model that quantifies the agency's risk and exposure over the long run. PIMS simulates the flows of claims that could develop under thousands of combinations of macroeconomic, company-specific, and plan-specific conditions. Rather than predicting future bankruptcies, PIMS is designed to generate probabilities for future claims. In recent annual reports, PBGC has discussed the methodologies used to develop PIMS. Furthermore, since as far back as 1998, PBGC has reported PIMS results that forecast the possibility of large deficits for the agency. For example, at fiscal year-end 2003—the most recent year for which PBGC has released an annual report—the model's simulations forecast about an 80 percent probability of a deficit by the year 2013, including a 10 percent probability of the deficit reaching $49 billion within this time frame. These forecasts, made at the end of fiscal year 2003, did not include the $14.7 billion in losses that PBGC experienced from terminated plans in fiscal year 2004. Therefore, PIMS appears to have understated the extent of PBGC's long-term deficit, given that by the end of fiscal year 2004, the agency's cumulative deficit had already grown to $23.3 billion.

The extent to which PIMS can accurately assess future claims is unclear. There is simply too much uncertainty about the future, with respect both to the performance of the economy and to that of the companies that sponsor defined benefit pension plans. It is difficult to accurately forecast which industries and companies will face economic pressures resulting in bankruptcies and PBGC claims. Furthermore, because PBGC's risk lies primarily in a relatively small number of large plans, the failure or survival of any single large plan may lead to significant variance between PBGC's actual claims and the projected claims reported by PBGC in its annual reports. Academic papers report varying rates of success in predicting bankruptcy with models that measure companies' cash flows or financial ratios, such as asset-to-liability ratios. One paper we reviewed reports that one model succeeded at a rate of 96 percent in predicting bankruptcies 1 year in advance and at a rate of 70 percent in predicting bankruptcies 5 years in advance. However, another paper concludes that no single bankruptcy prediction model proposed in the existing literature is entirely satisfactory at differentiating between bankrupt and nonbankrupt firms and that none of the models can reliably predict bankruptcy more than 2 years in advance.
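To illustrate the kind of stochastic simulation PIMS performs, the sketch below runs a highly simplified Monte Carlo experiment. The claim frequency, claim sizes, and premium income are illustrative assumptions, not PBGC's actual model inputs or distributions; only the starting deficit of $11.2 billion is taken from the fiscal year-end 2003 figure cited above.

```python
import random

def probability_of_deficit(trials: int = 10_000, years: int = 10,
                           start_deficit: float = 11.2,   # $ billions, fiscal year-end 2003
                           annual_premiums: float = 1.5,  # $ billions, assumed constant
                           claim_prob: float = 0.15,      # assumed chance of a large claim each year
                           claim_range: tuple = (5.0, 15.0)) -> float:
    """Estimate the probability that the program is still in deficit
    after the given horizon. A real model such as PIMS would draw
    correlated macroeconomic and firm-specific scenarios rather than
    the independent draws used here."""
    still_in_deficit = 0
    for _ in range(trials):
        deficit = start_deficit
        for _ in range(years):
            deficit -= annual_premiums          # premium income reduces the deficit
            if random.random() < claim_prob:    # a large underfunded plan terminates
                deficit += random.uniform(*claim_range)
        if deficit > 0:
            still_in_deficit += 1
    return still_in_deficit / trials

print(f"Estimated probability of deficit after 10 years: {probability_of_deficit():.0%}")
```

The point of such an exercise is the one the testimony makes: because a handful of large claims dominates the outcome, the distribution of results is wide, and a single large termination (or its absence) can move the projection substantially.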
PBGC’s Early Warning Program is designed to ensure that pensions are protected by negotiating agreements with certain companies engaging in business transactions or events that could adversely affect their pension plans. Companies of particular interest to the PBGC are those that are financially troubled, have underfunded pension plans, and are engaged in transactions such as restructurings, leveraged buyouts, spin-offs, and payments of extraordinary dividends, to name a few. The Early Warning Program proactively monitors financial information services and news databases to identify these potentially risky transactions in a timely fashion. If PBGC, after completing an extensive screening process, concludes that a transaction could result in a significant loss to the pension plan, the agency will seek to negotiate with the company to obtain protections for the plan. The Early Warning Program thus raises awareness of pension underfunding, may change corporate behavior, and may allow PBGC to prevent losses before they occur. Under the program, PBGC currently monitors about 3,200 pension plans covering about 29 million participants. Since 1992, the program has protected over 2 million pension plan participants through about 100 settlement agreements valued at over $18 billion (one settlement accounted for about $10 billion). Some recent representative cases include the 2004 settlement with Invensys that provided for over $175 million of additional cash contributions to the pension plan and the 2005 agreement with Crown Petroleum whereby the plan has been assumed by a financially sound parent company and $45 million of additional cash will be contributed to the pension plan. PBGC has recently undertaken an initiative to create an Office of Risk Assessment, which will focus on improving the agency’s ability to quantitatively model individual firms’ claim potential. According to PBGC, neither PIMS nor the Early Warning Program provides this information. For example, PIMS projects systemwide surpluses and deficits and is not designed to predict specific company results. Meanwhile, the Early Warning Program targets specific companies, but in a manner that is qualitative in nature. The Office of Risk Assessment, however, will attempt to combine the concepts of both tools and better attempt to quantitatively analyze the claim risk associated with individual companies. PBGC has consulted with other federal agencies, such as the Federal Deposit Insurance Corporation (FDIC), that have implemented similar approaches for assessing risk. In March 2003, FDIC established a Risk Analysis Center. Guided by FDIC’s National Risk Committee, which is composed of senior managers, the center is intended to “monitor and analyze economic, financial, regulatory and supervisory trends, and their potential implications for the continued financial health of the banking industry and the deposit insurance funds.” The center does so by bringing together FDIC bank examiners, economists, financial analysts, resolutions and receiverships specialists, and other staff members. These members represent several FDIC organizational units and use information from a variety of sources, including bank examinations and internal and external research. According to FDIC, the center serves as a clearinghouse for information, including monitoring and analyzing economic and financial developments and informing FDIC management and staff of these developments. 
FDIC officials believe that the center enables them to be proactive in identifying industry trends and developing comprehensive solutions to address significant risks to the banking industry.

In early 2004, PBGC adopted a new investment strategy to better manage its approximately $40 billion in assets. Although many factors that affect PBGC's financial health are beyond the agency's control, a well-crafted investment strategy is one of the few tools PBGC has to proactively manage the financial risks facing the pension insurance program. Under the new investment policy, PBGC is decreasing its asset allocation in equities from 37 percent as of fiscal year-end 2003 to within a range of 15 to 25 percent. Since many of the pension plans that PBGC insures are already heavily invested in equities, some pension and investment experts have said that the agency can create more financial stability by establishing an asset allocation that can hedge against losses in the equity markets. The reduction in equity exposure helps ensure that PBGC's own financial condition will not deteriorate to the same degree as the assets in the pension plans it insures; at the same time, PBGC continues to benefit when equity markets rise because the plans it insures will rise in value. In addition, PBGC states that this strategy moves the agency closer to the asset mix typically associated with private sector annuity providers. However, it is too soon to tell what effects this new investment strategy will have on PBGC's long-term financial condition.

Although PIMS and the Early Warning Program help PBGC assess and manage risk to some extent, PBGC lacks the regulatory authority available to other federal insurance programs, such as FDIC, to effectively protect itself from risk. Whereas PBGC's premiums are determined by statute, Congress provided FDIC the flexibility to set premiums and adjust them every 6 months based on its analysis of risk to the deposit insurance system. Furthermore, FDIC can reject applications to insure deposits at depository institutions when it determines that a depository institution carries too much risk to the Bank Insurance Fund. By contrast, PBGC must insure all plans eligible for PBGC's insurance coverage. Finally, FDIC may issue formal and informal enforcement actions for depository institutions with significant weaknesses or those operating in a deteriorated financial condition. When necessary, FDIC may oversee the recapitalization, merger, closure, or other resolution of the institution. By contrast, PBGC is limited to taking over a plan in poor financial condition to prevent it from accruing additional liabilities. PBGC has no authority to seize assets of the plan sponsor, who is responsible for adequately funding the plan.

The current financial challenges facing PBGC reflect, in part, the significant changes that have taken place in employer-sponsored pensions since the passage of ERISA in 1974. Given the decline in defined benefit plans over the last two decades, it is time to make changes in the rules governing the defined benefit system and to reexamine PBGC's role as an insurer. In recent years, an irreconcilable tension has arisen between PBGC's role as a social insurance program and its mandate to remain financially self-sufficient.
Unless something reverses the decline in defined benefit pension coverage, PBGC may have a shrinking plan and participant base to support the program in the future and may face a participant base concentrated in certain potentially more vulnerable industries. In this regard, effectively addressing the uncertainties associated with cash balance and other hybrid pension plans may help slow the decline in defined benefit plans. One of the underlying assumptions of the current insurance program has been that there would be a financially stable and growing defined benefit system. However, the current financial condition of PBGC and the plans that it insures threatens the retirement security of millions of Americans because termination of severely underfunded plans can significantly reduce the benefits participants receive. It also poses risks to the general taxpaying public, who ultimately could be made responsible for paying benefits that PBGC is unable to afford.

To help PBGC manage the risks to which it is exposed, Congress may wish to grant PBGC additional authorities to set premiums or to limit the guarantees on the benefits it pays for the plans it assumes. However, these changes would not be sufficient in themselves because the primary threat to PBGC and the defined benefit pension system lies in the failure of the funding rules to ensure that retirement benefit obligations are adequately funded. In any event, legislative changes to address the challenges facing PBGC should provide plan sponsors with incentives to increase plan funding, improve the transparency of plans' financial information, and provide a means to hold sponsors accountable for funding their plans adequately. Policymakers must also be careful to balance the need for changes in the current funding rules and premium structure against the possibility that any changes could expedite the exit of healthy plan sponsors from the defined benefit system while contributing to the collapse of firms with significantly underfunded plans.

The long-term financial health of PBGC and its ability to protect workers' pensions are inextricably bound to the underlying change in the nature of the risk that it insures and, implicitly, to the prospective health of the defined benefit system. Options that serve to revitalize the defined benefit system could stabilize PBGC's financial situation, although such options may be effective only over the long term. The greater challenge is to fundamentally reconsider the manner in which the federal government protects the defined benefit pensions of workers in this increasingly risky environment. We look forward to working with Congress on this crucial subject.

Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the Subcommittee may have. For further information, please contact Barbara Bovbjerg at (202) 512-7215 or George Scott at (202) 512-5932. Other individuals making key contributions to this testimony included David Eisenstadt, Benjamin Federlein, and Joseph Applebaum.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
More than 34 million workers and retirees in about 30,000 single-employer defined benefit plans rely on a federal insurance program managed by the Pension Benefit Guaranty Corporation (PBGC) to protect their pension benefits. However, the insurance program's long-term viability is in doubt, and in July 2003 we placed the single-employer insurance program on our list of high-risk programs with significant vulnerabilities for the federal government. In fiscal year 2004, PBGC's single-employer pension insurance program incurred a net loss of $12.1 billion, and the program's accumulated deficit increased to $23.3 billion from $11.2 billion a year earlier. Further, PBGC estimated that underfunding in single-employer plans exceeded $450 billion as of the end of fiscal year 2004. This testimony provides GAO's observations on (1) some of the structural problems that limit PBGC's ability to protect itself from risk and (2) steps PBGC has taken to forecast and manage the risks that it faces.

Existing laws governing pension funding and premiums have not protected PBGC from accumulating a significant long-term deficit and have exposed PBGC to "moral hazard" from the companies whose pension plans it insures. The pension funding rules, under the Employee Retirement Income Security Act (ERISA) and the Internal Revenue Code (IRC), were not designed to ensure that plans have the means to meet their benefit obligations in the event that plan sponsors run into financial distress. Meanwhile, in the aggregate, premiums paid by plan sponsors under the pension insurance system have not adequately reflected the financial risk to which PBGC is exposed. Accordingly, PBGC faces moral hazard, and defined benefit plan sponsors, acting rationally and within the rules, have been able to turn significantly underfunded plans over to PBGC, thus creating PBGC's current deficit.

Despite the challenges it faces, PBGC has proactively attempted to forecast and mitigate its risks. The Pension Insurance Modeling System, created by PBGC to forecast claim risk, has projected a high probability of future deficits for the agency. However, the accuracy of the projections produced by the model is unclear. Through its Early Warning Program, PBGC negotiates with companies that have underfunded pension plans and that engage in business transactions that could adversely affect their pensions. Over the years, these negotiations have directly led to billions of dollars of pension plan contributions and other protections by the plan sponsors. Moreover, PBGC has changed its investment strategy and decreased its equity exposure to better shield itself from market risks. However, despite these efforts, the agency, unlike other federal insurance programs, ultimately lacks the authority to effectively protect itself.
Currently located within the Department of the Treasury, the CDFI Fund was authorized in 1994 and has received appropriations totaling $225 million through fiscal year 1998. The 1995 Rescissions Act limited the Fund to 10 full-time equivalent (FTE) staff for fiscal years 1995 and 1996, but for fiscal year 1998, the Fund has an FTE ceiling of 35 staff. As of May 8, 1998, the Fund had 27 full-time and 2 part-time staff. The Fund's overall performance is subject to the Results Act. This act seeks to improve the management of federal programs and their effectiveness and efficiency by establishing a system for agencies to set performance goals and measure the results. Under the act, federal agencies must develop a strategic plan that covers a period of at least 5 years and includes a mission statement, long-term general goals, and strategies for reaching those goals. Agencies must report annually on the extent to which they are meeting their annual performance goals and identify the actions needed to reach or modify the goals they have not met. The Fund completed its final plan in September 1997 and is currently considering revisions to that plan.

While the CDFI Fund has established a system for measuring awardees' performance in the CDFI program, this system emphasizes activities over accomplishments and does not always include measures for key aspects of goals. In addition, baseline information that was available to the Fund seldom appears in the Fund's performance measurement schedule. A more comprehensive performance measurement system would provide better indicators for monitoring and evaluating the program's results.

The CDFI Fund's progress in developing performance goals and measures for awardees in the CDFI program is mixed. On the one hand, the Fund has entered into assistance agreements with most of the 1996 awardees. As the CDFI Act requires, these assistance agreements include performance measures that (1) the Fund negotiated with the awardees and (2) are generally based on the awardees' business plans. On the other hand, the Fund's performance goals and measures fall somewhat short of the standards for performance measures established in the Results Act. Although awardees' assistance agreements are not subject to the Results Act, the act establishes performance measurement standards for the federal government, including the CDFI Fund. In the absence of specific guidance on performance measures in the CDFI Act, we drew on the Results Act's standards for discussion purposes.

The assistance agreements called for under the CDFI Act require awardees to comply with multiple provisions, including the accomplishment of agreed-upon levels of performance by the final evaluation date, typically 5 years in the future. As of January 1998, the Fund had entered into assistance agreements with 26 of the 31 awardees for 1996. On the basis of our six case studies, we found that the Fund had negotiated performance goals that met the statutory requirements, established goals for awardees that matched the Fund's intended purpose, extensively involved the awardees in crafting their planned performance, and produced a flexible schedule for designing goals and measures. According to the Results Act, both activity measures, such as the number of loans made, and accomplishment measures, such as the number of new low-income homeowners, are useful.
However, the act regards accomplishment measures as more effective indicators of a program's results because such measures identify the impact of the activities performed. Our survey of CDFIs nationwide, including the 1996 awardees, and our review of six case study awardees' business plans showed that CDFIs use both types of measures to assess their progress toward meeting their goals. Yet our review of the 1996 awardees' assistance agreements revealed a far greater use of activity measures. As a result, the assistance agreements focus primarily on what the awardees will do, rather than on how their activities will affect the distressed communities. According to most of the case study awardees, difficulties in isolating and measuring the results of community development efforts and concerns about the effects of factors outside the awardees' control inhibited the awardees' use of accomplishment measures.

According to the Results Act, goals and measures should be related and clear. We found that most of the goals and measures were related; however, in some agreements, the measures did not address all key aspects of the goals. Finally, under the Results Act, clarity in performance measurement is best achieved through the use of specific units, well-defined terms, and baseline and target values and dates. While the measures in the agreements included most of these elements, they generally lacked baseline values and dates. Fund officials told us that they used baseline values and dates in negotiating the performance measures, but this information did not appear in the assistance agreements themselves. Therefore, without the information contained in awardees' files, it is difficult to determine the level of increase or contribution an investment is intended to achieve. Refining the awardees' goals and measures to meet the Results Act's standards will facilitate the Fund's assessment of the awardees' progress over time. The Fund is taking steps to avoid some of the initial shortcomings in future agreements and is seeking to enhance its expertise and staffing.

Although the Fund has developed reporting requirements for awardees to collect information for monitoring their performance, it lacks documented postaward monitoring procedures for assessing their compliance with their assistance agreements, determining the need for corrective actions, and verifying the accuracy of the information collected. In addition, the Fund has not yet established procedures for evaluating the impact of awardees' activities. The effectiveness of the Fund's monitoring and evaluation systems will depend, in large part, on the quality of the information collected through the required reports and on the Fund's assessment of awardees' compliance and of the impact of their activities. Primarily because of statutorily imposed staffing restrictions in fiscal years 1995 and 1996 and subsequent departmental hiring restrictions, the Fund has had a limited number of staff to develop and implement its monitoring and evaluation systems. In fiscal year 1998, it began to hire management and professional staff to develop monitoring and evaluation policies and procedures.

The Fund has established quarterly and annual reporting requirements for awardees in their assistance agreements. Each awardee is to describe its progress toward its performance goals, demonstrate its financial soundness, and maintain appropriate financial information.
However, according to an independent audit recently completed by KPMG Peat Marwick, the Fund lacks formal, documented postaward monitoring procedures to guide Fund staff in their oversight of awardees' activities. In addition, Fund officials indicated that they had not yet established a system to verify information submitted by awardees through the reporting processes. Fund staff told us that they had not developed postaward monitoring procedures because of the CDFI program's initial staffing limits. Now that additional staff are in place, they have begun to focus their attention on monitoring issues, including those identified by KPMG Peat Marwick.

The CDFI statute also specifies that the Fund is to annually evaluate and report on the activities carried out by the Fund and the awardees. According to the Conference Report for the statute, the annual reports are to analyze the leveraging of private assistance with federal funds and determine the impact of spending resources on the program's investment areas, targeted populations, and qualified distressed communities. To date, the Fund has published two annual reports, the second of which contains an estimate of the private funding leveraged by the CDFI funding. This estimate is based on discussions with CDFIs and CDFI trade association representatives, not on financial data collected from the awardees. Anecdotal information from three of our six case study awardees indicates that the CDFI funding has assisted them in leveraging private funding. One awardee estimated that the Fund's award generated more than three times its value in private investment.

In part because it has been only 15 months since the Fund made its first investment in a CDFI, information on performance in the CDFI program is not yet available for the kind of comprehensive evaluation of the program's impact that the Conference Report envisions. The two annual reports include anecdotes about individuals served by awardees and general descriptions of awardees' financial services and initiatives, but they do not evaluate the impact of the program on its investment areas, targeted populations, and qualified distressed communities. Satisfying this requirement will entail substantial research and analysis, as well as expertise in evaluation and time for the program's results to unfold. Fund officials have acknowledged that their evaluation efforts must be enhanced, and they have planned or taken actions toward improvement. For instance, the Fund has developed preliminary program evaluation options, begun hiring staff to conduct or supervise the research and evaluations, and revised the assistance agreements for the 1997 awardees to require that they annually submit a report to assist the Fund in evaluating the program's impact. However, because the Fund has not yet finished hiring its research and evaluation staff, it has not reached a final decision on what information it will require from the awardees to evaluate the program's impact. The Fund also has to determine how it will integrate the results of awardees' reported performance measures or recent findings from related research into its evaluation plans.

As is to be expected, reports of accomplishments in the CDFI program are limited and preliminary. Because most CDFIs signed their assistance agreements between March 1997 and October 1997, the Fund has just begun to receive the required quarterly reports, and neither the Fund nor we have verified the information in them.
Through February 1998, the Fund had received 41 quarterly reports from 19 CDFIs, including community development banks, community development credit unions, nonprofit loan funds, microenterprise loan funds, and community development venture capital funds. The different types of CDFIs support a variety of activities, whose results are measured against different types of performance measures. Given this variety, it is difficult to summarize the performance reported by the 19 CDFIs. To illustrate cumulative activity in the program to date, we compiled the data reported for the two most common measures—the total number of loans for both general and specific purposes and the total dollar value of these loans. According to these data, the 19 CDFIs made over 1,300 loans totaling about $52 million. In addition, the CDFIs reported providing consumer counseling and technical training to 480 individuals or businesses.

In the BEA program, as of January 1998, about 58 percent of the banks had completed the activities for which they received the awards, and the Fund had disbursed almost 80 percent of the $13.1 million awarded in fiscal year 1996. Despite this level of activity, the impact of the program on banks' investments in distressed communities is difficult to assess. Our case studies of five awardees and interviews with Fund officials indicate that although the BEA awards encouraged some banks to increase their investments, other regulatory or economic incentives were equally or more important for other banks. In addition, more complete data on some banks' investments are needed to ensure that the increases in investments in distressed areas rewarded by the BEA program are not being offset by decreases in other investments in these areas. The Fund has tried to measure the program's impact by estimating the private investments leveraged through the BEA awards. However, this estimate includes banks' existing, as well as increased, investments in distressed areas. Furthermore, the Fund cannot be assured that the banks' increased investments remain in place because it does not require banks to report any material changes in these investments. Although the CDFI statute does not require awardees to reinvest their awards in community development, banks have reported to the Fund that they have done so, thereby furthering the BEA program's objectives, according to the Fund. Finally, the Fund does not have a postaward evaluation system for assessing the impact of the program's investments.

Our analysis indicated that the impact of the BEA award varied at our five case study banks. One bank reported that it would not have made an investment in a CDFI without the prospect of receiving an award from the Fund. In addition, a CDFI Fund official told us that some CDFIs marketed the prospect of receiving a BEA award as an incentive for banks to invest in them. We found, however, that the prospect of an award did not influence other banks' investment activity. For example, two banks received awards totaling over $324,000 for increased investments they had made or agreed to make before the fiscal year 1996 awards were made. Banks have multiple incentives for investing in CDFIs and distressed areas. It is therefore difficult to isolate the impact of the BEA award from the effects of other incentives; however, the receipt of a BEA award is predicated on a bank's increasing its investments in community development.
Discussions with our five case study banks indicated, however, that regulatory and economic incentives have a greater influence on these banks' investments than the prospect of a BEA award. A reason that the banks frequently cited for investing in CDFIs and distressed areas was the need to comply with the Community Reinvestment Act (CRA). Economic considerations also motivated the banks. One bank said that such investments lay the groundwork for developing new markets, while other banks said that the investments help them maintain market share in areas targeted by the BEA program and compete with other banks in these areas. Two banks cited improved community relations as a reason for their investments. Some banks indicated that, compared with these regulatory and economic incentives, the BEA award provides a limited incentive, especially since it is relatively small and comes after a bank has already made at least an initial investment.

According to Fund officials, a small portion of the 1996 awardees do not maintain the geographic data needed to determine whether any new investments in distressed areas are coming at the expense of other investments—particularly agricultural, consumer, and small business loans—in such areas. Concerned about the validity of the net increases in investments in distressed areas reported by awardees, the Fund required the 1996 awardees that did not maintain such data to certify that, to the best of their knowledge, they had not decreased investments in distressed areas that were not linked to their BEA award. While most banks maintain the data needed to track their investments by census tract and can thus link their investments with distressed areas, a few do not do so for all types of investments.

In an attempt to measure the impact of the BEA program, the Fund has reported that awards of $13.1 million in 1996 leveraged over $125 million in private investment—a leveraging ratio of almost 10 to 1. This estimate includes banks' existing investments in CDFIs and direct investments in distressed areas. When we included only the banks' new direct investments, we calculated a leveraging ratio of 7 to 1.

The Fund does not require awardees to notify it of material changes in their investments after awards have been made. Therefore, it does not know how long investments made under the program remain in place. We found, for example, that a CDFI in which one of our case study banks had invested was dissolved several months after the bank received a BEA award. The CDFI later repaid a portion of the bank's total investment. Because the Fund does not require banks to report their postaward activity, the Fund was not aware of this situation until we brought it to the attention of Fund officials. After hearing of the situation, a Fund official contacted the awardee and learned that the awardee plans to reinvest the funds in another CDFI. Even though this case has been resolved, Fund officials do not have a mechanism for determining whether investments made under the program remain in place. The CDFI statute does not require awardees to reinvest their awards in community development; however, awardees have reported to the Fund, and we found through our case studies, that many of them are reinvesting at least a portion of their awards in community development. Reinvestment in community development is consistent with the goals of the BEA program.
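The leveraging ratios cited above follow from simple division; the sketch below reproduces that arithmetic. Note that the dollar figure for new direct investments is our own back-calculation from the reported 7 to 1 ratio, not a number the Fund published.

```python
def leveraging_ratio(private_investment_millions: float, awards_millions: float) -> float:
    """Private investment attributed to the program per dollar of federal awards."""
    return private_investment_millions / awards_millions

awards = 13.1  # fiscal year 1996 BEA awards, in millions of dollars

# The Fund's estimate counts existing as well as new investments.
print(round(leveraging_ratio(125.0, awards), 1))  # about 9.5, i.e., "almost 10 to 1"

# A 7 to 1 ratio implies roughly 7 * $13.1 million, or about $92 million,
# in new direct investments (our inference; the testimony reports only the ratio).
print(round(7 * awards, 1))
```

The gap between the two ratios illustrates the measurement problem the testimony describes: counting preexisting investments overstates how much private money the awards actually induced.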
While the Fund initially established reporting requirements for the 1996 awardees designed to assess the impact of their investments in CDFIs and distressed communities, it discontinued these requirements in 1997 when it found that the accomplishments reported by awardees could not be linked to outcomes in their communities. As a result, the Fund has no system in place for determining the program's impact. As previously noted, accomplishments in community development are difficult to isolate and measure. For example, the effects of investment in community development may not be readily distinguishable from other influences and may not be observable for many years. Nevertheless, the banks we visited are using a variety of measures to assess the effects of their investments, some of which track accomplishments. Such measures include loan repayment rates and reports on the occupancy rates and financial performance of housing projects financed by the banks. However, the awardees are no longer required to report this information to the Fund.

The CDFI Fund has more work to do before its strategic plan can fulfill the requirements of the Results Act. Though the plan covers the six basic elements required by the Results Act, these elements are generally not as specific, clear, and well linked as the act prescribes. However, the Fund is not unique in struggling to develop its strategic plan. We have found that federal agencies generally require sustained effort to develop the dynamic strategic planning processes envisioned by the Results Act. The difficulties that the Fund has encountered—in setting clear and specific strategic and performance goals, coordinating crosscutting programs, and ensuring the capacity to gather and use performance and cost data—have confronted many other federal agencies as well.

Under the Results Act, an agency's strategic plan must contain (1) a comprehensive mission statement; (2) agencywide strategic goals and objectives for all major functions and operations; (3) the strategies, skills, technologies, and various resources needed to achieve the goals and objectives; (4) the relationship between the strategic goals and objectives and the annual performance goals; (5) an identification of key factors, external to the agency and beyond its control, that could significantly affect the achievement of the strategic goals and objectives; and (6) a description of how program evaluations were used to establish or revise strategic goals and objectives, along with a schedule for future program evaluations. The Office of Management and Budget (OMB) has provided agencies with additional guidance on developing their strategic plans.

In its strategic plan, the Fund states that its mission is "to promote economic revitalization and community development through investment in and assistance to community development financial institutions (CDFIs) and through encouraging insured depository institutions to increase lending, financial services and technical assistance within distressed communities and to invest in CDFIs." Overall, the Fund's mission statement generally meets the requirements established in the Results Act by explicitly referring to the Fund's statutory objectives and indicating how these objectives are to be achieved through two core programs. Each agency's strategic plan is also to set out strategic goals and objectives that delineate the agency's approach to carrying out its mission.
The Fund’s strategic plan contains 5 goals and 13 objectives, with each objective clearly related to a specific goal. However, OMB’s guidance suggests that strategic goals and objectives be stated in a manner that allows a future assessment to determine whether they were or are being achieved. Because none of the 5 goals (e.g. to strengthen and expand the national network of CDFIs) and 13 objectives (e.g. increase the number of organizations in training programs) in the strategic plan include baseline dates and values, deadlines, and targets, the Fund’s goals and objectives do not meet this criterion. The act also requires that an agency’s strategic plan describe how the agency’s goals and objectives are to be achieved. OMB’s guidance suggests that this description address the skills and technologies, as well as the human, capital, information, and other resources, needed to achieve strategic goals and objectives. The Fund’s plan shows mixed results in meeting these requirements. On the positive side, it clearly lists strategies for accomplishing each goal and objective—establishing better linkages than the strategic plans of agencies that simply listed objectives and strategies in groups. On the other hand, the strategies themselves consist entirely of one-line statements. Because they generally lack detail, most are too vague or general to permit an assessment of whether their accomplishment will help achieve the plan’s strategic goals and objectives. For example, it is unclear how the strategy of “emphasizing high quality standards in implementing the CDFI program” will specifically address the objective of “strengthening and expanding the national network of CDFIs.” The Fund’s strategic plan lists 22 performance goals, which are clearly linked to specific strategic goals. However, the performance goals, like the Fund’s strategic goals and objectives, generally lack sufficient specificity, as well as baseline and end values. These details would make the performance goals more tangible and measurable. For example, one performance goal is to “increase the number of applicants in the BEA program.” This goal would be more useful if it specified the baseline number of applicants and projected an increase over a specified period of time. Also, some performance goals are stated more as strategies than as desired results. For example, it is not readily apparent how the performance goal of proposing legislative improvements to the BEA program will support the related strategic goal of encouraging investments in CDFIs by insured depository institutions. The Fund’s strategic plan only partially meets the requirement of the Results Act and of OMB’s guidance that it describe key factors external to the Fund and beyond its control that could significantly affect the achievement of its objectives. While the plan briefly discusses external factors that could materially affect the Fund’s performance, such as “national and regional economic trends,” these factors are not linked to specific strategic goals or objectives. The Results Act defines program evaluations as assessments, through objective measurement and objective analysis, of the manner and extent to which federal programs achieve intended objectives. Although the Fund’s plan does discuss various evaluation options, it does not discuss the role of program evaluations in either setting or measuring progress against all strategic goals. 
Also, the list of evaluation options does not describe the general scope or methodology for the evaluations, identify the key issues to be addressed, or indicate when the evaluations will occur.

Our review of the Fund’s strategic plan also identified other areas that could be improved. For instance, OMB’s guidance on the Results Act directs that federal programs contributing to the same or similar outcomes should be coordinated to ensure that their goals are consistent and their efforts mutually reinforcing. The Fund’s strategic plan does not explicitly address the relationship of the Fund’s activities to similar activities in other agencies or indicate whether or how the Fund coordinated with other agencies in developing its strategic plan. Also, the capacity of the Fund to provide reliable information on the achievement of its strategic objectives is at this point somewhat unclear. Specifically, the Fund has not developed its strategic plan sufficiently to identify the types and the sources of data needed to evaluate its progress in achieving its strategic objectives. Moreover, according to a study prepared by KPMG Peat Marwick, the Fund has yet to set up a formal system, including procedures, to evaluate, continuously monitor, and improve the effectiveness of the management controls associated with the Fund’s programs.

In closing, Mr. Chairman, our preliminary review has identified several opportunities for the Fund to improve the effectiveness of the CDFI and BEA programs and of its strategic planning effort. In our view, these opportunities exist, in part, because the Fund is new and is experiencing the typical growing pains associated with setting up an agency—particularly one that has the relatively complex and long-term mission of promoting economic revitalization and community development in low-income communities. In addition, staffing limitations have delayed the development of monitoring and evaluation systems. Recently, however, the Fund has hired several senior staff—including a director; two deputy directors, one of whom also serves as the chief financial officer; an awards manager; a financial manager; and program managers—and is reportedly close to hiring an evaluations director. While it is too early to assess the impact of filling these positions, the new managers have initiated actions to improve the programs and the strategic plan. Our forthcoming report may include recommendations or options to further improve the operations of the CDFI Fund.

We provided a copy of a draft of this testimony to the Fund for its review and comment. The Fund generally agreed with the facts presented and offered several clarifying comments, which we incorporated. We performed this review from September 1997 through May 1998 in accordance with generally accepted government auditing standards.

Mr. Chairman, this concludes our testimony. We would be pleased to answer any questions that you or Members of the Committee may have at this time.
GAO discussed the results of its ongoing review of the administration of the Community Development Financial Institutions (CDFI) Fund, focusing on the first year's performance of the CDFI and the Bank Enterprise Award (BEA) programs and opportunities for improving their effectiveness. GAO noted that: (1) as of January 1998, the Fund had entered into assistance agreements with 26 of the 31 CDFIs that received awards in 1996; (2) these agreements include performance goals and measures that were based on the business plans submitted by awardees in their application packages and negotiated between the Fund and the awardees, as the CDFI Act requires; (3) GAO found that the performance measures in the assistance agreements generally assess activities rather than the accomplishments reflecting the activities' results; (4) according to Fund officials and CDFIs in GAO's case studies, this emphasis on activity measures is due, in part, to difficulties in isolating and assessing the results of community development initiatives, which may not be observable for many years and may be subject to factors outside the awardees' control; (5) GAO further found that although the performance measures in the assistance agreements are generally related to specific goals, they do not always address the key aspects of the goals, and most assistance agreements lack baseline data that would facilitate tracking progress over time; (6) although the Fund has disbursed about 80 percent of the fiscal year 1996 BEA award funds, it is difficult to determine the extent to which the program has encouraged the 38 awardees to increase their investments in distressed communities; (7) GAO's case studies of five awardees and interviews with Fund officials indicate that although the prospect of receiving a BEA award prompted some banks to increase their investments, it had little or no effect on other banks; (8) GAO found that, in general, other regulatory or economic incentives exerted a stronger influence on banks' investments than the BEA award; (9) in addition, some banks do not collect all of the data on their activities needed to ensure that increases in investments under the BEA program are not being offset by decreases in other investments in these distressed areas; (10) the CDFI Fund's strategic plan contains all of the elements required by the Government Performance and Results Act and the Office of Management and Budget's (OMB) associated guidance, but these elements generally lack the clarity, specificity, and linkage with one another that the act envisioned; and (11) although the plan identifies key external factors that could affect the Fund's mission, it does not relate these factors to the Fund's strategic goals and objectives and does not indicate how the Fund will take the factors into account when assessing awardees' progress toward goals.
Large banking organizations typically establish ongoing relationships with their corporate customers and evaluate the overall profitability of these relationships. They use company-specific information gained from providing certain products and services—such as credit or cash management—to identify additional products and services that customers might purchase. This practice, known as “relationship banking,” has been common in the financial services industry for well over a century. In recent years, as the legal and regulatory obstacles that limited banking organizations’ abilities to compete in securities and insurance activities have been eased, some large banking organizations have sought to expand the range of products and services they offer customers. In particular, some commercial banks have sought to decrease their reliance on the income earned from credit products, such as corporate loans, and to increase their reliance on fee-based income by providing a range of priced services to their customers.

Federal Banking Regulators

The Federal Reserve and OCC are the federal banking regulators charged with supervising and regulating large commercial banks. The Federal Reserve has primary supervisory and regulatory responsibilities for bank holding companies and their nonbank and foreign subsidiaries and for state-chartered banks that are members of the Federal Reserve System and their foreign branches and subsidiaries. The Federal Reserve also has regulatory responsibilities for transactions between member banks and their affiliates. OCC has primary supervisory and regulatory responsibilities for the domestic and foreign activities of national banks and their subsidiaries. OCC also has responsibility for administering and enforcing standards governing transactions between national banks and their affiliates. Among other activities, the Federal Reserve and OCC conduct off-site reviews and on-site examinations of large banks to provide periodic analysis of financial and other information, provide ongoing supervision of their operations, and determine compliance with banking laws and regulations. Federal Reserve and OCC examinations are intended to assess the safety and soundness of large banks and identify conditions that might require corrective action.

Congress added section 106 to the Bank Holding Company Act in 1970 to address concerns that an expansion in the range of activities permissible for bank holding companies might give them an unfair competitive advantage because of the unique role their bank subsidiaries served as credit providers. Section 106 makes it unlawful, with certain exceptions, for a bank to extend credit or furnish any product or service, or vary the price of any product or service (the “tying product”), on the “condition or requirement” that the customer obtain some additional product or service from the bank or its affiliate (the “tied product”). Under section 106, it would be unlawful for a bank to provide credit (or to vary the terms for credit) on the condition or requirement that the customer obtain some other product from the bank or an affiliate, unless that other product was a traditional bank product. Thus, it would be unlawful for a bank to condition the availability or pricing of new or renewal credit on a requirement that the borrower purchase a nontraditional bank product from the bank or an affiliate.
In contrast, section 106 does not require a bank to extend credit or provide any other product to a customer, as long as the bank’s decision was not based on the customer’s failure to satisfy a condition or requirement prohibited by section 106. For example, it would be lawful for a bank to deny credit to a customer on the basis of the customer’s financial condition, financial resources, or credit history, but it would be unlawful for a bank to deny credit because the customer failed to purchase underwriting services from the bank’s affiliate. Section 106 does not prohibit a bank from cross-marketing products that are not covered by the “traditional banking product” exemption or from granting credit or providing any other product or service to a customer based solely on the hope that the customer will obtain additional products from the bank or its affiliates in the future, provided that the bank does not require the customer to purchase an additional product. Also, section 106 generally does not prohibit a bank from conditioning its relationship with a customer on the total profitability of its relationship with the customer.

Section 106 authorizes the Federal Reserve to make exceptions that are not contrary to the purposes of the tying prohibitions. The Federal Reserve has used this authority to allow banks to offer broader categories of packaging arrangements where it has determined that these arrangements benefit customers and do not impair competition. In 1971, the Federal Reserve adopted a regulation that extended antitying rules to bank holding companies and their nonbank affiliates and approved a number of nonbanking activities that these entities could engage in under the Bank Holding Company Act. Citing the competitive vitality of the markets in which nonbanking companies generally operate, in February 1997, the Federal Reserve rescinded this regulatory extension. At the same time, the Federal Reserve expanded the traditional bank products exception to include traditional bank products offered by nonbank affiliates.

In the mid-1990s, the Board also added two regulatory safe harbors. First, the Board granted a regulatory safe harbor for combined-balance discount packages, which allowed a bank to vary the consideration for a product or package of products—based on a customer’s maintaining a combined minimum balance in certain products—as long as the bank offers deposits, the deposits are counted toward the combined balance, and the deposits count at least as much as nondeposit products toward the minimum balance. Furthermore, according to the Board, under the combined-balance safe harbor, the products included in the combined-balance program may be offered by either the bank or an affiliate, provided that the bank specifies the products and the package is structured in a way that does not, as a practical matter, obligate a customer to purchase nontraditional bank products to obtain the discount. Second, the Board granted a regulatory safe harbor for foreign transactions. This safe harbor provides that the antitying prohibitions of section 106 do not apply to transactions between a bank and a customer if the customer is a company that is incorporated, chartered, or otherwise organized outside of the United States and has its principal place of business outside of the United States, or if the customer is an individual who is a citizen of a country other than the United States and is not resident in the United States.
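To make the mechanics of the combined-balance safe harbor concrete, the sketch below encodes its two conditions as described above: the package must include bank deposits counted toward the combined balance, and deposit dollars must count at least as much as nondeposit dollars toward the minimum. This is a minimal illustration of the rule's shape, not regulatory logic; the product names, balances, and weights are hypothetical assumptions.

    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        balance: float    # customer's balance held in the product
        is_deposit: bool  # True for deposit products offered by the bank
        weight: float     # how much each dollar counts toward the minimum

    def qualifies_for_safe_harbor(package: list[Product], minimum: float) -> bool:
        """Two conditions sketched above: the package counts bank deposits toward
        the combined balance, and each deposit dollar counts at least as much as
        each nondeposit dollar toward the minimum balance."""
        deposits = [p for p in package if p.is_deposit]
        others = [p for p in package if not p.is_deposit]
        if not deposits:
            return False  # deposits must be offered and counted
        if min(p.weight for p in deposits) < max((p.weight for p in others), default=0.0):
            return False  # deposits must count at least as much as nondeposit products
        return sum(p.balance * p.weight for p in package) >= minimum

    # Hypothetical package: two deposit products plus a brokerage account that
    # counts at half weight toward a $100,000 combined minimum.
    package = [
        Product("checking", 40_000, True, 1.0),
        Product("savings", 35_000, True, 1.0),
        Product("brokerage", 50_000, False, 0.5),
    ]
    print(qualifies_for_safe_harbor(package, minimum=100_000))  # True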
On August 29, 2003, the Board published for public comment its proposed interpretation and supervisory guidance concerning section 106. In this proposed interpretation, the Federal Reserve noted that determining whether a violation of section 106 occurred requires a detailed understanding of the facts underlying the transaction in question. The Federal Reserve also noted what it considers to be the two key elements of a violation of section 106: (1) the arrangement must involve two or more separate products, the customer’s desired product(s) and one or more separate tied products; and (2) the bank must force the customer to obtain (or provide) the tied product(s) from (or to) the bank or an affiliate in order to obtain the customer’s desired product(s) from the bank.

A transaction does not violate section 106 unless it involves two separate products or services. For example, a bank does not violate section 106 by requiring a prospective borrower to provide the bank specified collateral to obtain a loan or by requiring an existing borrower to post additional collateral as a condition for renewing a loan. Assuming two products or services are involved, the legality of the arrangement depends on, among other things, which products and services are involved and in what combinations. It would be unlawful for a bank to condition the availability of corporate credit on a borrower’s purchase of debt underwriting services from its affiliate, because a bank cannot condition the availability of a bank product on a customer’s purchase of a nontraditional product or service.

According to the Board’s proposed interpretation, a bank can legally condition the availability of a bank product, such as credit, on the customer’s selection from a mix of traditional and nontraditional products or services—a mixed-product arrangement—only if the bank offered the customer a “meaningful choice” of products that includes one or more traditional bank products and did not require the customer to purchase any specific product or service. For example, according to the Federal Reserve, a bank could legally condition the availability of credit on a customer’s purchase of products from a list of products and services that includes debt underwriting and cash management services, provided that this mixed-product arrangement contained a meaningful option to satisfy the bank’s condition solely through the purchase of the traditional bank products included in the arrangement. However, it would be a violation of section 106 for a bank to condition the availability of credit on a mixed-product arrangement that did not contain a meaningful option for the customer to satisfy the bank’s condition solely through the purchase of a traditional bank product.

When a bank offers a customer a low price on credit, the arrangement might or might not be a violation of law. If a bank reduced the cost of credit on the condition that the customer purchase nontraditional bank products or services offered by its investment affiliate, the arrangement would violate section 106. However, if a bank offered a low price on credit to attract additional business but did not condition the availability of that price on the purchase of a prohibited product, it would not violate section 106. Additionally, if a reduced interest rate were to constitute underpricing of a loan, such a transaction, depending on the circumstances, could violate section 23B of the Federal Reserve Act of 1913, which we discuss later in this section.
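The Board's two-element test and the "meaningful choice" condition lend themselves to a simple illustration. The sketch below is a hypothetical compliance screen compressed from the discussion above, not the regulators' logic; the product names and the set of traditional products are our assumptions.

    # Illustrative set of traditional bank products; not a regulatory list.
    TRADITIONAL = {"deposit account", "term loan", "letter of credit", "cash management"}

    def passes_meaningful_choice_screen(offered: set[str], mandated: set[str]) -> bool:
        """Simplified screen: the bank may not mandate any specific product, and
        the offered mix must let the customer satisfy the condition solely
        through traditional bank products."""
        if mandated:
            return False  # forcing a specific tied product fails the screen
        return bool(offered & TRADITIONAL)

    # Pattern described above as lawful: a mix containing a traditional-only path.
    print(passes_meaningful_choice_screen(
        {"cash management", "debt underwriting"}, set()))      # True
    # Pattern described above as unlawful: credit tied to debt underwriting.
    print(passes_meaningful_choice_screen(
        {"debt underwriting"}, {"debt underwriting"}))         # False

As the surrounding discussion makes clear, the real test turns on whether the traditional-only option is a meaningful one in practice, a judgment a screen this simple cannot capture.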
Whether the arrangement constitutes an unlawful tie under section 106 also depends on whether a condition or requirement actually exists and on which party imposes the condition or requirement. Determining the existence of either element can be difficult. The question of whether a condition or requirement exists is particularly difficult because of uncertainties about how to interpret that aspect of the prohibition. According to the Board’s proposal, section 106 applies if two requirements are met: “(1) a condition or requirement exists that ties the customer’s desired product to another product; and (2) this condition or requirement was imposed or forced on the customer by the bank.” Thus, according to the Board’s proposal, if a condition or requirement exists, further inquiry may be necessary to determine whether the condition or requirement was imposed or forced on the customer by the bank: “If the condition or requirement resulted from coercion by the bank, then the condition or requirement violates section 106, unless an exemption is available for the transaction.” This interpretation is not universally accepted, however. As the Board’s proposal noted, some courts have held that a tying arrangement violates section 106 without a showing that the arrangement resulted from any type of coercion by the bank. Uncertainties about the proper interpretation of the “condition or requirement” provision of section 106 have led to disagreement over the circumstances that violate section 106.

It has been suggested that changes in financial markets that have occurred since the enactment of section 106, particularly a decreased corporate reliance on commercial bank loans, also are relevant in considering whether banks currently can base credit decisions on a “condition or requirement” that corporate customers buy other services. At the end of 1970, according to the Federal Reserve’s Flow of Funds data, bank loans accounted for about 24 percent of the total liabilities of U.S. nonfarm, nonfinancial corporations. At the end of 2002, bank loans accounted for about 14 percent of these liabilities.

Because section 106 applies only to commercial and savings banks, investment banks and insurance companies, which compete in credit markets with banks, are not subject to these tying restrictions. Thus, under section 106, a bank’s nonbank affiliate legally could condition the availability of credit from that nonbank affiliate on a customer’s purchase of debt underwriting services. Where a transaction involves a bank as well as one or more affiliates, uncertainties could exist over whether the affiliate or the bank imposed a condition or requirement. It should be noted, however, that all of these financial institutions are subject to the more broadly applicable antitrust laws, such as the Sherman Act, that prohibit anticompetitive practices, including tying arrangements. In addition, under section 106 it is lawful for bank customers to initiate ties. For example, a customer could use its business leverage to obtain favorable credit terms or require a bank to extend a corporate loan as a condition for purchasing debt underwriting services.

Section 23B requires that transactions involving a bank and its affiliates, including those providing investment-banking services, be on market terms.
Although section 106 generally prohibits changing the price for credit on the condition that the customer obtain some other services from the bank or its affiliates, section 23B prohibits setting the price for credit at a below-market rate that would reduce the bank’s income for the benefit of its affiliate. Banking regulators have noted that pricing credit at below-market rates could also be an unsafe and unsound banking practice independent of whether the practice violates section 23B specifically.

Some corporate borrowers alleged that commercial banks unlawfully tie the availability of credit to the borrower’s purchase of other financial services, including debt underwriting services from their banks’ investment affiliates. Because banks, in certain circumstances, may legally condition the availability of credit on the borrower’s purchase of other products, some of these allegations of unlawful tying could be invalid. Substantiating charges of unlawful tying, if it occurs, can be difficult because, in most cases, credit negotiations are conducted orally and thus generate no documentary evidence to support borrowers’ allegations. Thus, banking regulators may have to obtain other forms of indirect evidence to assess whether banks unlawfully tie products and services. Although customer information could have an important role in helping regulators enforce section 106, regulators do not have a specific mechanism to solicit information from corporate bank customers on an ongoing basis.

The results of a 2003 survey of financial executives, interviews that we conducted with corporate borrowers, and several newspaper articles suggest that commercial banks frequently tie access to credit to the purchase of other financial services, including bond underwriting, equity underwriting, and cash management. The Association for Financial Professionals reported that some respondents to its survey of financial executives at large companies (those with revenues greater than $1 billion) claimed to have experienced the denial of credit or a change in terms after they did not award a commercial bank their bond underwriting business. In our interviews with corporate borrowers, one borrower said that a commercial bank reduced the borrower’s amount of credit by $70 million when the borrower declined to purchase debt underwriting services from the bank’s investment affiliate. In addition, several newspapers and other publications have also reported instances where corporate borrowers have felt pressured by commercial banks to purchase products prohibited under section 106 for the customers to maintain their access to credit. In these reports, corporate borrowers have described negotiations where, in their views, bankers strongly implied that future lending might be jeopardized unless they agreed to purchase additional services, such as underwriting, from the banks’ investment affiliates. However, none of these situations resulted in the corporate borrower complaining to one of the banking regulators.

In its Special Notice to its members, NASD also noted the Association for Financial Professionals survey. The notice cautioned that NASD regulations require members to conduct business in accordance with just and equitable principles of trade and that it could be a violation of these rules for any member to aid and abet a violation of section 106 by an affiliated commercial bank. NASD is conducting its own investigation into these matters.
At the time of our review, NASD had not publicly announced any results of its ongoing investigation.

Corporate borrowers might be unaware of the subtle distinctions that make some tying arrangements lawful and others unlawful. Borrowers, officials at commercial banks, and banking regulators said that some financial executives might not be familiar with the details of section 106. For example, some borrowers we interviewed thought that banks violated the tying law when they tied the provision of loan commitments to borrowers’ purchases of cash management services. However, such arrangements are not unlawful because, as noted earlier, section 106 permits banks to tie credit to these and other traditional bank services. The legality of tying arrangements might also hinge on the combinations of products that borrowers are offered. For example, recently proposed Federal Reserve guidance suggested that a bank could legally condition the availability of credit on the purchase of other products and services, including debt underwriting, if the customer has a meaningful choice of satisfying the condition solely through the purchase of one or more additional traditional bank products.

Corporate borrowers said that because credit arrangements are made orally, they lack the documentary evidence to demonstrate unlawful tying arrangements in those situations where they believe it has occurred. Without such documentation, borrowers might find it difficult to substantiate such claims to banking regulators or to seek legal remedies. Moreover, with few exceptions, complaints have not been brought to the attention of the banking regulators. Some borrowers noted that they are reluctant to report their banks’ alleged unlawful tying practices because they lack documentary evidence of such arrangements and are uncertain about which arrangements are lawful or unlawful under section 106. Borrowers also noted that fear of adverse consequences for their companies’ future access to credit or for their individual careers contributed to some borrowers’ reluctance to file formal complaints. Because documentary evidence demonstrating unlawful tying might not be available in bank records, regulators might have to look for other forms of indirect evidence, such as testimonial evidence, to assess whether banks unlawfully tie products and services.

The guidance that the federal banking regulators have established for their regular examinations of banks calls for examiners to be alert to possible violations of law, including section 106. These examinations generally focus on specific topics based on the agencies’ assessments of the banks’ risk profiles, and tying is one of many possible topics. In response to recent allegations of unlawful tying at large commercial banks, the Federal Reserve and OCC conducted a special targeted review of antitying policies and procedures at several large commercial banks and their holding companies. The banking regulators focused on antitying policies and procedures; interviewed bank managers responsible for compliance, training, credit pricing, and internal audits; and reviewed credit pricing policies, relationship banking policies, and the treatment of customer complaints regarding tying. The review did not include broadly based testing of transactions, which could have included interviews with corporate borrowers. The regulators said that they met with officials and members of a trade group representing corporate financial executives.
The banking regulators found that the banks covered in the review generally had adequate controls in place. With limited exceptions, they did not detect any unlawful combinations or questionable transactions. The examiners did, however, identify variation among the banks in interpreting section 106, some of which was not addressed in the regulatory guidance then available. As a result of the findings of the special targeted review, on August 29, 2003, the Federal Reserve released for public comment proposed guidance to clarify the interpretation of section 106 for examiners, bankers, and corporate borrowers. Federal Reserve officials said that they hoped the guidance would encourage customers to come forward if they have complaints.

As part of their routine examination procedures, the Federal Reserve and OCC provide instructions for determining compliance with section 106. During the course of these examinations, examiners review banks’ policies, procedures, controls, and internal audits. Exam teams assigned to the largest commercial banks review those banks continually, and in several cases the teams are physically located at the bank throughout the year. The Federal Reserve and OCC expect examiners to be alert to possible violations of section 106 of the Bank Holding Company Act Amendments of 1970 and section 23B of the Federal Reserve Act and to report any evidence of possible unlawful tying for further review. Regular bank examinations in recent years have not identified any instances of unlawful tying that led to enforcement actions. Federal Reserve officials told us, however, that if an examiner had tying-related concerns about a transaction that the bank’s internal or external legal counsel had reviewed, examiners deferred to the bank’s legal analysis and verified that the bank took any appropriate corrective actions. Federal Reserve officials also said that legal staffs at the Board and the District Reserve Banks regularly receive and answer questions from examiners regarding the permissibility of transactions.

In a 1995 bulletin, OCC reminded national banks of their obligations under section 106 and advised them to implement appropriate systems and controls that would promote compliance with section 106. Along with examples of lawful tying arrangements, the guidance also incorporated suggested measures for banks’ systems and controls and their audit and compliance programs. Among the suggested measures were training bank employees about the tying provisions, providing relevant examples of prohibited practices, and reviewing customer files to determine whether any extension of credit was conditioned unlawfully on obtaining another nontraditional product or service from the bank or its affiliates.

In addition to reviewing banks’ policies, procedures, and internal controls, examiners also review aggregate data on a bank’s pricing of credit products. OCC officials noted that instances of unlawfully priced loans or credit extended to borrowers who were not creditworthy could alert examiners to potential unlawful tying arrangements. However, Federal Reserve officials pointed out that examiners typically do not focus on a bank’s pricing of individual transactions because factors that are unique to the bank and its relationship with the customer affect individual pricing decisions. They said that examiners conduct additional analyses only if there is an indication of a potential problem within the aggregated data.
In recent years, banking regulators’ examination strategies have moved toward a risk-based assessment of a bank’s policies, procedures, and internal controls, and away from the former process of transaction testing. Under the risk-based approach, the activities judged by the regulatory agencies to pose the greatest risk to a bank receive the most scrutiny by examiners, and transaction testing is generally intended to validate the use and effectiveness of risk-management systems. The effectiveness of this examination approach, however, depends on the regulators’ awareness of risk. In the case of tying, the regulators are confronted with the disparity between frequent allegations about tying practices and few, if any, formal complaints. Further, the examiners generally would not contact customers as part of the examinations and thus would have only limited access to information about transactions or the practices that banks employ in managing their relationships with customers.

In response to the controversy about allegations of unlawful tying, in 2002 the Federal Reserve and OCC conducted joint reviews targeted at assessing antitying policies and procedures at large commercial banks that, collectively, are the dominant syndicators of large corporate credits. The Federal Reserve and OCC exam teams found limited evidence of potentially unlawful tying in the course of the special targeted review. For example, one bank’s legal department uncovered one instance in which an account officer proposed an unlawful conditional discount. The officer brought this to the attention of the legal department after attending antitying training. The customer did not accept the offer, and no transaction occurred. In addition, the teams noted that some banks’ interpretations of section 106 permitted activities that the teams questioned; one bank reversed a transaction in response to Federal Reserve or OCC questions.

Attorneys on the exam teams reviewed documents regarding lawsuits alleging unlawful tying, but they found that none of the suits contained allegations that warranted any follow-up. For example, they found that some of the suits involved customers who were asserting violations of section 106 as a defense to the bank’s efforts to collect on loans and that some of the ties alleged in the suits involved ties to traditional bank products, which are exempted from section 106. Federal Reserve and OCC officials noted that it would be unusual to find a provision in a loan contract or other loan documentation containing an unlawful tie. Some corporate borrowers said that there is no documentary evidence because banks only communicate such conditions on loans orally. According to members of the review team, they did not sample transactions during the review because past reviews suggested that this would probably not produce any instances of unlawful tying practices. Nor did the targeted review include contacting bank customers to obtain information on specific transactions. The Federal Reserve noted that without examiners being present during credit negotiations, there is no way for examiners to know what the customer was told. Given the complex nature of these transactions, the facts and circumstances could vary considerably among individual transactions.
Federal Reserve officials, however, noted that customer information could play an important role in enforcing the law, because so much depends on whether the customer voluntarily agreed with the transaction or was compelled to agree with the conditions imposed by the bank. As the officials noted, this determination cannot be made based solely on the loan documentation.

During the targeted review, Federal Reserve and OCC officials found that all of the banks they reviewed generally had adequate procedures in place to comply with section 106. All banks had specific antitying policies, procedures, and training programs in place. The policies we reviewed from two banks encouraged employees to consult legal staff for assistance with arrangements that could raise a tying-related issue. According to the Federal Reserve and OCC, at other banks, lawyers reviewed all transactions for tying-related issues before they were completed. The training materials we reviewed from two banks included examples that distinguished lawful from unlawful tying arrangements; recent antitying training programs at two banks, for example, helped employees identify possible tying violations. Banking regulators noted that some banking organizations had recently enhanced their policies, procedures, and training programs as a result of recent media and regulatory attention.

However, examiners also found that the oversight by internal audit functions at several banks needed improvement. In one case, they found that bank internal auditors were trained to look for the obvious indications of tying but that banks’ audit procedures would not necessarily provide a basis to detect all cases of tying. Officials at one large banking organization also said that banks’ compliance efforts generally are constrained by the inability to anticipate every situation that could raise tying concerns. They also noted that banks could not monitor every conversation that bank employees had with customers and thus could not guarantee that mistakes would never occur.

In addition, examiners were concerned that certain arrangements might cause confusion for customers dealing with employees who work for both the bank and its investment affiliate. In those cases, it could be difficult to determine whether the “dual” employee was representing the bank or its affiliate for specific parts of a transaction. However, the examiners noted that in the legal analysis of one banking organization, the use of such dual employees was not necessarily problematic, given that the tie was created by the investment affiliate, rather than the bank, and that section 106 addresses the legal entity involved in a transaction and not the employment status of the individuals involved. Proposed Federal Reserve guidance did not add clarification to this matter beyond emphasizing training programs for bank employees as an important internal control.

As the Board’s proposed interpretation observes, “the determination of whether a violation of section 106 has occurred often requires a careful review of the specific facts and circumstances associated with the relevant transaction (or proposed transaction) between the bank and the customer.” Customers could provide information on the facts and circumstances associated with specific transactions and provide a basis for testing whether the bank’s actions were in compliance with its policies and procedures. If banks’ actions are not consistent with their policies and procedures, there could be violations of section 106.
A review of the transactions would provide direct evidence of compliance or noncompliance with section 106. Further, information from analysis of transactions and information obtained from customers could provide the bank regulatory agencies with more information on the circumstances where there could be a greater risk of tying, contributing to their risk-based examination strategies.

The examiners and attorneys participating in the targeted review found variations in banks’ interpretation of section 106 in areas where authoritative guidance was absent or incomplete at the time of the review. One interpretive issue was the extent to which a bank could consider the profitability of the overall customer relationship in making credit decisions, particularly whether a bank could consider a customer’s use of nontraditional banking services in deciding to terminate the customer relationship without violating section 106. This issue also encompassed the appropriateness of the language that a bank might use when entering into or discontinuing credit relationships—including whether a bank could appropriately use language implying the acceptance of a tied product in a letter formalizing a commitment for a loan, and the communication protocols that a bank might use to disengage clients who did not meet internal profitability targets. Examiners found that all banks in the joint targeted review had undergone a “balance sheet reduction,” disengaging from lending relationships with their least desirable customers. An official at one commercial bank acknowledged that, when banks discontinue relationships, their decision might appear to be unlawful tying from the perspective of the customer. However, it would not be unlawful for a bank to decline to provide credit to a customer as long as the bank’s decision was not based on the customer’s failure to satisfy a condition or requirement prohibited by section 106.

Examiners questioned whether it would be appropriate for a banking organization providing both a bridge loan and securities underwriting to vary the amount of fees it charged for services that would normally be performed independently for each product. For example, a bank conducting a credit analysis for both commercial and investment banking services and reducing the overall fees to reflect only one credit analysis might raise tying considerations. Banks and their outside counsels believed that this price reduction would be appropriate. However, the Federal Reserve staff said that whether or not a price reduction would be appropriate would depend on the facts and circumstances of the transaction, including whether or not the bank offered the customer the opportunity to obtain the discount from the bank separately from the tied product.

Examiners were also concerned that some bank transactions might appear to circumvent section 106. For example, the examiners found one instance in which a nonbank affiliate had tied bridge loans to the purchase of securities underwriting and syndicated some or all of the loans to its commercial bank. The examiners noted that although this issue had not been addressed in the guidance available at the time, this arrangement created the appearance of an attempt to circumvent the application of section 106. The bank thereupon discontinued the practice. As mentioned previously, because section 106 applies only to banks, it is not a violation of the section for most nonbank affiliates of commercial banks to tie together any two products or services.
The proposed interpretation of section 106 recently issued by the Federal Reserve addresses this issue. Finally, the examiners found that one bank might be overstating the relief gained from the foreign transactions safe harbor. The Federal Reserve adopted a safe harbor from the antitying rules for transactions with corporate customers that are incorporated or otherwise organized, and that have their principal place of business, outside the United States. This safe harbor also applies to individuals who are citizens of a foreign country and are not resident in the United States. However, the new guidance developed by the banking regulators does not address the examiners’ specific concerns. Federal Reserve officials said that a general rule on these issues would not be feasible and that any determinations would depend on the facts and circumstances of the specific transactions.

Based on the interpretive issues examiners found during the special targeted review and on its own analysis, and after significant consultation with OCC, the Federal Reserve recently released for public comment a proposed interpretation of section 106. The proposed interpretation noted that the application of section 106 is complicated and heavily dependent on the particular circumstances and facts of specific transactions. The proposed guidance outlines, among other things, some of the information that would be considered in determining whether a transaction or proposed transaction would be lawful or unlawful under section 106. Federal Reserve officials also have noted that another desired effect of additional guidance could be providing bank customers a better understanding of section 106 and what bank actions are lawful. The officials also said that they hoped the new guidance would encourage customers to come forward with any complaints. The deadline for public comments on the proposed guidance was September 30, 2003. At the time of our review, the Federal Reserve was reviewing comments that had been received.

Although officials at one investment bank contended that large commercial banks deliberately “underpriced” corporate credit—that is, priced it at below-market rates—to attract underwriting business to their investment affiliates, the evidence of “underpricing” is ambiguous and subject to different interpretations. They claimed that these commercial banks underprice credit in an effort to promote business at the banks’ investment affiliates, which would increase the bank holding companies’ fee-based income. Such behavior, they contended, could indicate violations of section 106, with credit terms depending on the customer buying the tied product. The banking regulators also noted that pricing credit below market interest rates, if it did occur, could violate section 23B, with the bank’s income being reduced for the benefit of its investment affiliate. Commercial bankers counter that the syndication of these loans and loan commitments—the sharing of them among several lenders—makes it impossible to underprice credit, since the other members of the syndicate would not participate at below-market prices. Federal Reserve staff is considering further research into loan pricing, which could clarify the matter. Investment bankers and commercial bankers also disagreed about whether differences between the prices for loans and loan commitments and those for other credit products indicated that nonmarket forces were involved in setting credit prices.
Both investment bankers and commercial bankers cited specific transactions to support their contentions; in some cases, they pointed to the prices for the same loan products at different times. Commercial bankers also noted that their business strategies called for them to ensure the profitability of their relationships with customers; if market-driven credit prices alone did not provide adequate profitability, the strategies commonly called for marketing an array of other products to make the entire relationship a profitable one. The banking regulators noted that such strategies would be within the bounds of the law as long as the bank customers had a “meaningful choice” that includes traditional bank products.

In recent years, the market share of the fees earned from debt and equity underwriting has declined at investment banks and grown at investment affiliates of commercial banks. In 2002, the three largest investment banks had a combined market share of 31.9 percent, a decline from the 38.1 percent share they held in 1995. In comparison, the three largest investment affiliates of commercial banks had a combined market share of 30.4 percent in 2002, up from the 17.8 percent share these affiliates held in 1995. Some of this growth might be the result of the ability of commercial banks and their investment affiliates to offer a wide array of financial services. However, banking regulators noted that industry consolidation and the acquisition of investment banking firms by bank holding companies also have been significant factors contributing to this growth. For example, regulators noted that Citigroup Inc. is the result of the 1998 merger of Citicorp and Travelers Group Inc., which combined Citicorp’s investment business with that of Salomon Smith Barney, Inc., a Travelers subsidiary that was already a prominent investment bank. J.P. Morgan & Co. Incorporated and The Chase Manhattan Corporation also combined in 2000 to form J.P. Morgan Chase & Co.

Some investment bankers contended that commercial banks offer loans and loan commitments to corporate borrowers at below-market rates if borrowers agree to engage the services of their investment affiliates. Large loans and loan commitments to corporations—including the lines of credit that borrowers use in conjunction with issuing commercial paper—are frequently syndicated. A syndicated loan is financing provided by a group of commercial banks and investment banks whereby each bank agrees to advance a portion of the funding. Commercial bankers contended that the prices of these loans and loan commitments reflected a competitive market, in which individual lenders have no control over prices.

Officials from one investment bank who contended that banking organizations have underpriced credits to win investment banking business drew comparisons between the original pricing terms of specific syndicated loans and the pricing of the same loans in the secondary market. Specifically, they pointed to several transactions, including one in which they questioned the pricing but participated because the borrower insisted that underwriters provide loan commitments. The investment bank officials said that when they subsequently attempted to sell part of their share of the credits, the pricing was unattractive to the market and they could not get full value. In one case, they noted that the credit facility was sold in the secondary market at about 93 cents on the dollar shortly after origination.
They said that, in their opinion, this immediate decline in value was evidence that the credit facility had been underpriced at origination. Commercial bankers said that competition in the corporate loan market determines loan pricing. One banker said that if a loan officer overpriced a loan by even a basis point or two, the customer would turn to another bank. Bankers also noted that if loans were underpriced, the syndicators would not be able to syndicate the loan to investors who are not engaged in debt underwriting and insist on earning a competitive return. An official from one commercial bank provided data on its syndicated loans showing that a number of the participants in the loans and loan commitments did not participate in the associated securities underwriting for the borrower and—in spite of having no investment banking business to win—found the terms of the loans and loan commitments attractive. However, we do not know the extent, if any, to which these other participants might have had other revenue-generating business with the borrowers.

Officials from a commercial bank and loan market experts also said that the secondary market for loans was illiquid compared with that for most securities. The bank officials said that, as a result of this illiquidity, prices could swing in response to a single large sale. Officials from one commercial bank said that the price of the loan to which the investment bank officials referred, which had sold for about 93 cents on the dollar shortly after origination, had risen to about 98 cents on the dollar in secondary trades a few months later. These officials said that, in their opinion, this return in pricing toward the loan’s origination value is proof that the syndicated loan was never underpriced and that the movement in price was the result of a large portion of the facility being sold soon after origination. Independent loan market experts also observed that trading in loan commitments is illiquid, and thus subsequent price fluctuations might not reflect fair value.

Commercial bankers and investment bankers disagreed on whether a comparison of the prices of loans and other credit products demonstrated underpricing. In particular, one key disagreement involved the use of credit default swaps. Banks and other financial institutions can use credit default swaps, among other instruments, to reduce or diversify credit risk exposures. With a credit default swap, the lender keeps the loan or loan commitment on its books and essentially purchases insurance against borrower default. Officials at one investment bank compared the prices of syndicated loans with the prices of credit default swaps used to hedge the credit risk of the loans. In their view, the differences in the two prices demonstrated that commercial banks underpriced corporate credit. They provided us with several examples of syndicated loans in which the difference between the interest rate on the loan or loan commitment and the corresponding credit default swap was so great that the investment bankers believed the bank would have earned more from insuring the credit than from extending it. On the other hand, Federal Reserve officials, commercial bankers, and loan market experts disputed the extent to which the pricing of corporate credit could be compared with corresponding credit default swaps, because of important differences between the two products and between the institutions that deal in them.
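Before turning to the specific objections about triggers and market participants, it may help to pin down the arithmetic behind the secondary-market dispute above. The snippet below works through the 93- and 98-cent quotes on a hypothetical $100 million facility; the face amount is our assumption, used only to scale the quoted percentages.

    # Only the two secondary-market quotes come from the discussion above;
    # the $100 million face amount is a hypothetical scale.
    face = 100_000_000
    price_after_origination = 0.93  # quote shortly after origination
    price_months_later = 0.98       # quote in secondary trades a few months later

    print(f"Implied markdown at sale:  ${face * (1 - price_after_origination):,.0f}")  # $7,000,000
    print(f"Implied markdown later:    ${face * (1 - price_months_later):,.0f}")       # $2,000,000

Whether the implied $7 million markdown shortly after origination reflects underpricing or merely illiquid secondary trading is exactly the point on which the investment bankers and commercial bankers disagreed.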
Officials from the Federal Reserve noted that the triggering mechanisms for the two products differed. Although the trigger for the exercise of a credit default swap is a clearly defined indication of the borrower’s credit impairment, the exercise of a commercial paper back-up line is triggered by the issuer’s inability to access the commercial paper market—an event that could occur without there necessarily being any credit impairment of the issuer. For example, in 1998, Russia’s declaration of a debt moratorium and the near-failure of a large hedge fund created financial market turmoil; because this severely disrupted corporations’ issuance of bonds and commercial paper, they drew on their loan commitments from banks. In addition, loan market experts and officials from a commercial bank said that the loan market and the credit default swap market involve different participants with different motivations. Loan market experts noted that lead originators of loans and loan commitments have an advantage gained from knowledge of the borrower through direct business relationships. On the other hand, those who provide credit protection by selling credit default swaps might be entities with no direct knowledge of the customer’s creditworthiness that use these instruments to diversify risks.

To present their differing positions on whether or not credit is underpriced, investment bankers, loan market experts, and commercial bankers discussed the pricing of selected syndicated loan commitments. In syndicated loan commitments, participants receive commitment fees on the undrawn amount and a specified interest rate if the loan is drawn. In addition, participants in syndicated loan commitments are protected from certain risks by various conditions. Also, the lead participant might receive an up-front fee from the borrower. Each of these factors can influence the price of the loan commitment.

Officials at one investment bank noted that the pricing for undrawn loan commitments provided as back-up lines for commercial paper issuers had been low for several years and had been relatively stable, even when other credit market prices fluctuated. Available data showed that this was the case for the fees for undrawn commitments provided for investment-grade borrowers, with undrawn fees averaging under 0.10 percent per year of the undrawn amount. The investment bankers further noted that the loan commitment would be drawn in the event of adverse conditions for the borrower in the commercial paper market. Thus, commercial paper back-up lines exposed the provider to the risk of having to book loans to borrowers who were no longer creditworthy. In the opinion of these investment bankers, the low undrawn loan fees do not reflect this risk.

In contrast, officials from commercial banks and loan market experts said that the level of undrawn fees for loan commitments did not represent all the ways that commercial banks might adjust credit terms to address rising credit risk. These officials said that in response to perceived weakening in credit quality, lenders had shortened the maturity of credit lines. Lenders also tightened contract covenants to protect themselves against a borrower’s potential future weakening. In addition, commercial bank officials told us that other factors were involved in the pricing of loan commitments. For example, they said that a comprehensive analysis should include the up-front fees to measure the total return on undrawn loan commitments.
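The comparison the investment bankers drew, and the commercial bankers' rejoinder about up-front fees, can be expressed as a comparison of annualized income streams. In the sketch below, only the sub-0.10 percent undrawn fee is anchored in the discussion above; the commitment size, up-front fee, and credit default swap spread are hypothetical, chosen solely to show the shape of the argument.

    def annual_commitment_income(undrawn_amount: float, undrawn_fee: float,
                                 upfront_fee: float, tenor_years: float) -> float:
        """Total annualized income on an undrawn commitment: the running
        undrawn fee plus the up-front fee amortized over the tenor."""
        return undrawn_amount * undrawn_fee + (undrawn_amount * upfront_fee) / tenor_years

    commitment = 500_000_000  # hypothetical undrawn commercial paper back-up line
    undrawn_fee = 0.0008      # 8 bps/yr, consistent with "under 0.10 percent" above
    upfront_fee = 0.0010      # hypothetical 10 bps up-front fee omitted from published data
    cds_spread = 0.0050       # hypothetical 50 bps/yr to insure the same borrower

    income = annual_commitment_income(commitment, undrawn_fee, upfront_fee, tenor_years=1.0)
    print(f"Commitment income per year: ${income:,.0f}")                   # $900,000
    print(f"CDS premium per year:       ${commitment * cds_spread:,.0f}")  # $2,500,000

On these assumed numbers, the protection seller earns more than the committed lender, which is the form of the investment bankers' argument; the differences in triggers and market participants noted above are the reasons regulators and loan market experts did not treat such gaps as conclusive.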
However, loan market experts said that published loan pricing data do not include the up-front fees that many banks collect when they extend credit. Thus, publicly available information was insufficient to indicate the total return commercial banks received on such lending. Officials at one investment bank claimed that because the fees that commercial banks receive for corporate credits barely exceed their cost of funds, commercial banks are not covering all of their costs and are in essence subsidizing corporate credits. Conversely, several bankers said that the rates they can charge on corporate credits do exceed their cost of funds but are not always high enough to allow them to meet their institution’s profitability targets. Officials at one commercial bank noted that their internal controls included a separation of powers, in which any extension of credit over $10 million would have to be approved by a credit committee rather than by those responsible for managing the bank’s customer relationships. However, these same officials said that they often base lending decisions on the profitability of customer relationships, not individual products. Thus, a loan that might not reach profitability targets on a stand-alone basis could still be attractive as part of an overall customer relationship.

During our review, members of the Federal Reserve’s staff said that they were considering conducting research into pricing issues in the corporate loan market. Such research could shed some additional light on the charges of the investment bankers and the responses of the commercial bankers. It also could provide useful supervisory information. If the study finds indications that pricing of credit to customers who also use underwriting services is lower than that of other comparable credit, this could lend support to the investment bankers’ allegations of violations of section 23B. However, if the charges are not valid and credit pricing does reflect market conditions, this information would serve as useful confirmation of the findings of the Federal Reserve-OCC targeted review, which found that the policies and procedures of the largest commercial banks served as effective deterrents against unlawful tying.

Based on our analysis, the different accounting methods, capital requirements, and levels of access to the federal safety net did not appear to give commercial banks a consistent competitive advantage over investment banks. Officials at some investment banks asserted that these differences gave commercial banks an unfair advantage that they could use in lending to customers who also purchase debt-underwriting services from their investment affiliates. Under current accounting rules, commercial banks and investment banks are required to use different accounting methods to record the value of loan commitments and loans. Although these different methods could cause temporary differences in the financial statements of commercial banks and investment banks, these differences would be reconciled at the end of the credit contract periods. Further, if the loan commitment were exercised and both firms either held the loan until maturity or made the loan available for sale, the accounting would be similar and would not provide an advantage to either firm.
Additionally, while commercial and investment banks were subject to different regulatory capital requirements, practices of both commercial and investment banks led to avoidance of regulatory charges on loan commitments with a maturity of 1 year or less. Moreover, while the banks had different levels of access to the federal safety net, some industry observers argued that greater access could be offset by correspondingly greater regulatory costs. According to FASB, which sets private-sector accounting and reporting standards, commercial banks and investment banks follow different accounting models for similar transactions involving loans and loan commitments. Most commercial banks follow a mixed model, where some financial assets and liabilities are measured at historical cost, some at the lower of cost or market value, and some at fair value. In contrast, some investment banks follow a fair-value accounting model, in which they report changes in the fair value of inventory, which may include loans or loan commitments, in the periods in which the changes occur. Where FASB guidance is nonexistent, as is currently the case for fair-value accounting for loan commitments, firms are required to follow guidance from the AICPA, which provides industry-specific accounting and auditing guidance that is cleared by FASB prior to publication. FASB officials said that it is currently appropriate for commercial banks and investment banks to follow different accounting models because of their different business models. When commercial banks make loan commitments, they must follow FASB's Statement of Financial Accounting Standards (FAS) No. 91, Accounting for Nonrefundable Fees and Costs Associated with Originating or Acquiring Loans and Initial Direct Costs of Leases, which directs them to book the historic carrying value of the fees received for loan commitments as deferred revenue. In the historic carrying value model, commercial banks are not allowed to reflect changes in the fair value of loan commitments in their earnings. However, commercial banks are required to disclose the fair value of all loan commitments in the footnotes to their financial statements, along with the method used to determine fair value. Some investment banks follow the AICPA Audit and Accounting Guide, Brokers and Dealers in Securities, which directs them to record the fair value of loan commitments. The AICPA guidance is directed at broker-dealers within a commercial bank or investment bank holding company structure. However, some investment banks whose broker-dealer subsidiaries comprised a majority of the firms' financial activity would also be required to follow the fair-value accounting model outlined in the AICPA guidance for instruments held in all subsidiaries. When using the fair-value model, investment banks must recognize in income gains or losses resulting from changes in the fair value of a financial instrument, such as a loan commitment. Investment banks said that they determine the current fair value of loan commitments based on the quoted market price for an identical or similar transaction or by modeling with market data if market prices are not available. According to FASB, although measurement of financial instruments at fair value has conceptual advantages, not all issues have been resolved, and FASB has not yet decided when, if ever, it will require essentially all financial instruments to be reported at fair value.
A loan market expert said that, although the discipline of using market-based measures works well for some companies, fair-value accounting might not be the appropriate model for the entire wholesale loan industry. FASB said that one reason is that in the absence of a liquid market for loan commitments, there is potential for management manipulation of fair value because of the management discretion involved in choosing the data used to estimate fair value. Officials from some investment banks contended that adherence to different accounting models gave commercial banks a competitive advantage relative to investment banks in lending to customers who also purchased investment banking services. They alleged that commercial banks extended underpriced 364-day loan commitments to attract customers' other, more profitable business—such as underwriting—but were not required to report on their financial statements the difference in value, if any, between the original price of the loan commitment and the current market price. The investment bank officials contended that the current accounting standards facilitate this alleged underpricing of credit because commercial banks record loan commitments at their historic value rather than their current value, which might be higher or lower, and do not have to report the losses incurred in extending an allegedly underpriced loan commitment. Officials from some investment banks also claimed that the historic carrying value model allowed commercial banks to hide the risk of these allegedly underpriced loan commitments from stockholders and market analysts, because the model did not require them to report changes in the value of loan commitments. Officials said that differences in accounting for identical transactions might put investment banks at a disadvantage compared with commercial banks when analysts reviewed their quarterly filings. Yet, as discussed in an earlier section, it is not clear that commercial banks underprice loan commitments. Although commercial and investment banks might have different values on their financial statements for similar loan commitments, both are subject to the same fair-value footnote disclosure requirements, in which they report the fair value of all loan commitments in their financial statement footnotes, along with the method used to determine fair value. As a result, financial analysts and investors are presented with the same information about the commercial and investment banks' loan commitments in the financial statement footnotes. According to FAS 107, Disclosures about Fair Value of Financial Instruments, in the absence of a quoted market price, firms estimate fair value based on (1) the market prices of similar traded financial instruments with similar credit ratings, interest rates, and maturity dates; (2) current prices (interest rates) offered for similar financial instruments in the entity's own lending activities; or (3) valuations obtained from loan pricing services offered by various specialist firms or from other sources. FASB said that it has found no conclusive evidence that an active market for loan commitments exists; thus, the fair value recorded might frequently be estimated through modeling with market data. When a quoted market price for an identical transaction is not available, management judgment and the assumptions used in the model valuations could significantly affect the estimated fair value of a particular financial instrument.
SEC and the banking regulators said the footnote disclosures included with financial statements, which are the same for both commercial banks and investment banks, were an integral part of communicating risk. They considered the statement of position and statement of operations alone to be incomplete instruments through which to convey the risk of loan commitments. They emphasized that to fully ascertain a firm's financial standing, financial footnotes must be read along with the financial statements. Although different accounting models would likely introduce differences in the amount of revenue or loss recognized in any period, all differences in accounting for loan commitments that were not exercised would be resolved by the end of the commitment period. Any interim accounting differences between a commercial bank and an investment bank would be relatively short-lived because most of these loan commitment periods are less than 1 year. Further, if a loan commitment were underpriced, an investment bank using the fair-value accounting model would recognize the difference between the fair value and the contractual price as a loss, while a commercial bank using the historical cost model would not be permitted to do so. This difference in the recognition of gains or losses would be evident in commercial and investment banks' quarterly filings over the length of the commitment period. However, there is no clear advantage to one method over the other in accounting for loan commitments when the commitments are priced consistently between the two firms at origination. According to investment bankers we spoke with and staff from the AICPA, loan commitments generally decline in value after they are made. Under fair-value accounting, these declines in fair value are recognized by the investment bank as revenue because the reduction is recognized in a liability account known as deferred revenue. Therefore, if an investment bank participated with commercial banks in a loan commitment that was deemed underpriced, any initial loss recognized by the investment bank would be offset by each subsequent decline in the loan commitment's fair value. Further, as discussed in an earlier section, it is not clear that commercial banks underprice loan commitments. Whether a commercial bank using the historic carrying value model or an investment bank using the fair-value model would recognize more revenue or loss on a given loan commitment earlier or later would depend on changes in the borrower's credit pricing, which reflects overall market trends and customer-specific events, as well as on the accounting model that the firm follows. In addition, when similar loan commitments held by a commercial bank and an investment bank are exercised and become loans, both firms would be subject to the same accounting standards if they had the intent and ability to hold the loan for the foreseeable future or to maturity. In this situation, both commercial banks and investment banks would be required to establish an allowance for probable or possible losses, based on the estimated degree of impairment of the loan or historic experience with similar borrowers. If both an investment bank and a commercial bank decided to sell a loan that they previously had the intent and ability to hold for the foreseeable future or until maturity, the firms would follow different guidance that would produce similar results.
A commercial bank would follow the AICPA's Statement of Position 01-6, Accounting by Certain Entities (Including Entities With Trade Receivables) That Lend to or Finance the Activities of Others, issued in December 2001. According to this guidance, once bank management decides to sell a loan that had not been previously classified as held-for-sale, the loan's value should be adjusted to the lower of historical cost or fair value, and any amount by which historical cost exceeds fair value should be accounted for as a valuation allowance. Further, as long as the loan's fair value remained less than historical cost, any subsequent changes in the loan's fair value would be recognized in income. The investment bank would follow the guidance in the AICPA's Audit and Accounting Guide, Brokers and Dealers in Securities, and account for inventory, the loan in this instance, at fair value and recognize changes in the fair value in earnings. Regulatory capital is the minimum long-term funding level that financial institutions are required to maintain to cushion themselves against unexpected losses, and differing requirements for commercial banks and broker-dealers reflect distinct regulatory purposes. The primary purposes of commercial bank regulatory capital requirements are to maintain the safety and soundness of the banking and payment systems and to protect the deposit insurance funds. Under the bank risk-based capital guidelines, off-balance sheet transactions, such as loan commitments, are converted into one of four categories of asset equivalents. Unfunded loan commitments of 1 year or less are assigned to the zero percent conversion category, which means that banks are not required to hold regulatory capital for these commitments. In contrast, the primary purposes of broker-dealers' capital requirements are to protect customers and other market participants from losses caused by broker-dealer failures and to protect the integrity of the financial markets. The SEC net capital rule requires broker-dealer affiliates of investment banks to hold 100-percent capital against loan commitments of any length. However, nonbroker-dealer affiliates of investment banks are not subject to any regulatory capital requirements and are therefore not required to hold regulatory capital against loan commitments of any length. It is costly for banks or other institutions to hold capital; thus, to the extent that the level of regulatory capital requirements determines the amount of capital actually held, lower capital requirements can translate into lower costs. Officials from an investment bank contended that bank capital requirements gave commercial banks with investment affiliates a cost advantage they could use when lending to customers who also purchased underwriting services. They said that because banks' regulatory capital requirements for unfunded credits of 1 year or less were zero, commercial banks had the opportunity to adjust the length of credit commitments to avoid capital charges. Furthermore, officials said that the ability to avoid capital charges allowed commercial banks to underprice these loan commitments, because they could extend the commitments without the cost of assigning additional regulatory capital. They pointed to the high percentage of credit commercial banks structured in 364-day facilities as evidence that banks structure underpriced credit in short-term arrangements to avoid capital charges.
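The capital arithmetic behind this argument can be sketched as follows. The zero percent conversion factor for unfunded commitments of 1 year or less and the 100-percent broker-dealer charge come from the rules described above; the 8 percent minimum capital ratio, the 100 percent corporate risk weight, and the 50 percent conversion factor for longer commitments are standard assumptions of that era's risk-based capital guidelines, not figures from this report.

```python
# Sketch of the regulatory capital charges described above. The zero
# percent conversion factor (<= 1 year) and the 100-percent broker-dealer
# charge come from the text; the other parameters are assumptions.

def bank_capital_charge(commitment, maturity_years,
                        min_ratio=0.08, risk_weight=1.00):
    """Risk-based capital a commercial bank holds on an unfunded commitment."""
    conversion = 0.0 if maturity_years <= 1 else 0.5  # 0.5 assumed for > 1 year
    return commitment * conversion * risk_weight * min_ratio

def broker_dealer_capital_charge(commitment):
    """SEC net capital rule: 100-percent charge regardless of maturity."""
    return commitment * 1.00

commitment = 100_000_000
print(bank_capital_charge(commitment, maturity_years=364 / 365))  # 0.0
print(broker_dealer_capital_charge(commitment))                   # 100,000,000.0
```

A 364-day facility therefore carries no regulatory capital charge for the commercial bank, which is why the investment bankers pointed to the prevalence of 364-day structures.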
We found no evidence that bank regulatory capital requirements provided commercial banks with a competitive advantage. Although investment banks could face a 100-percent regulatory capital charge if they carried loan commitments in their broker-dealer affiliates, investment bank officials and officials from the SEC said that, in practice, investment banks carried loan commitments outside of their broker-dealer affiliates and thus avoided all regulatory capital charges. Furthermore, banking regulators did not think that the current regulatory capital requirements adversely affected the overall amount of capital banks held, because commercial banks generally carried internal risk-based capital on instruments—including loan commitments—in excess of the amount of regulatory capital required. In addition, the banking regulators said that bank regulatory capital requirements had not affected banks' use of loan commitments of 1 year or less. Although loan market data indicated that the percentage of investment-grade loans structured on 364-day terms has increased, commercial bank officials and banking regulators said that this shift was, in part, the banks' response to the increased amount of risk in lending. Commercial banks have access to a range of services sometimes described as the federal safety net, which includes access to the Federal Reserve discount window and deposit insurance. The Federal Reserve discount window allows banks and other organizations to borrow funds from the Federal Reserve. Commercial banks' ability to hold deposits backed by federal deposit insurance provides them with a low-cost source of funds available for lending. Industry observers and banking regulators agreed that commercial banks receive a subsidy from the federal safety net; however, they differed on the extent to which the subsidy was offset by regulatory costs. Although officials at the Federal Reserve and at an investment bank contended that access to the federal safety net gave commercial banks a net subsidy, officials from OCC and an industry observer said that the costs associated with access to the safety net might offset these advantages. We could not measure the extent to which regulatory costs offset the subsidy provided by access to the federal safety net because reliable measures of the regulatory costs borne by banks were not available. Although the Gramm-Leach-Bliley Act of 1999, among other things, expanded the ability of financial services providers, including commercial banks and their affiliates, to offer their customers a wide range of products and services, it did not repeal the tying prohibitions of section 106, which remains a complex provision to enforce. Regulatory guidance has noted that some tying arrangements involving corporate credit are clearly lawful, particularly those involving ties between credit and traditional bank products. The targeted review conducted by the Federal Reserve and OCC, however, identified other arrangements that raise interpretive issues that were not addressed in prevailing guidance. The Federal Reserve recently issued for public comment a proposed interpretation of section 106 that is intended to provide banks and their customers a guide to the section. As the proposed interpretation notes, however, the complexity of section 106 requires a careful review of the facts and circumstances of each specific transaction.
The challenge for the Federal Reserve and OCC remains that of enforcing a law under which determining whether a violation exists depends on the precise circumstances of specific transactions; however, information on such circumstances is inherently limited. Customers have a key role in providing the information that is needed to enforce section 106. However, the Federal Reserve and OCC have little information on customers' understanding of lawful and unlawful tying under section 106 or on customers' knowledge of the circumstances of specific transactions. The available evidence did not clearly support contentions that banks violated section 106 and unlawfully tied credit availability or underpriced credit to gain investment banking revenues. Corporate borrowers generally have not filed complaints with the banking regulators and attribute the lack of complaints, in part, to a lack of documentary evidence and uncertainty about which tying arrangements section 106 prohibits. The Federal Reserve and OCC report that they found only limited evidence of even potentially unlawful tying practices involving corporate credit during a targeted review that began in 2002, and they found that the banks surveyed generally had adequate policies and procedures in place to deter violations of section 106. However, while the teams conducting this review analyzed some specific transactions, they did not test a broad range of transactions or reach out widely to bank customers. Obtaining information from customers could be an important step in assessing both implementation of and compliance with a bank's policies and procedures. While regulators could take further steps to encourage customers to provide information, in addition to the recent Federal Reserve proposal, bank customers themselves are crucial to enforcement of section 106. Distinguishing lawful from unlawful tying depends on the specific facts and circumstances of individual transactions. Because the facts, if any, that would suggest a tying violation generally would not be found in the loan documentation that banks maintain and because bank customers have been unwilling to file formal complaints, effective enforcement of section 106 requires an assessment of other, indirect forms of evidence. We therefore recommend that the Federal Reserve and OCC consider taking additional steps to ensure effective enforcement of section 106 and section 23B by enhancing the information that they receive from corporate borrowers. For example, the agencies could develop a communication strategy targeted at a broad audience of corporate bank customers to help ensure that they understand which activities are permitted under section 106 as well as those that are prohibited. This strategy could include publication of specific contact points within the agencies to answer questions from banks and bank customers about the guidance in general and its application to specific transactions, as well as to accept complaints from bank customers who believe that they have been subjected to unlawful tying. Because low-priced credit could indicate a potential violation of section 23B, we also recommend that the Federal Reserve assess available evidence regarding loan pricing behavior; if appropriate, conduct additional research to better enable examiners to determine whether transactions are conducted on market terms; and publish the results of this assessment. We requested comments on a draft of this report from the Federal Reserve and OCC.
We received written comments from the Federal Reserve and OCC that are summarized below and reprinted in appendixes II and III, respectively. The Comptroller of the Currency and the General Counsel of the Board of Governors of the Federal Reserve System replied that they generally agreed with the findings of the report and concurred with our recommendations. Federal Reserve and OCC staff also provided technical suggestions and corrections that we have incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. At that time, we will send copies to the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman, House Committee on Energy and Commerce; the Chairman of the Board of Governors of the Federal Reserve System; and the Comptroller of the Currency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact James McDermott or me at (202) 512-8678. Key contacts and major contributors to this report are listed in appendix IV. Because commercial and investment banks follow different accounting models, there are differences in the financial statement presentation of some similar transactions. This appendix summarizes the differences, under generally accepted accounting principles, in how commercial banks and investment banks account for loan commitments—specifically commercial paper back-up credit facilities—using hypothetical scenarios to illustrate how these differences could affect the financial statements of a commercial bank and an investment bank. We use three hypothetical scenarios to illustrate the accounting differences that would occur between the commercial and investment banks for similar transactions if (1) a loan commitment were made, (2) the loan commitment were exercised by the borrower and the loan actually made, and (3) the loan were subsequently sold. This appendix does not assess the differences in accounting that would occur if a loan were made by both a commercial bank and an investment bank when one entity decided to hold the loan to maturity and the other opted to hold the loan as available for sale, because the basis for these actions and the resulting accounting treatment are not similar. The examples in this appendix demonstrate that, as of a given financial statement reporting date, differences would likely exist between commercial and investment banks in the reported value of a loan commitment and of a loan resulting from an exercised commitment, as well as in the recognition of the related deferred revenue. In addition, the volatility of the fair value of loan commitments and the related loan, if the commitment were exercised, would be reflected more transparently in an investment bank's financial statements, because an investment bank must recognize these changes in value in net income as they occur. In contrast, commercial banks are not allowed to recognize changes in the fair value of the loan commitment, its related deferred revenue, or the related loan (if drawn).
The differences in accounting between commercial banks and investment banks are temporary and, as the examples in the following sections show, whether a commercial bank or an investment bank recognizes more fee revenue first would depend on various market conditions, including interest rates and spreads. Similarly, any differences between the fair value of a loan or loan commitment on an investment bank's books and the net book value of a similar loan or loan commitment on a commercial bank's books would be eliminated by the end of the loan term or commitment period. Given that loan commitment terms are usually for less than 1 year, any accounting differences between the commercial and investment banks would exist for a relatively short period of time. Further, both commercial and investment banks are required to make similar footnote disclosures about the fair value of their financial instruments. Thus, neither accounting model provides a clear advantage over the life of the loan commitment or the loan if the commitment were exercised. Since 1973, the Financial Accounting Standards Board (FASB) has been establishing private-sector financial accounting and reporting standards. In addition, the American Institute of Certified Public Accountants (AICPA) Accounting Standards Executive Committee also provides industry-specific authoritative guidance that is cleared with FASB prior to publication. Where FASB guidance is nonexistent, as is currently the case in fair-value accounting for loan commitments, firms are required to follow AICPA guidance. Most commercial banks generally follow a mixed-attribute accounting model, where some financial assets and liabilities are measured at historical cost, some at the lower of cost or market value, and some at fair value. In accounting for loan commitments, banks follow the guidance in Statement of Financial Accounting Standards (FAS) No. 91, Accounting for Nonrefundable Fees and Costs Associated with Originating or Acquiring Loans and Initial Direct Costs of Leases. Broker-dealer affiliates and investment banks whose primary business is to act as a broker-dealer follow the AICPA's Audit and Accounting Guide, Brokers and Dealers in Securities, under which inventory (which may include loan commitments) is recorded at current fair value and the change in value from the prior period is recognized in net income. Further, FASB currently has a project on revenue recognition that includes the accounting for loan commitment fees by investment banks and others. The purpose of that project includes addressing the inconsistent recognition of commitment fee income, and the project may eliminate some of the accounting differences between commercial banks and investment banks described in this appendix. FASB has stated that it is committed to working diligently toward resolving, in a timely manner, the conceptual and practical issues related to determining the fair values of financial instruments and portfolios of financial instruments. Further, FASB has stated that while measurement at fair value has conceptual advantages, not all implementation issues have been resolved, and the Board has not yet decided when, if ever, it will be feasible to require essentially all financial instruments to be reported at fair value in the basic financial statements.
Although FASB has not yet issued comprehensive guidance on fair-value accounting, recent literature has stated that the fair-value accounting model provides more relevant information about financial assets and liabilities and can keep up with today's complex financial instruments better than the historical cost accounting model. The effect of the fair-value accounting model is to recognize in net income during the current accounting period amounts that, under the historical cost model, would have been referred to as unrealized gains or losses because the bank did not sell or otherwise dispose of the financial instrument. Further, proponents of the fair-value accounting model contend that unrealized gains and losses on financial instruments are actually lost opportunities, as of a specific date, to realize a gain or loss by selling or settling a financial instrument at a current price. However, a disadvantage of fair-value accounting exists when there is not an active market for the financial instrument being valued. In this case, the fair value is more subjective and is often determined by various modeling techniques or based on the discounted value of expected future cash flows. On the first day of an accounting period, Commercial Bank A and Investment Bank B each made a $100 million loan commitment to a highly rated company to back up a commercial paper issuance. This loan commitment was irrevocable and would expire at the end of three quarterly accounting periods. Because the loan commitment was issued to a highly rated company, both banks determined that the chance of the company drawing on the facility was remote. Both banks received $10,000 in fees for these loan commitments. Commercial Bank A followed the guidance in FAS No. 91 and recorded this transaction on a historical cost basis, while Investment Bank B, subject to specialized accounting principles that require fair-value accounting, reported changes in fair value and included the effect of these changes in earnings. Upon receipt of the loan commitment fee, Commercial Bank A would record the $10,000 as a liability, called deferred revenue, because the bank would be obligated to perform services in the future in order to "earn" this revenue. In practice, because of the relatively small or immaterial amounts of deferred revenue compared with other liabilities on a bank's statement of position (balance sheet), this amount would not be reported separately and would likely be included in a line item called "other liabilities." Commercial Bank A would follow the accounting requirements of FAS No. 91 and recognize the revenue as service-fee income in equal portions over the commitment period, regardless of market conditions—a practice often referred to as revenue recognition on a straight-line basis. Thus, at the end of the first accounting period, Commercial Bank A would reduce the $10,000 deferred revenue on its statement of position (balance sheet) by one-third, or $3,333, and record the same amount of service-fee revenue on the statement of operations (income statement). The same accounting would occur at the end of the second and third accounting periods, so that an equal portion of service revenue would have been recognized during each period that the bank was obligated to loan the highly rated company $100 million. Regarding disclosure of the $100 million commitment, Commercial Bank A would not report the value of the loan commitment on its balance sheet.
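A minimal sketch of this straight-line recognition, using the $10,000 fee and the three quarterly periods from the scenario (the final period absorbs the rounding remainder):

```python
# Straight-line (FAS No. 91) recognition of a $10,000 commitment fee
# over three quarterly accounting periods, as described above.

fee = 10_000
periods = 3
deferred_revenue = fee

for period in range(1, periods + 1):
    # Recognize one-third each period; the final period absorbs rounding.
    recognized = fee // periods if period < periods else deferred_revenue
    deferred_revenue -= recognized
    print(f"Period {period}: revenue ${recognized:,}, "
          f"deferred revenue balance ${deferred_revenue:,}")

# Period 1: revenue $3,333, deferred revenue balance $6,667
# Period 2: revenue $3,333, deferred revenue balance $3,334
# Period 3: revenue $3,334, deferred revenue balance $0
```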
However, the bank would disclose in the footnotes to its financial statements the fair value of this commercial paper back-up facility, as well as the method used to estimate the fair value. Although AICPA's Audit and Accounting Guide, Brokers and Dealers in Securities, does not provide explicit guidance for how Investment Bank B would account for this specific transaction, the guide provides relevant guidance on accounting for loan commitments in general. The guide states that Investment Bank B would account for inventory, including financial instruments such as a commercial paper back-up facility, at fair value and report changes in the fair value of the loan commitment in earnings. When changes occurred in the fair value of the loan commitment, Investment Bank B would need to recognize these differences by adjusting the balance of the deferred revenue account to equal the new fair value of the loan commitment. Generally, quoted market prices of identical or similar instruments, if available, are the best evidence of the fair value of financial instruments. If quoted market prices are not available, as is often the case with loan commitments, management's best estimate of fair value may be based on the quoted market price of an instrument with similar characteristics, or it may be developed by using valuation techniques such as discounting estimated future cash flows at a rate commensurate with the risk involved, option pricing models, or matrix pricing models. A corresponding entry of identical value would be made to revenue during the period in which the change in fair value occurred. Once the commitment period ended, the deferred revenue account would be eliminated and the remaining balance recorded as income. If market conditions changed shortly after Investment Bank B issued this credit facility and its fair value declined by 20 percent to $8,000, Investment Bank B would reduce the deferred revenue account on its statement of position (balance sheet) to $8,000, the new fair value. Investment Bank B would recognize $2,000 of service-fee income, the amount of the change in value from the last reporting period, in its statement of operations (income statement). Investment Bank B would also disclose in its footnotes the fair value of this credit facility, as well as the method used to estimate the fair value. If during the second accounting period there were another change in market conditions and the value of this credit facility declined by another 5 percent of the original amount, to $7,500, Investment Bank B would decrease the balance in the deferred revenue account to $7,500 and recognize $500 in service-fee revenue. Further, Investment Bank B would disclose in its footnotes the fair value of this credit facility. During the accounting period in which the commitment to lend $100 million was due to expire, accounting period 3 in this example, the remaining balance of the deferred revenue account would be recognized because the commitment period had expired and the fair value would be zero. Thus, $7,500 would be recognized in revenue and the balance of the deferred revenue account eliminated. In this accounting period, there would be no disclosure about the fair value of the credit facility. The following table summarizes the amount of revenue Commercial Bank A and Investment Bank B would recognize and the balance of the deferred revenue account for each of the three accounting periods when there were changes in the value of the loan commitments.
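Because the summary table is not reproduced here, the following sketch computes the two revenue schedules from the figures given above and confirms that both models recognize the same $10,000 in total by the end of the commitment period:

```python
# Straight-line recognition for Commercial Bank A versus fair-value
# remeasurement for Investment Bank B, using the period-end fair values
# from the scenario ($8,000, $7,500, and $0 at expiration).

fee = 10_000
straight_line = [3_333, 3_333, 3_334]        # Commercial Bank A (FAS No. 91)
period_end_fair_values = [8_000, 7_500, 0]   # Investment Bank B (fair value)

balance = fee
fair_value_revenue = []
for fv in period_end_fair_values:
    fair_value_revenue.append(balance - fv)  # decline in fair value -> revenue
    balance = fv

for period in range(3):
    print(f"Period {period + 1}: Bank A revenue ${straight_line[period]:,}, "
          f"Bank B revenue ${fair_value_revenue[period]:,}")

print(f"Totals: Bank A ${sum(straight_line):,}, Bank B ${sum(fair_value_revenue):,}")
# Both banks recognize the same $10,000 in total over the commitment period.
```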
Commercial Bank A would recognize more service-fee income in accounting periods 1 and 2 than Investment Bank B. However, this situation would be reversed in period 3, when Investment Bank B would recognize more revenue. Thus, differences in the value of the loan commitment and the amount of revenue recognized would likely exist between specific accounting periods, reflecting the volatility of the financial markets more transparently in Investment Bank B's financial statements. The magnitude of the difference is determined by the market conditions at the time and could be significant or minor. However, these differences would be resolved by the end of the commitment period, when both entities would have recognized the same amount of total revenue for the loan commitment. Commercial Bank A and Investment Bank B issued the same loan commitment described previously. However, at the end of the second accounting period, the highly rated company exercised its right to borrow the $100 million from each provider because its financial condition had deteriorated and it could no longer access the commercial paper market. The accounting treatment for this loan would depend upon whether bank management had the intent and ability to hold the loan for the foreseeable future or until maturity. AICPA Task Force members and some investment bankers told us that in practice this loan could be either held or sold, and as a result, the accounting for both is summarized in the following sections. At the time the loan was made, Commercial Bank A would record the $100 million loan as an asset on its statement of position (balance sheet). Investment Bank B would initially record this loan at its historical cost basis, less the loan commitment's fair value at the time the loan was drawn ($100 million - $7,500). Further, based on an analysis by the banks' loan review teams, a determination of "impairment" would be made. According to FAS 114, Accounting by Creditors for Impairment of a Loan, "a loan is impaired when, based on current information and events, it is probable that a creditor will be unable to collect all amounts due according to the contractual terms of the loan agreement." If the loan were determined to be impaired, FAS 114 states that the bank would measure the amount of impairment as either the (1) present value of expected future cash flows discounted at the loan's effective interest rate, (2) loan's observable market price, or (3) fair value of the collateral if the loan were collateral dependent. FAS 114 directs both banks to establish an allowance for losses when the measure of the impaired loan is less than the recorded investment in the loan (including accrued interest, net of deferred loan fees or costs and unamortized premium or discount) by creating a valuation allowance that reduces the recorded value of the loan, with a corresponding charge to bad-debt expense. When there are significant changes in the amount or timing of the expected future cash flows from this loan, the banks would need to adjust the loan-loss allowance up or down as appropriate so that the net balance of the loan reflects management's best estimate of the loan's cash flows. However, the net value of the loan cannot exceed the recorded investment in the loan.
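As an illustration of the first FAS 114 impairment measure, the following sketch discounts expected future cash flows at the loan's effective interest rate. The cash flow amounts and the 8 percent rate are invented for illustration; they are not figures from the report.

```python
# Hypothetical sketch of the present-value impairment measure under
# FAS 114. All amounts and the effective rate are assumed values.

recorded_investment = 100_000_000
effective_rate = 0.08                                       # assumed annual rate
expected_cash_flows = [30_000_000, 30_000_000, 30_000_000]  # assumed annual receipts

present_value = sum(cf / (1 + effective_rate) ** t
                    for t, cf in enumerate(expected_cash_flows, start=1))

# A valuation allowance reduces the recorded value of the loan, with a
# corresponding charge to bad-debt expense, when the impairment measure
# is below the recorded investment in the loan.
allowance = max(recorded_investment - present_value, 0)
print(f"Present value of expected cash flows: ${present_value:,.0f}")
print(f"Valuation allowance:                  ${allowance:,.0f}")
```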
If the loan were not impaired, both banks would still record an allowance for credit losses in accordance with FAS 5, Accounting for Contingencies, when it was probable that a future event would occur that would cause a loss and the amount of the loss was estimable. Thus, both banks would establish an allowance for loss in line with historical performance for borrowers of this type. Because the loan was performing, both banks would receive identical monthly payments of principal and interest. Generally, these cash receipts would be applied in accordance with the loan terms: a portion would be recorded as interest income, and the balance would reduce the banks' investment in the loan. At the end of the loan term, the balance and the related allowance for this loan would be eliminated. FAS 91 also directs both banks to recognize the remaining unamortized commitment fee over the life of the loan as an adjustment to interest income. Because the borrower's financial condition had deteriorated, both banks would likely have charged a higher interest rate than the rate stated in the loan commitment. As a result, at the time it becomes evident that the loan is to be drawn, Investment Bank B would record a liability on its balance sheet to recognize the difference between the actual interest rate of the loan and the interest rate at which a loan to a borrower with this level of risk would have been made—in essence, the fair value interest rate. This liability would also be amortized by Investment Bank B over the life of the loan as an adjustment to interest income. If Commercial Bank A's and Investment Bank B's policies both permitted the firms to hold loans for the foreseeable future or until maturity only when the borrowers were highly rated, it is unlikely that the banks would keep the loan in the previous hypothetical scenario; instead, they would sell the loan soon after it was made. Although the banks would follow different guidance, the results would be similar. Commercial Bank A would follow the guidance in the AICPA Statement of Position 01-6. According to this guidance, once bank management decides to sell a loan that had not been previously classified as held-for-sale, the loan's value should be adjusted to the lower of historical cost or fair value, and any amount by which historical cost exceeds fair value should be accounted for as a valuation allowance. Further, as long as the loan's fair value remained less than historical cost, any subsequent changes in the loan's fair value would be recognized in income. Investment Bank B would follow the guidance in the AICPA's Audit and Accounting Guide, Brokers and Dealers in Securities, as it did with loan commitments, and account for inventory at fair value and report changes in the fair value of the loan in net income. For example, if bank management decided to sell the loan soon after it was drawn, when some payments had been made to reduce the principal balance, and the net book value of this loan was $88,200,000 (unpaid principal balance of $90,000,000 less the related allowance of $1,800,000) and the fair value was 97 percent of the unpaid principal balance, or $87,300,000, both banks would recognize the decline in value of $900,000 in earnings. While the loan remained available-for-sale, any changes in its fair value would be recorded in net income. For example, if the loan's fair value declined further to $85,500,000, both banks would recognize the additional decline in value of $1,800,000 in earnings.
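The write-down arithmetic in this example can be verified with a short sketch using the figures from the text:

```python
# The loan-sale arithmetic described above: a $90 million unpaid
# principal balance, a $1.8 million allowance, and fair values of
# 97 percent and then 95 percent of the unpaid principal balance.

unpaid_principal = 90_000_000
allowance = 1_800_000
net_book_value = unpaid_principal - allowance  # $88,200,000

fair_value_1 = 0.97 * unpaid_principal         # $87,300,000
loss_1 = net_book_value - fair_value_1         # $900,000 recognized in earnings

fair_value_2 = 85_500_000
loss_2 = fair_value_1 - fair_value_2           # additional $1,800,000 in earnings

print(f"Net book value at decision to sell: ${net_book_value:,.0f}")
print(f"First write-down:                   ${loss_1:,.0f}")
print(f"Second write-down:                  ${loss_2:,.0f}")
```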
Table 2 below summarizes the accounting similarities between Commercial Bank A and Investment Bank B for the loan sale. Although the two banks followed different guidance, the effect of the loan sale is the same for both banks. In addition to those individuals named above, Daniel Blair, Tonita W. Gillich, Gretchen Pattison, Robert Pollard, Paul Thompson, and John Treanor made key contributions to this report.
Investment affiliates of large commercial banks have made competitive inroads in the annual $1.3 trillion debt-underwriting market. Some corporate borrowers and officials from unaffiliated investment banks have alleged that commercial banks helped their investment affiliates gain market share by illegally tying and underpricing corporate credit. This report discusses these allegations, the available evidence related to the allegations, and federal bank regulatory agencies' efforts to enforce the antitying provisions. Section 106 of the Bank Holding Company Act Amendments of 1970 prohibits commercial banks from "tying," a practice that includes conditioning the availability or terms of loans or other credit products on the purchase of certain other products and services. The law does permit banks to tie credit and traditional banking products, such as cash management, and does not prohibit banks from considering the profitability of their full relationship with customers in managing those relationships. Some corporate customers and officials from an investment bank not affiliated with a commercial bank have alleged that commercial banks illegally tie the availability or terms, including price, of credit to customers' purchase of other services. However, with few exceptions, formal complaints have not been brought to the attention of the regulatory agencies, and little documentary evidence surrounding these allegations exists, in part because credit negotiations are conducted orally. Further, our review found that some corporate customers' claims involved lawful ties between traditional banking products rather than unlawful ties. These findings illustrate a key challenge for banking regulators in enforcing this law: while regulators need to carefully consider the circumstances of specific transactions to determine whether the customer's acceptance of an unlawfully tied product (that is, one that is not a traditional banking product) was made a condition of obtaining credit, documentary evidence on those circumstances might not be available. Therefore, regulators may have to look for indirect evidence to assess whether banks unlawfully tie products and services. Although customer information could have an important role in helping regulators enforce section 106, regulators generally have not solicited information from corporate bank customers. The Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency (OCC) recently reviewed the antitying policies and procedures of several large commercial banks. The Federal Reserve and OCC, however, did not analyze a broadly based selection of transactions or generally solicit additional information from corporate borrowers about their knowledge of transactions. The agencies generally found no unlawful tying arrangements and concluded that these banks generally had adequate policies and procedures intended to prevent and detect tying practices. The agencies found variation among the banks in interpretation of the tying law and its exceptions. As a result, in August 2003, the Board of Governors of the Federal Reserve, working with OCC, released for public comment new draft guidance, with the goal of better informing banks and their customers about the requirements of the antitying provision.
A decade after the cold war ended, the Army recognized that its combat force was not well suited to perform the operations it faces today and is likely to face in the future. The Army's heavy forces had the necessary firepower but required extensive support and too much time to deploy. Its light forces could deploy rapidly but lacked firepower. To address this mismatch, the Army decided to radically transform itself into a new "Future Force." The Army expects the Future Force to be organized, manned, equipped, and trained for prompt and sustained land combat. This translates into a force that is responsive, technologically advanced, and versatile. These qualities are intended to ensure the Future Force's long-term dominance over evolving, sophisticated threats. The Future Force is to be offensively oriented and will employ revolutionary operational concepts, enabled by new technology. This force is to fight very differently than the Army has in the past, using easily transportable lightweight vehicles rather than traditional heavily armored vehicles. The Army envisions a new way of fighting that depends on networking the force, which involves linking people, platforms, weapons, and sensors seamlessly together. The Army has determined that it needs more agile forces. Agile forces would possess the ability to seamlessly and quickly transition among various types of operations, from support operations to warfighting and back again. They would adapt faster than the enemy, thereby denying it the initiative. Agile forces would allow commanders of small units the authority and high-quality information to act quickly in response to dynamic situations. To be successful, therefore, the transformation must include more than new weapons. It must be extensive, encompassing tactics and doctrine as well as the very culture and organization of the Army. FCS will provide the majority of the weapons and sensor platforms that compose the new brigade-like modular units of the Future Force known as Units of Action. Each unit is to be a rapidly deployable fighting organization about the size of a current Army brigade but with the combat power and lethality of the current larger division. The Army also expects FCS-equipped units of action to provide significant warfighting capabilities to the overall joint force. The Army is reorganizing its current forces into modular, brigade-based units akin to units of action. FCS is a family of 18 manned and unmanned ground vehicles, air vehicles, sensors, and munitions that will be linked by an information network. These include, among other things, eight new ground vehicles to replace current vehicles such as tanks, infantry carriers, and self-propelled howitzers; four different unmanned aerial vehicles; several unmanned ground vehicles; and attack missiles that can be positioned in a box-like structure. The manned ground vehicles are to be a fraction of the weight of current weapons such as the Abrams tank and Bradley fighting vehicle, yet are to be as lethal and survivable. At a fundamental level, the FCS concept is to replace mass with superior information; that is, to see and hit the enemy first, rather than to rely on heavy armor to withstand a hit. The essence of the FCS concept itself—to provide the lethality and survivability of the current heavy force with the sustainability and responsiveness of a force that weighs a fraction as much—has the intrinsic attraction of doing more with less.
The FCS concept has a number of progressive features that demonstrate the Army's desire to be proactive in its approach to preparing for potential future conflicts and its willingness to break with tradition in developing an appropriate response to the changing scope of modern warfare. If successful, the program will leverage the individual capabilities of weapons and platforms and will facilitate interoperability and open system designs. This is a significant improvement over the traditional approach of building superior individual weapons that must be netted together after the fact. Also, the system-of-systems network and weapons could give managers the flexibility to make best-value tradeoffs across traditional program lines. This transformation of the Army, in terms of both operations and equipment, is under way with the full cooperation of the Army warfighter community. In fact, the development and acquisition of FCS is being accomplished through a collaborative relationship between the developer (program manager), the contractor, and the warfighter community. The FCS program was approved to start system development and demonstration in May 2003. On July 21, 2004, the Army announced its plans to restructure the program. The restructuring responded to direction from the Army Chief of Staff and addresses risks and other issues identified by external analyses. Its objectives include:

- Spinning off ripe FCS capabilities to current force units;
- Meeting Congressional language for fielding the Non-Line of Sight Cannon;
- Retaining the system-of-systems focus and fielding all 18 systems;
- Increasing the overall schedule by 4 years; and
- Developing a dedicated evaluation unit to demonstrate FCS capabilities.

The program restructuring contained several features that reduce risk—adding 4 additional years to develop and mature the manned ground vehicles, adding demonstrations and experimentation, and establishing an evaluation unit to demonstrate FCS capabilities. The program restructuring also adds scope to the program by reintroducing four deferred systems, adding four discrete spirals of FCS capabilities to the current force, and accelerating the development of the network. About $6.1 billion was added to the system development and demonstration contract, and the Army has recently announced that the detailed revision of the contract has been completed. To develop the information on whether the FCS program was following a knowledge-based acquisition strategy and the current status of that strategy, we interviewed officials of the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Secretary of Defense's Cost Analysis Improvement Group; the Assistant Secretary of the Army (Acquisition, Logistics, and Technology); the Army's Training and Doctrine Command; the Surface Deployment and Distribution Command; the Program Manager for the Unit of Action (previously known as Future Combat Systems); the Future Combat Systems Lead Systems Integrator; and LSI One Team contractors. We reviewed, among other documents, the Future Combat Systems' Operational Requirements Document, the Acquisition Strategy Report, the Baseline Cost Report, the Critical Technology Assessment and Technology Risk Mitigation Plans, and the Integrated Master Schedule. We attended the FCS Management Quarterly Reviews, In-Process Reviews, and Board of Directors Reviews.
In our assessment of the FCS, we used the knowledge-based acquisition practices drawn from our large body of past work as well as DOD's acquisition policy and the experiences of other programs. We discussed the issues presented in this statement with officials from the Army and the Office of the Secretary of Defense and made several changes as a result. We performed our review from May 2004 to March 2005 in accordance with generally accepted auditing standards. The FCS program faces significant challenges in setting requirements, developing systems, financing development, and managing the effort. It is the largest and most complex acquisition ever attempted by the Army. The Army wants the FCS-equipped unit of action to be as lethal and survivable as the current heavy force, but significantly more responsive and sustainable. For the unit of action to be lethal, it must have the capability to address the combat situation, set conditions, maneuver to positions of advantage, and engage enemy formations at longer ranges and with greater precision than the current force. To provide this level of lethality and reduce the risk of detection, FCS must provide high single-shot weapon effectiveness. To be as survivable as the current heavy force, the unit of action must find and kill the enemy before being seen and identified. The individual FCS systems will also rely on a layered system of protection, involving several technologies, that lowers the chances of a vehicle or other system being seen and hit by the enemy. To be responsive, the unit of action must be able to rapidly deploy anywhere in the world, be rapidly transportable by various means—particularly by the C-130 aircraft—and be ready to fight upon arrival. To facilitate rapid transportability on the battlefield, FCS vehicles are to match the weight and size constraints of the C-130 aircraft. The unit of action is to be capable of sustaining itself for periods of 3 to 7 days, depending on the level of conflict—necessitating a small logistics footprint. This requires subsystems with high reliability and low maintenance, reduced demand for fuel and water, highly effective weapons, and fuel-efficient engines. Meeting all these requirements is unprecedented, not only because of the difficulty each represents individually but because the solution for one requirement may work against another. For example, solutions for lethality could increase vehicle weight and size. Solutions for survivability could increase complexity and lower reliability. The performance of the information network is the linchpin for meeting the other requirements: the quality and speed of information will enable the lethality and survivability of smaller vehicles, and smaller vehicles in turn enable responsiveness and sustainability. In the Army's own words, the FCS is "the greatest technology and integration challenge the Army has ever undertaken." The Army intends to concurrently develop a complex system-of-systems—an extensive information network and 18 major weapon systems. The sheer scope of the technological leap required for the FCS involves many elements.
For example:

- A first-of-a-kind network will have to be developed, entailing unprecedented capabilities—on-the-move communications, high-speed data transmission, dramatically increased bandwidth, and simultaneous voice, data, and video.
- The design and integration of 18 major weapon systems or platforms has to be done simultaneously and within strict size and weight limitations.
- At least 53 technologies that are considered critical to achieving FCS' critical performance capabilities will need to be matured and integrated into the system-of-systems.
- The development, demonstration, and production of as many as 157 complementary systems will need to be synchronized with the FCS content and schedule. This will also involve developing about 100 network interfaces so that the FCS can be interoperable with other Army and joint forces.
- At least an estimated 34 million lines of software code will need to be generated (about double that of the Joint Strike Fighter, which had been the largest defense undertaking in terms of software to be developed).

Based on the restructured program, the FCS program office initially estimated that FCS will require $28.0 billion for research and development and around $79.9 billion for the procurement of 15 units of action, for a total program cost of at least $107.9 billion. These are fiscal year 2005 dollars. Since this estimate, the Army has released an updated research and development cost estimate of $30.3 billion in then-year dollars. An updated procurement estimate is not yet available. The Army is continuing to refine these cost estimates. As estimated, the FCS will command a significant share of the Army's acquisition budget, particularly that of ground combat vehicles, for the foreseeable future. In fiscal year 2006, the FCS budget request of $3.4 billion accounts for 65 percent of the Army's proposed spending on programs in system development and demonstration and 35 percent of that expected for all research, development, test, and evaluation activities. As the FCS begins to command large budgets, it will compete with other major financial demands. Current military operations, such as those in Afghanistan and Iraq, require continued funding. Since September 2001, DOD has needed over $240 billion in supplemental appropriations to support the global war on terrorism. Current operations are also causing faster wear on existing weapons, which will need refurbishment or replacement sooner than planned. The equipment used by the current force, such as Abrams tanks and Bradley Fighting Vehicles, is expected to remain in the active inventory until at least 2030. The cost to upgrade and maintain this equipment over that length of time has not been estimated but could be substantial. Also, the cost of converting current forces to new modular, brigade-based units is expected to be at least $48 billion. Further, FCS is part of a significant surge in the demand for new weapons. Just 4 years ago, the top 5 weapon systems cost about $280 billion; today, in the same base year dollars, the top 5 weapon systems cost about $521 billion. If megasystems like FCS are estimated and managed with traditional margins of error, the financial consequences are huge, especially in light of a constrained discretionary budget. The Army has employed a management approach that centers on a Lead System Integrator (LSI) and a non-standard contracting instrument, known as an Other Transaction Agreement (OTA).
The Army advised us that it did not believe it had the resources or flexibility to use its traditional acquisition process to field a program as complex as FCS under the aggressive timeline established by the then-Army Chief of Staff. Although there is no complete consensus on the definition of an LSI, the LSIs we are aware of appear to be prime contractors with increased program management responsibilities. These responsibilities have included greater involvement in requirements development, design, and source selection of major system and subsystem subcontractors. The government has also used the LSI approach on programs that require system-of-systems integration. The Army selected Boeing as the LSI for the FCS system development and demonstration phase in May 2003. The Army and Boeing established a One-Team management approach with several first-tier subcontractors to execute the program. According to the Army, Boeing has awarded 20 of 24 first-tier subcontracts to 17 different subcontractors. The One-Team members and their responsibilities are depicted in table 1.

Boeing was awarded the LSI role under an OTA, which is not subject to the Federal Acquisition Regulation (FAR). Consequently, when using an OTA, DOD contracting officials have considerable flexibility to negotiate the agreement’s terms and conditions. This flexibility requires DOD to use good business sense and to incorporate appropriate safeguards to protect the government’s interests. The OTA used for FCS includes several FAR or Defense FAR Supplement clauses, many of which flow down to subcontracts. The value of the agreement between the Army and Boeing is approximately $21 billion; it is a cost-reimbursement arrangement.

Congress has incrementally expanded the use and scope of other transaction authority since first authorizing its use more than a decade ago. In 1989, Congress gave DOD, acting through the Defense Advanced Research Projects Agency (DARPA), authority to temporarily use other transactions for basic, applied, and advanced research projects. In 1991, Congress made this authority permanent and extended it to the military departments. In 1993, Congress enacted Section 845 of the National Defense Authorization Act for Fiscal Year 1994, which provided DARPA with authority to use, for a 3-year period, other transactions to carry out prototype projects directly relevant to weapons or weapon systems proposed to be acquired or developed by DOD. Subsequent amendments have extended this authority to the military departments and other defense agencies. Most recently, the National Defense Authorization Act for Fiscal Year 2004 extended the prototype project authority until 2008 and provided for a pilot program to transition some other transaction prototype projects to follow-on production contracting.

According to program officials, under the LSI and OTA arrangement on FCS, the Army primarily participates in the program through Integrated Product Teams that are used to make coordinated management decisions about issues related to requirements, design, horizontal integration, and source selection.

During the past year, the FCS underwent a significant restructuring, which added 4 years to the schedule for reducing risk, increasing the demonstration of FCS capabilities, and harvesting successes for the current force. Yet even with these improvements, the FCS is still at significant risk of not delivering planned capability within budgeted resources.
This risk stems from the scope of the program’s technical challenges and the low level of knowledge demonstrated thus far. Our previous work has shown that program managers can improve their chances of successfully delivering a product if they employ a knowledge-based decision-making process. We have found that for a program to deliver a successful product within available resources, managers should build high levels of demonstrated knowledge before significant commitments are made. In essence, knowledge supplants risk over time. This building of knowledge can be described in three levels that should be attained over the course of a program. First, at program start, the customer’s needs should match the developer’s available resources—mature technologies, time, and funding. An indication of this match is the demonstrated maturity of the technologies needed to meet customer needs. Second, about midway through development, the product’s design should be stable and demonstrate that it is capable of meeting performance requirements. The critical design review is the vehicle for making this determination and generally signifies the point at which the program is ready to start building production-representative prototypes. Third, by the time of the production decision, the product must be shown to be producible within cost, schedule, and quality targets and have demonstrated its reliability. It is also the point at which the design must demonstrate that it performs as needed through realistic system-level testing.

The three levels of knowledge are related, in that a delay in attaining one delays those that follow. Thus, if the technologies needed to meet requirements are not mature, design and production maturity will be delayed. On the successful commercial and defense programs we have reviewed, managers were careful to conduct development of technology separately from and ahead of the development of the product. For this reason, the first knowledge level is the most important for improving the chances of developing a weapon system within cost and schedule estimates.

DOD’s acquisition policy has adopted the knowledge-based approach to acquisitions. DOD policy requires program managers to provide knowledge about key aspects of a system at key points in the acquisition process. Program managers are also required to reduce integration risk and demonstrate product design prior to the design readiness review and to reduce manufacturing risk and demonstrate producibility prior to full-rate production. DOD programs that have not attained these levels of knowledge have experienced cost increases and schedule delays. We have recently reported on such experiences with the F/A-22, the Joint Strike Fighter, the Airborne Laser, and the Space Based Infrared System High. For example, the $245 billion Joint Strike Fighter’s acquisition strategy does not embrace evolutionary, knowledge-based techniques intended to reduce risks. Key decisions, such as its planned 2007 production decision, are expected to occur before critical knowledge is captured. If time were taken now to gain knowledge, DOD could avoid placing sizable investments in production capabilities at risk of expensive changes.

The FCS program has proceeded with low levels of knowledge. In fact, most of the activities that have taken place during its first 2 years should have been completed before starting system development and demonstration. It may be several years before the program reaches the level of knowledge it should have had at program start.
Consequently, the Army is depending on a strategy that must concurrently define requirements, develop technology, design products, and test products. Progress in executing the program thus far does not inspire confidence: the requirements process is taking longer than planned, technology maturity may actually have regressed, and a program that is critical for the FCS network has recently run into problems and has been delayed.

Figure 2 depicts how the FCS strategy compares with the best practices described above. The white space in figure 2 suggests the knowledge gap between best practices and the FCS program. Clearly, the program has a tremendous amount of ground to cover to close its knowledge gaps to the point that it can hold the design reviews as scheduled and make decisions on building prototypes, testing, and beginning production with confidence. Several other observations can be made from the figure:

- A match between mature technologies and firm requirements was not made at program start.
- The preliminary design review, which ideally is conducted near the program start decision to identify disconnects between the design and the requirements, will be held 5 years into the program.
- The critical design review, normally held midway through development, is scheduled to take place in the seventh year of a nine-year program.
- The first test of all FCS elements will take place after the production decision.

The FCS program entered system development and demonstration without demonstrating a match between resources and requirements, and will not be in a position to do so for a number of years. The Army now expects to have a reasonably well-defined set of requirements by the October 2006 interim preliminary design review. The Army has been working diligently to define these requirements, but the task is very difficult given that there are over 10,000 specific system-of-systems requirements that must collectively deliver the needed lethality, survivability, responsiveness, and sustainability. For example, the Army is conducting at least 120 studies to identify the design tradeoffs necessary before firming up requirements. As of December 2004, 69 remained to be completed. Those to be completed will guide key decisions on the FCS, such as the weight and lethality required of the manned ground vehicles.

On the resources side, last year we reported that 75 percent of FCS technologies were immature when the program started in 2003; a September 2004 independent assessment has since shown that only 1 of the more than 50 FCS critical technologies is fully mature. The Army employed lower standards than recommended by best practices or DOD policy in determining technologies acceptable for the FCS program. As a result, it will have to develop numerous technologies on a tight schedule and in an environment that is designed for product development. If all goes as planned, the Army estimates that most of the critical technologies will reach a basic level of maturity by the 2010 critical design review and full maturity by the production decision. This type of technical knowledge is critical to the process of setting realistic requirements, which are needed now. In addition, a program critical to the FCS network and a key element of FCS’ first spiral, the Joint Tactical Radio System, recently encountered technical problems and may be delayed 2 years. We provide more detail on this program later.
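Viewed against the three knowledge points described earlier, the gaps in the FCS schedule can be summarized compactly. The sketch below is purely illustrative: it tabulates, in a simple data structure of our own devising (not anything drawn from DOD policy or the FCS program itself), each best-practice knowledge point against the FCS status reported in this statement.

```python
# Illustrative only: a tabulation of the three knowledge points against
# the FCS status reported in this statement. Structure and names are ours.
from dataclasses import dataclass

@dataclass
class KnowledgePoint:
    milestone: str       # decision point in the acquisition cycle
    best_practice: str   # what should be demonstrated by this point
    fcs_status: str      # FCS status as reported in this statement
    attained: bool

FCS_KNOWLEDGE_POINTS = [
    KnowledgePoint(
        "Program start (2003)",
        "Mature technologies matched to firm requirements",
        "75 percent of technologies immature; requirements not expected "
        "to be reasonably well defined until the October 2006 review",
        False,
    ),
    KnowledgePoint(
        "Critical design review (2010)",
        "Stable design demonstrated about midway through development",
        "Scheduled in year 7 of a 9-year program, just 2 years before "
        "the production decision",
        False,
    ),
    KnowledgePoint(
        "Production decision (2012)",
        "Producibility and reliability shown via system-level testing "
        "of production-representative prototypes",
        "First demonstration of all 18 systems and the network not "
        "expected until 2013",
        False,
    ),
]

for kp in FCS_KNOWLEDGE_POINTS:
    status = "met" if kp.attained else "GAP"
    print(f"[{status}] {kp.milestone}: {kp.best_practice} | {kp.fcs_status}")
```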
The FCS strategy will result in much demonstration of actual performance late in development and early in production, as technologies mature, prototypes are tested, and the network and systems are brought together as a system-of-systems. A good deal of the demonstration of the FCS design will take place over a 3-year period, starting with the critical design review in 2010 through the first system-level demonstration of all 18 FCS components and the network in 2013. This compression is due to the desired fielding date of 2014, coupled with the late maturation of technologies and requirements previously discussed.

Ideally, a critical design review should be held midway through development—around 2008 for FCS—to confirm the design is stable enough to build production-representative prototypes for testing. DOD policy refers to the work up to the critical design review as system integration, during which individual components of a system are brought together. The policy refers to the work after the critical design review as system demonstration, during which the system as a whole demonstrates its reliability as well as its ability to work in the intended environment. The building of production-representative prototypes also provides the basis to confirm the maturity of the production processes. For the FCS, the critical design review will be held just 2 years before the production decision. The FCS program is planning to have prototypes available for testing prior to production, but they will not be production-representative prototypes. The Army does not expect to have even a preliminary demonstration of all elements of the FCS system-of-systems until sometime in 2013, the year after the production decision.

This makes the program susceptible to “late-cycle churn,” a condition that we reported on in 2000. Late-cycle churn is a phrase private industry has used to describe the efforts to fix a significant problem that is discovered late in a product’s development. Often, it is a test that reveals the problem. The “churn” refers to the additional—and unanticipated—time, money, and effort that must be invested to overcome the problem. Problems are most serious when they delay product delivery, increase product cost, or “escape” to the customer. The discovery of problems in testing conducted late in development is a fairly common occurrence on DOD programs, as is the attendant late-cycle churn. Often, tests of a full system, such as launching a missile or flying an aircraft, become the vehicles for discovering problems that could have been found out earlier and corrected less expensively. When significant problems are revealed late in a weapon system’s development, the reaction—or churn—can take several forms: extending schedules to increase the investment in more prototypes and testing, terminating the program, or redesigning and modifying weapons that have already made it to the field. While DOD has found it acceptable to accommodate such problems over the years, this will be a difficult proposition for the FCS, given the magnitude of its cost in an increasingly competitive environment for investment funds.

The Army has made some concrete progress in building some of the foundation of the program that will be essential to demonstrating capabilities. For example, the System-of-Systems Integration Lab—where the components and systems will be first tested—has been completed.
Initial versions of the System-of-Systems Common Operating Environment, the middleware that will provide the operating system for FCS software, have been released. Several demonstrations have taken place, including the Precision Attack Munition, the Non-Line-of-Sight Cannon, and several unmanned aerial vehicles. The Army has embarked on an impressive plan to mitigate risk using modeling, simulation, emulation, hardware-in-the-loop testing, and system integration laboratories throughout FCS development. This is a credible approach designed to reduce the dependence on late testing to gain valuable information about design progress. However, on a first-of-a-kind system like the FCS that represents a radical departure from current systems, actual testing of all the components integrated together is the final proof that the system works both as predicted and as needed.

The risks the FCS program faces in executing the acquisition strategy can be seen in the information network and the manned ground vehicles. These two elements are perhaps the long poles in the program, upon which the program’s success depends. The Joint Tactical Radio System (JTRS) and Warfighter Information Network-Tactical (WIN-T) are central pillars of the FCS network. If they do not work as intended, battlefield information will not be sufficient for the Future Force to operate effectively. They are separate programs from the FCS, and their costs are not included in the costs of the FCS. Both JTRS and WIN-T face significant technical challenges and aggressive schedules that threaten the schedule for fielding Future Force capabilities and make their ultimate ability to perform uncertain.

JTRS is a family of radios that is to provide the high-capacity, high-speed information link to vehicles, weapons, aircraft, and soldiers. Because the radios are software-based, they can also be reprogrammed to communicate with the variety of radios currently in use. JTRS is to provide the warfighter with the capability to access maps and other visual data, communicate on the move via voice and video with other units and levels of command, and obtain information directly from battlefield sensors. JTRS can be thought of as the information link or network to support FCS units of action and the combat units on the scene that are engaged directly in an operation. In particular, its wideband networking waveform provides the “pipe” that will enable the FCS vehicles to see and strike first and avoid being hit.

The WIN-T program is to provide the information network for higher military echelons. WIN-T will consist of ground, airborne, and space-based assets within a theater of operations for Army, joint, and allied commanders and provide those commanders with access to intelligence, logistics, and other data critical to making battlefield decisions and supporting battlefield operations. This is information the combat units can access through WIN-T developed equipment and JTRS.

The JTRS program to develop radios for ground vehicles and helicopters—referred to as Cluster 1—began system development in June 2002 with an aggressive schedule, immature technologies, and a lack of clearly defined and stable requirements. These factors have contributed to significant cost, schedule, and performance problems from which the program has not yet recovered. The Army has not been able to mature the technologies needed to provide radios that both generate sufficient power and meet platform size and weight constraints.
Changes in the design are expected to continue after the critical design review, and unit costs may make the radios unaffordable in the quantities desired. Given these challenges, the Army has proposed delaying the program 24 months and adding $458 million to the development effort. However, before approving the restructure, the Office of the Secretary of Defense directed a partial work stoppage, and the program is now focusing its efforts on a scheduled operational assessment of the radio’s functionality to determine the future of the program. Consequently, the radio is not likely to be available for the first spiral of the FCS network, slated for fiscal year 2008, and surrogate radios may be needed to fill the gap.

A second JTRS program, to develop small radios including those that soldiers will carry (referred to as Cluster 5), also entered system development with immature technologies and a lack of well-defined requirements, and faces even greater technical challenges due to the smaller size, weight, and power constraints and the large data processing requirements for the radios. For example, the Cluster 5 program has a requirement for a wideband networking waveform despite its demanding size and power constraints. In addition, the program was delayed in starting system development last year because of a contract bid protest. Consequently, the Cluster 5 radios are not likely to be available for the first FCS spiral either. The Army has acknowledged that surrogate radios and waveforms may be needed for the first spiral of FCS.

The WIN-T program also began with an aggressive acquisition schedule and immature technologies that are not scheduled to mature until after production begins. Backup technologies have been identified, but they offer less capability, and most are immature as well. In addition, the schedule leaves little room for the error correction and rework that may be needed, which could hinder successful cost, schedule, and performance outcomes. More recently, the program strategy was altered to identify a single architecture as soon as possible and to deliver networking and communications capabilities sooner to meet near-term warfighting needs. Specifically, the Army dropped its competitive strategy and is now having the two contractors work together to develop the initial network architecture. A plan for how to develop and field capabilities sooner is still to be determined.

FCS includes eight manned ground vehicles that require critical individual and common technologies to meet required capabilities. For example, the Mounted Combat System will require, among other new technologies, a newly developed lightweight weapon for lethality; a hybrid electric drive system and a high-density engine for mobility; advanced armors, an active protection system, and advanced signature management systems for survivability; a Joint Tactical Radio System with the wideband waveform for communications and network connection; a computer-generated force system for training; and a water generation system for sustainability. At the same time, concepts for the manned ground vehicles have not been decided and are awaiting the results of trade studies that will decide critical design points such as weight and the type of drive system to be used. Under other circumstances, each of the eight manned ground systems would be a major defense acquisition program on par with the Army’s past major ground systems such as the Abrams tank, the Bradley Fighting Vehicle, and the Crusader Artillery System.
As such, each requires a major effort to develop, design, and demonstrate the individual vehicles. Developing these technologies and integrating them into vehicles is made vastly more difficult by the Army’s requirement that the vehicles be transportable by the C-130 cargo aircraft. However, the C-130 can carry the FCS vehicles’ projected weight of 19 tons only 5 percent of the time. In 2004, GAO reported a similar situation with the Stryker vehicles: the 19-ton weight of these vehicles significantly limits the C-130’s range and the size of the force that can be deployed. Currently, FCS vehicle designs are estimated at over 25 tons per vehicle. To meet even this weight, the advanced technologies required put the sophistication of the vehicles on a par with that of fighter aircraft, according to some Army officials. This is proving an extremely difficult requirement to meet without sacrificing lethality, survivability, and sustainability. Currently, program officials are considering other ways to meet the C-130 weight requirement, such as transporting the vehicles with minimal armor and with only a minimal amount of ammunition. As a result, vehicles would have to be armored and loaded upon arrival to be combat ready.

The low levels of knowledge in the FCS program provide an insufficient basis for making cost estimates. The program’s immaturity at the time system development and demonstration began resulted in a relatively low-fidelity cost estimate and open questions about the program’s long-term affordability. Although the program restructuring provides more time to resolve risk and to demonstrate progress, the knowledge base for making a confident estimate is still low. If the FCS cost estimate is not better than past estimates, the likelihood of cost growth will be high, while the prospects for finding more money for the program will be dim. The estimates for the original FCS program and the restructured program are shown in table 2 below. At this point, the FCS cost estimate represents the position of the program office. The Army and the Office of the Secretary of Defense’s Cost Analysis Improvement Group will provide their independent estimates for the May 2005 Milestone B update review.

It is important to keep in mind that the FCS program cost estimate does not reflect all of the costs needed to field FCS capabilities. The costs of the complementary programs are separate and will be substantial. For example, the research and development and procurement costs for the JTRS (Clusters 1 and 5) and the WIN-T programs are expected to be about $34.6 billion (fiscal year 2005 dollars). In addition, the Army has been tasked to provide, by April 2005, an analysis of FCS affordability considering other Army resource priorities, such as modularity. This will be an important analysis, given that estimates of modularity costs have been put at about $48 billion, and costs of current operations and recapitalizing current equipment have been covered by supplemental funding. As can be seen in table 3, substantial investments will be made before key knowledge is gained on how well the system can perform. For example, by the time of the critical design review in 2010, over $20 billion of research and development funds will have been spent. The consequences of even modest cost increases and schedule delays for the FCS would be dramatic. For example, a one-year delay late in FCS development, not an uncommon occurrence for other DOD programs, could cost over $3 billion.
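A rough arithmetic check of the figures above, using only numbers cited in this statement, is sketched below. It is illustrative bookkeeping, not an independent cost estimate, and it deliberately mixes fiscal year 2005 and then-year dollars in the way the statement's own figures do.

```python
# Back-of-the-envelope check of the cost figures cited in this statement.
# All figures are billions of dollars as reported above (fiscal year 2005
# dollars unless noted); this is illustrative bookkeeping, not an estimate.
rdte = 28.0           # program office estimate for research and development
procurement = 79.9    # procurement of 15 units of action
total = rdte + procurement
print(f"FCS program cost: ${total:.1f}B")            # $107.9B, as cited above

jtrs_wint = 34.6      # complementary JTRS and WIN-T programs, budgeted separately
print(f"FCS plus JTRS/WIN-T: ${total + jtrs_wint:.1f}B")

# A one-year slip late in development costs roughly a year of development
# funding. With annual FCS requests in the $3 billion-plus range (the
# fiscal year 2006 request was $3.4 billion), a one-year delay plausibly
# exceeds $3 billion, consistent with the figure cited in the text.
annual_request = 3.4
print(f"Rough cost of a one-year slip: about ${annual_request:.1f}B or more")
```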
Given the size of the program, the financial consequences of following historical patterns of cost and schedule growth could be dire. For any acquisition program, two basic questions can be asked. First, is it worth doing? Second, is it being done the right way? On the first question, the Army makes a compelling case that something must be done to equip its future forces and that such equipment should be more responsive but as effective as current equipment. The answer to the second question is problematic. At this point, the FCS presents a concept that has been laid out in some detail, an architecture or framework for integrating individual capabilities, and an investment strategy for how to acquire those capabilities. There is not enough knowledge to say whether the FCS is doable, much less doable within a predictable frame of time and money. Yet making confident predictions is a reasonable standard for a major acquisition program, given the resource commitments and opportunity costs such programs entail. Against this standard, the FCS is not yet a good fit as an acquisition program.

That having been said, another important question needs to be answered: If the Army needs FCS-like capabilities, what is the best way to advance them to the point at which they can be acquired? Efforts that fall in this area—the transition between the laboratory and the acquisition program—do not yet have a home with the right organizations, resources, and responsibilities to advance them properly. At this point, alternatives to the current FCS strategy warrant consideration. For example, one possible alternative for advancing the maturity of FCS capabilities could entail setting the first spiral or block as the program of record for system development and demonstration. Such a spiral should meet the standards of providing a worthwhile military capability, having mature technology, and having firm requirements. Other capabilities currently in the FCS program could be moved out of system development and demonstration and instead be bundled into advanced technology demonstrations that could develop and experiment with advanced technologies in the more conducive environment of “pre-acquisition” until they are ready to be put into a future spiral. Advancing technologies in this way will enable knowledge to guide decisions on requirements, lower the cost of development, and make for more reasonable cost and schedule estimates for future spirals.

Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or members of the subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841. Individuals making key contributions to this statement include Lily J. Chin, Marcus C. Ferguson, Lawrence D. Gaston, Jr., William R. Graveline, John P. Swain, Robert S. Swierczek, and Carrie R. Wilson.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
FCS is the core of Army efforts to create a lighter, more agile, and more capable force: a $108 billion investment to provide a new generation of 18 manned and unmanned ground vehicles, air vehicles, sensors, and munitions linked by an information network. Although system development and demonstration began in May 2003, the program was restructured in July 2004, including processes to make FCS capabilities available to current forces. GAO has been asked to assess (1) FCS technical and managerial challenges; (2) prospects for delivering FCS within cost and schedule objectives; and (3) options for proceeding.

In its unprecedented complexity, FCS confronts the Army with significant technical and managerial challenges in its requirements, development, finance, and management. Technical challenges include the need for FCS vehicles to be smaller, weigh less, and be as lethal and survivable as current vehicles, which requires (1) a network to collect and deliver vast amounts of intelligence and communications information and (2) individual systems, such as manned ground vehicles, that are as complex as fighter aircraft. Its cost will be very high: its first increment—enough to equip about one-third of the force—will cost over $108 billion, with annual funding requests running from $3 billion to $9 billion per year. The program’s pace and complexity also pose significant management challenges. The Army is using a Lead System Integrator to manage FCS and is using a contracting instrument—an Other Transaction Agreement—that allows for more flexible negotiation of roles, responsibilities, and rights with the integrator.

The FCS is at significant risk of not delivering required capability within budgeted resources. Currently, about 9 1/2 years is allowed from development start to the production decision. DOD typically needs this period of time to develop a single advanced system, yet FCS is far greater in scope. The program’s level of knowledge is far below that suggested by best practices or DOD policy: nearly 2 years after program launch and with $4.6 billion invested, requirements are not firm and only 1 of over 50 technologies is mature. As planned, the program will attain in 2008 the level of knowledge it should have had in 2003, but things are not going as planned. Progress in critical areas—such as the network, software, and requirements—has in fact been slower, and FCS is therefore likely to encounter problems late in development, when they are very costly to correct. Given the scope of the program, the impact of cost growth could be dire. To make FCS an effective acquisition program, different approaches must be considered, including (1) setting the first stage of the program to demonstrate a worthwhile military capability, mature technology, and firm requirements; and (2) bundling its other capabilities into advanced technology demonstrations until they can be put in a future stage, which will provide guidance for decisions on requirements, lower the cost of development, and make for more reasonable cost and schedule estimates for future stages.
FDA’s mission is to protect the public health by ensuring the safety and effectiveness of human drugs marketed in the United States. The agency’s responsibilities begin years before a drug is marketed and continue after a drug’s approval. FDA oversees the drug development process. Among other things, FDA reviews drug sponsors’ proposals for conducting clinical trials, assesses drug sponsors’ applications for the approval of new drugs, and publishes guidance for industry on various topics. Once drugs are marketed in the United States, FDA has the responsibility to continue to monitor their safety and efficacy and to enforce drug sponsors’ compliance with applicable laws and regulations. FDA also annually publishes a list of drugs approved for sale within the United States, the Approved Drug Products with Therapeutic Equivalence Evaluations, also known as the Orange Book. In addition, since February 2005, FDA has provided updates via the Electronic Orange Book on brand-name drug approvals the month they are approved and on generic drug approvals daily.

FDA’s Center for Drug Evaluation and Research is responsible for ensuring the safety and efficacy of drugs. Within this center, the Office of New Drugs is responsible for reviewing new drug applications (NDA), while the Office of Generic Drugs is responsible for reviewing applications for generic drugs, which are abbreviated new drug applications (ANDA). NDAs and ANDAs must be submitted by sponsors and approved by FDA before a new brand-name or generic drug can be marketed in the United States. As part of the approval process, FDA reviews proposed labeling for both brand-name and generic drugs; a drug cannot be marketed without an FDA-approved label. Among other things, a drug’s label contains information for health care providers and specifically cites the conditions and populations the drug has been approved to treat, as well as effective doses of the drug. Sponsors of both new brand-name and generic drugs are required to submit annual reports to FDA that include, for example, updates about the safety and effectiveness of their drugs; these annual reports are one way FDA monitors the safety and efficacy of drugs once they are available for sale. Manufacturers may submit an ANDA to FDA to seek approval to market a generic version of the drug after the period of exclusivity and any patents for a brand-name drug expire.

FDAAA contained three provisions related to antibiotic effectiveness and innovation, each of which required FDA to take certain actions. One provision required FDA to identify breakpoints “where such information is reasonably available,” to periodically update them, and to make these up-to-date breakpoints publicly available within 30 days of identifying or updating them. A second provision extended the duration of market exclusivity from 3 years to 5 years for new drugs that meet certain detailed, scientific criteria. To qualify, an application must be for a new drug consisting of a single enantiomer of a previously approved racemic drug. The application for the drug must also be submitted for approval in a different therapeutic category than the previously approved drug and meet certain other requirements. FDAAA specified that FDA use the therapeutic categories established by the United States Pharmacopeia to determine whether an application has been submitted for a separate therapeutic category than the previously approved drug; specifically, FDA is to use the categories developed by this organization that were in effect on the date of the enactment of FDAAA.
The provision applies to new drugs of any type that meet the criteria, not just antibiotics. (Pub. L. No. 110-85, § 1113, 121 Stat. 823, 976-77 (2007).) A third provision required FDA to convene a public meeting to discuss incentives for developing antibiotics to treat serious and life-threatening infectious diseases. Such incentives are intended to counter some of the business risks a drug sponsor must undertake when developing antibiotics. For example, the Orphan Drug Act provides incentives including a 7-year period of marketing exclusivity to sponsors of approved orphan drugs, a tax credit of 50 percent of the cost of conducting human clinical testing, research grants for clinical testing of new therapies to treat orphan diseases, and exemption from the fees that are typically charged when sponsors submit NDAs for FDA’s review. Sponsors may also be eligible for a faster review of their applications for market approval.

Sponsors of all drugs are required to keep the information on their drug labels accurate. Unlike labels for most other types of drugs, labels for antibiotics contain breakpoints. These breakpoints may continue to change over time, and the sponsors of antibiotics are tasked with the additional responsibility of maintaining up-to-date breakpoints on labels. Although sponsors are required to maintain up-to-date breakpoints on their labels, FDA has acknowledged that many antibiotics are labeled with outdated breakpoints. Outdated breakpoints can result in health care providers unknowingly selecting ineffective treatments, which can also contribute to additional bacterial resistance to antibiotics.

Monitoring breakpoints on labels and keeping them up to date can be a challenging process. The most accurate way to monitor and determine if a breakpoint on a label is up to date is to conduct both clinical trials and laboratory studies, but these can be difficult and expensive and may not be appropriate in all circumstances. For example, clinical trials require the enrollment of large numbers of patients, which may be difficult to achieve, to ensure an understanding of a drug’s safety and effectiveness against specific bacteria. Enrollment may also be difficult for clinical trials involving antibiotic-resistant bacteria. Unlike clinical trials for a new cancer drug, for example, where researchers are able to target drugs to a patient population with a specific type of cancer, this may not necessarily be the case for antibacterial drugs. There are no rapid diagnostic tests available to help a researcher identify patients with antibiotic-resistant infections who would be eligible for such trials. Laboratory studies, such as susceptibility testing, can be less costly than clinical trials; however, they still require significant microbiology expertise. Susceptibility testing reveals an antibiotic’s breakpoint—that is, its ability to kill or inhibit the growth of a specific bacterial pathogen. As such, the results of such tests can provide a sponsor with some data to help update its antibiotic label with more accurate information. Guidelines for developing appropriate susceptibility tests are available from standards-setting organizations, such as the Clinical and Laboratory Standards Institute. Sponsors may obtain information from such organizations to help them conduct susceptibility tests for their antibiotics or otherwise determine if the breakpoints on their antibiotic labels are up to date. According to FDA officials, much of this information is available free online and at conferences.
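In practice, a breakpoint is expressed as a threshold concentration (a minimum inhibitory concentration, or MIC) against which a laboratory compares the MIC measured for a bacterial isolate. The following minimal sketch illustrates those mechanics; the threshold values and names are hypothetical, not drawn from any actual label or standard.

```python
# Hypothetical values throughout: breakpoints are expressed as minimum
# inhibitory concentration (MIC) thresholds; a laboratory compares a
# measured MIC against them to classify an isolate. These numbers are
# not taken from any actual label or standard.
SUSCEPTIBLE_MAX_MIC = 2.0   # micrograms/mL (hypothetical)
RESISTANT_MIN_MIC = 8.0     # micrograms/mL (hypothetical)

def classify_isolate(mic: float) -> str:
    """Classify a bacterial isolate from its measured MIC."""
    if mic <= SUSCEPTIBLE_MAX_MIC:
        return "susceptible"
    if mic >= RESISTANT_MIN_MIC:
        return "resistant"
    return "intermediate"

# The hazard of an outdated breakpoint: if resistance has grown but the
# label's threshold has not been updated, an isolate the drug no longer
# inhibits reliably can still be reported as "susceptible."
for mic in (1.0, 4.0, 16.0):
    print(f"MIC {mic}: {classify_isolate(mic)}")
```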
When new information becomes available that may cause the label to become inaccurate, false, or misleading—such as information on increased bacterial resistance to antibiotics—drug sponsors are responsible for updating their drug labels. Label changes of this type require FDA’s approval. A sponsor must submit an application supplement to FDA with evidence to support the need for a label change. A sponsor’s responsibility for maintaining a drug’s label persists throughout the life cycle of the drug—that is, from the time the drug is first approved until FDA withdraws its approval of the drug. A drug is not considered withdrawn until FDA publishes a Federal Register notice officially announcing its withdrawal. A sponsor may also decide to discontinue manufacturing a drug without withdrawal. Sponsors that decide to discontinue marketing a drug are still responsible for maintaining accurate labels. Unlike a drug that is withdrawn, a discontinued drug for which approval has not been withdrawn is one that the sponsor has stopped marketing, but that it may resume marketing without obtaining permission to do so from FDA. Discontinued drugs are identified as such in the discontinued section of the Orange Book.

Federal regulations allow a generic drug’s label to differ from the label of the corresponding reference-listed drug in certain ways, such as manufacturer name or expiration date. (See 21 C.F.R. § 314.94(a)(8)(iv) (2011).) Otherwise, generic drug sponsors are expected to incorporate changes made to the label of the corresponding reference-listed drug, such as up-to-date breakpoints, into their generic drugs’ labels. A drug maintains its reference-listed drug designation until its approval is withdrawn or a finding is made by FDA that a discontinued reference-listed drug was withdrawn from the market for safety or effectiveness reasons. In either of these cases, FDA will designate a different drug as the reference-listed drug and publish this change in the Orange Book. FDA will generally designate the generic version of the drug with the largest market share as the new reference-listed drug. In this case, the labels of other generic versions of the drug will be expected to follow the label of the newly designated generic, reference-listed drug.
Sponsors that could not submit this evidence within 30 days were advised to provide the agency with a timetable for when they expected to respond with this information. If sponsors determined that their antibiotic labels needed revision, the agency’s letter instructed them to submit a label supplement. FDA’s letters also highlighted to sponsors that all subsequent annual reports should include an evaluation of these breakpoints and document the status of any needed changes to the antibiotic label.

As of November 2011, over 3.5 years after FDA sent its letters, 146, or 70 percent, of the 210 antibiotics are still labeled with breakpoints that have not been updated or confirmed to be up to date. For 78 of the 146 antibiotics, FDA has not yet received a submission regarding the currency of the breakpoints; for 12 of the antibiotics, the sponsors’ submissions are pending FDA review; and for 56 of the antibiotics, FDA determined that the sponsors’ submissions were inaccurate or incomplete and therefore requested a revision or additional information. Thus far, FDA has determined that 64, or 30 percent, of the 210 antibiotics have up-to-date breakpoints (see fig. 1). (See app. II for more details on the status of the labels of the 210 antibiotics.)

One reason so many antibiotics still have breakpoints that FDA has not confirmed to be up to date is that many sponsors have not fulfilled the responsibilities outlined in FDA’s 2008 letters. FDA officials stated that the agency has followed up with sponsors that had not responded at all to the 2008 letters; however, it did not begin to do so until 2010—2 years after it asked sponsors to respond within 30 days—and two sponsors have still not informed FDA when they intend to submit the requested information. FDA officials told us that they routinely monitor the status of all requested submissions that they have not yet received. In particular, they told us that they have contacted sponsors to set time frames for submitting the requested information, and that they follow up with sponsors that do not submit information within the time frames established. FDA has not pursued regulatory action against any of these sponsors. FDA officials stated that the agency could take regulatory action against a sponsor whose label contained outdated breakpoints, as federal regulations require all sponsors of drugs to maintain accurate labels. However, the officials added that in order for FDA to take regulatory action against a sponsor, FDA would first have to be able to prove that the breakpoint on the antibiotic label was not up to date.

Another reason many antibiotics still have breakpoints on their labels that FDA has not confirmed to be up to date is that FDA faced difficulty in keeping up with the workload that resulted from sponsors’ breakpoint submissions. According to FDA officials, it should take 1 to 3 months for the agency to review such submissions when staff are available and the submissions include all of the necessary information. However, it took FDA longer than a year to review many of the submissions it received, and as of November 2011, FDA still had a backlog of five submissions from 2008. FDA officials identified four factors that have contributed to the lengthy time between when the agency received a submission and when it completed its review.
First, FDA officials explained that the submissions sent in response to the agency’s 2008 letter generated a larger number of supplements than normal, adding significantly to FDA’s existing workload of label supplements. Second, some of the submissions required significantly more resources to review than typical label supplements, because of challenging scientific issues or difficulties obtaining data. Third, some of the sponsors’ submissions were inaccurate or did not include all necessary information. Fourth, FDA staff spent a significant amount of time answering questions from sponsors, tracking responses, and following up when needed.

Some of the sponsors we obtained comments from expressed frustration at how long it took FDA to review their submissions, especially given that bacterial resistance to antibiotics is not static and breakpoints may continue to change over time. Specifically, 3 of the 26 sponsors we obtained comments from stated that they are concerned that the breakpoints they submitted may be outdated by the time FDA completes its review. One of these sponsors told us that it was advised by FDA to refrain from submitting new information before the agency completed its review of the sponsor’s previously submitted label supplement. According to the sponsor, FDA officials said that providing new information would result in the sponsor’s submission going to the end of FDA’s review queue.

While the fact that breakpoints on the labels of 146 antibiotics may not be up to date is troubling, there are additional reasons for concern. First, nearly all of these 146 antibiotics are reference-listed drugs—thus, in addition to the labels of these drugs, the labels of the generic antibiotics that follow the labels of the reference-listed antibiotics are also uncertain. Second, because bacterial resistance to antibiotics is not static, some of the breakpoints for the 64 antibiotics that FDA has confirmed through its review as up to date may have since become out of date. Third, FDA’s list of 210 drugs did not include a complete list of all the antibiotics for which sponsors are responsible for evaluating and maintaining the breakpoints on their labels. For example, FDA did not include any brand-name drugs that were discontinued at the time the agency compiled its list, and also did not include some antibiotics that were reference-listed drugs at that time. FDA officials were unsure how many antibiotics were omitted, but estimated that the number was low. Given the uncertainty surrounding the 146 antibiotics whose breakpoints have not yet been confirmed, as well as the antibiotics omitted from FDA’s 2008 request to sponsors, more than two-thirds of reference-listed antibiotic labels may contain out-of-date breakpoints.

Another step FDA took to implement the FDAAA provision regarding preserving the effectiveness of antibiotics was to issue guidance that reminded sponsors of the requirement to maintain accurate labels, and thus their responsibility to keep information about breakpoints up to date. FDA officials stated that in part because the agency received questions in response to its 2008 letters, officials determined that it would be useful to issue guidance. FDA first issued draft guidance in June 2008 and finalized it a year later, in June 2009.
The guidance specified that the sponsors of brand-name and generic antibiotics that are designated as reference-listed drugs are responsible for evaluating the breakpoints on their labels at least annually and should include this evaluation in the sponsor’s annual report to FDA. When we asked for clarification as to whether the guidance language limited this responsibility to the sponsors of those brand-name antibiotics that are reference listed, FDA officials told us that the guidance applied to sponsors of all brand-name antibiotics—both those that were and were not reference listed, including those that are discontinued—as well as sponsors of reference-listed, generic antibiotics. The guidance also described approaches sponsors could take to determine up-to-date breakpoints for their antibiotics. While FDA’s 2008 letters to certain sponsors communicated much of the same information, FDA’s guidance was the first time that FDA specified (1) which sponsors are responsible for evaluating their breakpoints, including that this responsibility applied to sponsors of generic, reference-listed antibiotics, and (2) the frequency with which sponsors needed to perform these evaluations.

FDA has not been systematically tracking whether sponsors have been responsive to the guidance. Specifically, FDA does not know what percentage of antibiotic annual reports have included an evaluation of breakpoints. At our request, FDA reviewed a small sample of annual reports, and this review suggested that sponsors’ responsiveness to the annual reporting responsibility is low. FDA reviewed the most recent annual reports for 19 of the 64 antibiotics that FDA confirmed to be labeled with up-to-date breakpoints after receiving a response to the agency’s 2008 letters. (FDA looked at the subset of the 64 antibiotics that were also brand-name drugs and for which the sponsor had submitted its most recent annual report electronically.) FDA found that 10, or just over half, of these 19 annual reports included an evaluation of the antibiotics’ breakpoints. Because this sample was drawn from a group of antibiotics whose sponsors had already responded to FDA’s 2008 letter with a submission regarding the currency of their breakpoints, the overall rate for all antibiotics is likely even lower. Three of the 19 antibiotics in FDA’s sample had annual reports that noted that a label supplement was recently approved but had not been implemented in time to be reflected in the report. Because bacterial resistance to antibiotics is not static, sponsors that do not follow the guidance by evaluating their breakpoints on a regular basis and sharing the results of their evaluation with FDA are unlikely to be able to maintain accurate labels.

FDA officials stated that they plan to track compliance with the guidance in one of the agency’s drug databases by January 1, 2012. FDA plans to have all annual reports for antibiotics reviewed by FDA microbiologists, who will use a standardized form to document the assessment of the antibiotics’ breakpoints. In addition, the agency plans to track in an FDA database whether the annual report included an evaluation of the antibiotics’ breakpoints. FDA plans to follow up with sponsors that do not include a complete evaluation of antibiotic breakpoints in their annual reports to inform them about what information was missing.
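Pulling the figures in this discussion together, a small worked tally (illustrative bookkeeping only; every input below is a number reported in this report as of November 2011) confirms the percentages cited above.

```python
# Worked tally of the label-status figures reported above, as of
# November 2011. All inputs are as reported in this report; the
# script itself is illustrative bookkeeping only.
total_letters = 210          # antibiotics covered by FDA's 2008 letters
no_submission = 78           # no submission received
pending_review = 12          # submission awaiting FDA review
revision_requested = 56      # submission found inaccurate or incomplete

not_confirmed = no_submission + pending_review + revision_requested
confirmed = total_letters - not_confirmed
print(f"Not confirmed up to date: {not_confirmed} of {total_letters} "
      f"({100 * not_confirmed / total_letters:.0f} percent)")  # 146 (70 percent)
print(f"Confirmed up to date: {confirmed} of {total_letters} "
      f"({100 * confirmed / total_letters:.0f} percent)")      # 64 (30 percent)

# FDA's small sample of annual reports for confirmed antibiotics:
sample, with_evaluation = 19, 10
print(f"Annual reports with a breakpoint evaluation: {with_evaluation} "
      f"of {sample} ({100 * with_evaluation / sample:.0f} percent)")  # just over half
```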
Some sponsors, particularly sponsors of generic, reference-listed antibiotics, may not be following FDA’s guidance because they are confused as to whether the responsibility to evaluate and maintain up-to-date breakpoints on their labels, as described in the guidance, applies to them. Fifteen sponsors we obtained comments from manufactured at least one generic, reference-listed antibiotic—all were responsible for evaluating and maintaining their breakpoints. Of these 15, 7 sponsors expressed some form of confusion regarding their responsibility. Five of these 7 sponsors stated that their strategy for ensuring that the breakpoints on their generic antibiotic labels were up to date was to follow the breakpoints on the label of the corresponding brand-name drug. Two of the 5 were even more specific and added that their generic antibiotics were only designated reference-listed drugs “by default” and that their strategy was to follow the label of the brand-name drug—even if the brand-name drug was discontinued. One other sponsor was unsure whether any of its generic antibiotics were reference-listed drugs or what implications such a designation would have. A seventh sponsor understood the responsibilities associated with having a generic antibiotic that was designated a reference-listed drug, but was under the impression that its generic antibiotic was not a reference-listed drug.

FDA officials told us that it is a sponsor’s responsibility to routinely monitor FDA’s Orange Book to determine if any of its drugs become designated a reference-listed drug. However, FDA’s June 2009 guidance is silent on sponsors’ responsibility to consistently monitor the Orange Book to determine if one of their drugs has become, or ceases to be, a reference-listed drug. The officials acknowledged that there is no process or mechanism for notifying sponsors when one of their drugs becomes, or is no longer, a reference-listed drug. The guidance was also not explicit about FDA’s view that the responsibility described in the guidance also applied to sponsors of discontinued brand-name antibiotics.

The guidance also explained that FDA intended to comply with FDAAA’s requirement that it identify, periodically update, and make publicly available up-to-date breakpoints by using two approaches. First, the guidance explained that the agency would review breakpoints referenced in the labeling of individual drug products and post any approved labels on the Internet. FDA officials told us that this is the approach FDA has thus far used to make up-to-date breakpoints publicly available. Second, FDA’s guidance stated that it would, when appropriate, recognize standards used to determine breakpoints from one or more standards-setting organizations and publish these in the Federal Register. FDA has not yet used this approach and did not mention a specific plan or timetable to do so. FDA officials told us that publishing this information in the Federal Register could make the review process quicker, as sponsors would then have ready access to standards already recognized by FDA. For example, publishing this information may be helpful for some sponsors, such as those that do not have the microbiology expertise to update their own breakpoints.
While FDA officials said that they have been making updated breakpoints publicly available, the agency’s guidance regarding these alternative approaches may be causing confusion among some sponsors that are anticipating the publication of breakpoints from standards-setting organizations in the Federal Register. This was the case for one sponsor we obtained comments from, which stopped purchasing data from a standards-setting organization because it believed FDA would be publishing recognized standards in the Federal Register.

The FDAAA provision that grants extended market exclusivity has not resulted in any sponsors submitting NDAs for antibiotics that qualify for this exclusivity. Additionally, as required by FDAAA, FDA held a public meeting to discuss incentives, such as those available under the Orphan Drug Act, to encourage antibiotic innovation. However, no changes were made to the availability of current incentives, nor were any new incentives established following the public meeting.

To date, drug sponsors, including those we received comments from, have not submitted any NDAs for antibiotics as a result of the FDAAA provision granting additional market exclusivity for new drugs containing single enantiomers of previously approved racemic drugs. According to FDA officials, they have received very few inquiries regarding this provision, and as of November 2011, no NDAs for antibiotics had been submitted that would qualify for this exclusivity. FDA officials noted that because it is a narrowly targeted provision, they are unsure if any existing racemic drug could qualify. None of the drug sponsors from which we obtained comments said that this FDAAA provision provided a sufficient incentive to develop a new antibiotic of this type. FDA officials stated that it was unlikely that this provision would have an impact on antibiotic innovation. The officials stated that the requirement that the single enantiomer of the approved drug be in a separate therapeutic category would be challenging for antibiotic sponsors to meet. The officials noted that this market exclusivity was not limited to antibiotics. One drug sponsor we spoke with stated that it is pursuing this market exclusivity for a drug that is not an antibiotic.

The lack of NDAs for antibiotics submitted in response to this FDAAA provision is consistent with the overall trend in the approval of innovative antibiotic NDAs. The number of annual approvals of antibiotic new molecular entities (NME) from 2001 through 2010 has not changed significantly since the passage of FDAAA. Specifically, the annual number of antibiotic NME approvals was two or less for the years prior to, and one or less for the years following, the enactment of FDAAA. Because drug development is a lengthy process—sponsors spend, on average, 15 years developing a new drug—it may be too early to ascertain the full impact of FDAAA on antibiotic innovation. However, the extended exclusivity provided for in FDAAA is only available to sponsors submitting qualifying NDAs before October 1, 2012.

As required by FDAAA, FDA held a public meeting on April 28, 2008, to explore whether and how existing incentives and potential new incentives could be applied to promote the development of antibiotics, as well as to discuss whether infectious diseases may qualify for grants or other incentives that may promote innovation. The meeting provided an opportunity to gather input from stakeholders and address their concerns.
However, although potential new incentives and changes to current ones were suggested at the meeting, many of these suggestions—such as tax incentives and extended market exclusivities—would require a statutory change. One of the discussion topics at the public meeting related to the circumstances under which antibiotics could qualify for incentives provided under the Orphan Drug Act, which is intended to stimulate the development of drugs for rare diseases—conditions that affect fewer than 200,000 people in the United States. Following the public meeting, FDA responded in writing to an inquiry from one stakeholder to clarify that an antibiotic could qualify for an orphan drug designation when the drug's use is restricted to the treatment of a small population of patients with an infection caused by a specific pathogen.

Our examination of FDA data suggests that orphan drug designation is not common for antibiotics. These data show that the annual number of antibiotics that received an orphan drug designation from 2001 to 2007—when FDAAA was enacted—was three drugs or fewer each year, and the number has remained at this rate through 2010. Additionally, not all antibiotics that have been awarded orphan drug designation have been, or will be, approved for marketing. Of the 15 antibiotics that received an orphan drug designation from 2001 through 2010, only 1 was approved for marketing as of November 2011.

In addition to discussing the applicability of the Orphan Drug Act, the agency gathered input during the public meeting from drug sponsors and other parties—such as those in academia and professional associations—on serious and life-threatening infectious diseases, antibiotic resistance, and incentives for antibiotic innovation. The incentives mentioned as useful mechanisms to encourage the innovation and marketing of antibiotics were both financial and regulatory in nature and are summarized in table 1.

The growing public health threat associated with bacterial resistance to antibiotics makes the development of new antibiotics critical. Although FDAAA contained a provision to encourage the development of certain antibiotics, no sponsor has submitted an application for a new drug that meets the law's specific criteria. FDAAA also recognized that up-to-date breakpoints are vital to preserving the effectiveness of antibiotics. Antibiotic labels containing out-of-date breakpoints can lead clinicians to choose less effective treatments and provide additional opportunities for bacteria to develop resistance. Out-of-date breakpoints on labels of reference-listed antibiotics also have a ripple effect on the accuracy of the labels of other antibiotics, because other sponsors must match the labels of the corresponding reference-listed drugs. However, more than 4 years after FDAAA's enactment, there continues to be uncertainty about the accuracy of the labels of more than two-thirds of reference-listed antibiotics, as well as those of the generic antibiotics that are required to follow these drugs' labels.

The steps FDA has taken since the enactment of FDAAA have been insufficient to ensure that all antibiotics have up-to-date breakpoints on their labels. The agency has acted with neither decisiveness nor a sense of urgency. First, FDA has not yet completed reviewing the submissions it received in response to its 2008 request, and many sponsors still have not provided FDA with needed information.
Further, FDA officials told us that they sent letters to sponsors of 210 antibiotics. These sponsors were responsible for evaluating, maintaining, and, if necessary, updating the breakpoints on their labels; however, FDA's request was not made to all the antibiotic sponsors that held this responsibility. While the agency did follow up with sponsors, this was not done in a timely manner. FDA's review of sponsors' submissions has also been time-consuming; given that sponsors are expected to provide information on the effectiveness of these breakpoints annually, it is unclear how the agency plans to keep up with this workload if sponsors' fulfillment of this responsibility improves.

Second, FDA's issuance of guidance to specify the responsibilities of antibiotics' sponsors to evaluate breakpoints appears to have been unsuccessful at encouraging all sponsors to fulfill these responsibilities. The comments we received from drug sponsors indicate that some antibiotic sponsors remain confused about this responsibility—either because they did not know that their antibiotics were reference-listed drugs or because they interpreted the June 2009 FDA guidance differently than FDA intended. Without formal notification that their antibiotics have been designated as reference-listed drugs and a clarification of their responsibilities, sponsors may continue to be unaware of, or have differing interpretations of, a responsibility that ultimately helps preserve antibiotic effectiveness. The pace of FDA's actions—many of which remain incomplete—means that the majority of antibiotics we examined may have out-of-date breakpoints on their labels, which could result in the prescription of ineffective treatments by health care providers and further contribute to antibiotic resistance. This requires concerted action on the part of the agency to help preserve the effectiveness of currently available antibiotics.

We recommend that the Commissioner of FDA take the following six actions to help ensure that antibiotics are accurately labeled: expeditiously review sponsors' submissions regarding the breakpoints on their antibiotics' labels; take steps to obtain breakpoint information from sponsors that have not yet submitted breakpoint information in response to the 2008 letters sent by the agency; ensure that all sponsors responsible for the annual review of breakpoints on their antibiotics' labels—including discontinued brand-name antibiotics and reference-listed antibiotics designated since 2008—have been reminded of their responsibility to evaluate and maintain up-to-date breakpoints; establish a process to track sponsors' submissions of breakpoint information included in their annual reports to ensure that such information is submitted to FDA and reviewed by the agency in a timely manner; notify sponsors when one of their drugs becomes, or ceases to be, a reference-listed drug; and clarify or provide new guidance on which antibiotic sponsors are responsible for annually evaluating and maintaining up-to-date breakpoints on drug labels.

HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix IV. In its comments, HHS acknowledged the importance of updating antibacterial breakpoints and said that FDA is committed to ensuring that breakpoint information on drug labels is up to date.
Although HHS did not specifically indicate whether it agreed with our recommendations, the agency stated that it will consider all of them as it continues to improve its processes to ensure that antibacterial drug labels contain up-to-date breakpoint information. HHS also stated that FDA has already taken steps to expedite the review of sponsor submissions regarding updated breakpoint information, which is consistent with our recommendations.

In addition, HHS expressed concern that our report did not fully capture the challenges associated with updating the labels of antibacterial drugs. HHS summarized the approach FDA used to address the provision in FDAAA related to antibiotic effectiveness and highlighted the challenges sponsors face in obtaining currently relevant and adequate scientific data to assess antibiotic breakpoints. However, we believe that our report accurately describes the same actions that HHS outlined in its comments. Similarly, we believe that our report acknowledges the challenges surrounding sponsors' responsibility to maintain up-to-date breakpoints. We recognize that these challenges pose difficulties for both sponsors and FDA. However, FDA is ultimately responsible for ensuring that drugs, including antibiotics, are safe and effective. Despite the agency's efforts, 4 years have elapsed since FDA first began contacting drug sponsors regarding the accuracy of the breakpoints on 210 of their antibiotics' labels. Yet there continues to be uncertainty about the accuracy of the labels for two-thirds of these drugs. Given the serious threat to public health posed by antibiotic resistance, we believe that it is important that our recommendations be implemented in order to help preserve the effectiveness of these critical drugs.

Finally, HHS provided us with new information, reporting that as of December 12, 2011, the labeling for 66 antibacterial drugs had been updated or found to be correct. This is an increase of 2 over the 64 antibacterial drugs cited in our report. We include this information here but did not revise our report, as HHS did not provide a complete update regarding all of the 210 antibiotics discussed in this report. HHS also provided technical comments that were incorporated, as appropriate.

We are sending copies of this report to the Secretary of Health and Human Services and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

As one step in FDA's efforts to implement the provision in the Food and Drug Administration Amendments Act of 2007 regarding antibiotic effectiveness, FDA identified 210 antibiotics for which sponsors were responsible for evaluating, maintaining, and, if necessary, updating the breakpoints on their antibiotics' labels. In January and February of 2008, FDA sent letters to the sponsors of these drugs reminding them of the importance of regularly updating the breakpoints on their antibiotic labels.
In addition, the letters requested that sponsors evaluate and maintain the currency of breakpoints included on their labels and, within 30 days, submit evidence to FDA showing that the breakpoints were either current or needed revision. Of the 210 antibiotics, 126 were brand-name antibiotics and 84 were generic antibiotics, manufactured by 39 different sponsors. Table 2 identifies these 39 sponsors and whether the sponsor held a brand-name antibiotic, a generic antibiotic, or both.

[Table 2—showing, for each sponsor, the number of antibiotics held under new drug applications (NDA) and abbreviated new drug applications (ANDA)—is not reproduced here.]

Appendix III: Timeline of FDA Implementation of Certain Food and Drug Administration Amendments Act Provisions

See FDA, Guidance for Industry: Updating Labeling for Susceptibility Test Information in Systemic Antibacterial Drug Products and Antimicrobial Susceptibility Testing Devices (June 2009).

In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Alison Binkowski; Ashley R. Dixon; Cathleen Hamann; Lisa Motley; Patricia Roy; Laurie F. Thurber; and Jocelyn Yin made key contributions to this report.
Antibiotics are critical drugs that have saved millions of lives. Growing bacterial resistance to existing drugs and the fact that few new drugs are in development are public health concerns. The Food and Drug Administration Amendments Act of 2007 (FDAAA) required the Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), to identify, periodically update, and make publicly available up-to-date breakpoints, the concentrations at which bacteria are categorized as susceptible to an antibiotic. Breakpoints are a required part of an antibiotic's label and are used by providers to determine appropriate treatments. FDAAA provided a financial incentive for antibiotic innovation and required FDA to hold a public meeting on antibiotic incentives and innovation. FDAAA directed GAO to report on the impact of these provisions on new drugs. This report (1) assesses FDA's efforts to help preserve antibiotic effectiveness by ensuring breakpoints on labels are up to date and (2) examines the impact of the antibiotic innovation provisions. GAO examined FDA data, guidance, and other documents; interviewed FDA officials; and obtained information from drug sponsors, such as manufacturers, that market antibiotics.

FDA has not taken sufficient steps to ensure that antibiotic labels contain up-to-date breakpoints. FDA designates certain drugs as "reference-listed drugs," and the sponsors of these drugs play an important role in ensuring the accuracy of drug labels. Reference-listed drugs are approved drug products to which generic versions are compared. As of November 2011, FDA had not yet confirmed whether the breakpoints on the majority of reference-listed antibiotics' labels were up to date. FDA contacted sponsors of 210 antibiotics in early 2008 to remind them of the importance of maintaining their labels and requested that they assess whether the breakpoints on their drugs' labels were up to date. Sponsors were asked to submit evidence to FDA showing that the breakpoints were either current or needed revision. As of November 2011, over 3.5 years after FDA contacted sponsors, the agency had not yet confirmed whether the breakpoints on the labels of 70 percent, or 146 of the 210 antibiotics, were up to date. FDA has not ensured that sponsors have fulfilled the responsibilities outlined in the early 2008 letters. For those submissions FDA has received, it has often taken over a year for FDA to complete its review. Officials attributed this delay to reviewers' workload, challenging scientific issues or difficulties in obtaining needed data, and incomplete submissions. FDA also issued guidance to clarify sponsors' responsibility to evaluate and maintain up-to-date breakpoints. The guidance reminded sponsors that they are required to maintain accurate labels and stated that certain sponsors should submit an evaluation of breakpoints on their antibiotic labels to FDA annually. However, FDA has not been systematically tracking whether sponsors are providing these annual updates. Some sponsors remain confused about their responsibility to evaluate and maintain up-to-date breakpoints. At GAO's request, FDA reviewed a small sample of annual reports and determined that few sponsors appear to be responsive to the guidance.

The FDAAA provisions related to antibiotic innovation have not resulted in the submission of new drug applications for antibiotics.
FDAAA extended the period of time during which sponsors of new drugs that meet certain criteria have the exclusive right to market the drug. According to FDA officials, the agency has received very few inquiries regarding this provision and, as of November 2011, no new drug applications for antibiotics had been submitted that would qualify for this exclusivity. None of the drug sponsors GAO received comments from said that this provision provided sufficient incentive to develop a new antibiotic of this type. FDAAA also required that FDA hold a public meeting to discuss whether and how existing or potential incentives could be applied to promote the development of antibiotics. Both financial and regulatory incentives were discussed at FDA's 2008 meeting, including tax incentives for research and development and providing greater regulatory clarity during the drug approval process.

GAO recommends that the Commissioner of FDA take steps to help ensure antibiotic labels contain up-to-date information, such as by expediting the agency's review of breakpoint submissions. HHS said it will consider implementing GAO's recommendations.
ANCs, Indian tribes, and NHOs—i.e., the parent entities of tribal 8(a) firms—can be large, with worldwide operations and revenues in the hundreds of millions of dollars. They can own 8(a) and non-8(a) subsidiaries and sometimes form complicated corporate structures. Figure 1 illustrates a notional corporate structure of an ANC with a holding company—a non-8(a) subsidiary that provides shared administrative services to other subsidiaries for a fee. The figure depicts a mix of 8(a) and non-8(a) subsidiaries that may be only partly owned by the ANC.

According to SBA, in fiscal year 2010, there were over 8,400 firms in the 8(a) program, 354 of which were owned by a tribal entity. For any firm (including tribal firms) to be eligible to participate in the 8(a) program, it must qualify as small under a primary industry size standard, as measured by the number of employees or average revenues from the previous 3 years. In addition, the firm must be, among other things, majority-owned by one or more socially and economically disadvantaged individuals or a qualified entity, such as a tribal entity. Firms approved as 8(a) participants can receive business development assistance from SBA and are eligible to receive contracts that agencies offer to SBA for the 8(a) program. All 8(a) firms, including tribal 8(a) firms, are subject to a 9-year limit on participation in the 8(a) program. During the last 5 years in the program, known as a transitional period, firms are required to obtain a certain percentage of non-8(a) revenue to demonstrate their progress in developing a viable business that is not solely reliant on the 8(a) program. SBA's district offices are responsible for tracking this business mix on an annual basis. If a firm does not meet its required business mix during one of the last 5 years, SBA invokes a plan of remedial action for the next year, in which the firm is to report to SBA on its progress. Until the required mix is demonstrated, the firm will generally not be eligible for sole-source 8(a) contracts.

Congress has provided tribal 8(a) firms with distinct advantages over other 8(a) businesses, in addition to the ability to receive sole-source 8(a) contracts for any amount. In some cases, there are also differences among the advantages provided to firms owned by ANCs, Indian tribes, and NHOs. Table 1 provides more details.

For tribal 8(a) firms, SBA has specific oversight responsibility for accepting the firm into the program—which includes ensuring that the tribal entity owning the firm does not have more than one 8(a) firm in the same primary line of business, as defined by a North American Industry Classification System (NAICS) code—and for annually reviewing 8(a) firms to track their progress in the 8(a) program, including their mix of 8(a) and non-8(a) revenue in the last 5 years in the program and any changes to the firms' business targets, objectives, and goals.

A procuring agency offers, and SBA may accept, a requirement into the 8(a) program either as a competitive procurement—to be competed among all eligible 8(a) firms—or as a sole-source procurement. The agency's offer letter must identify the requirement, any procurement history, the estimated dollar amount, and the NAICS code, among other things. Before accepting a procurement as an 8(a) sole-source contract, SBA is to verify the proposed firm's size status to ensure that it qualifies as small under the identified NAICS code.
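As a rough illustration of the size test described above, the sketch below applies hypothetical thresholds. Actual size standards are set by SBA regulation and vary by industry, so the NAICS codes, dollar figures, and function names here are assumptions for illustration only.

```python
# Minimal sketch (hypothetical thresholds): a firm qualifies as small under
# its primary NAICS code based either on employee count or on its average
# revenues from the previous 3 years, depending on the industry's standard.

SIZE_STANDARDS = {
    "541330": {"max_avg_receipts": 14_000_000},  # revenue-based test (hypothetical)
    "334111": {"max_employees": 1_000},          # employee-based test (hypothetical)
}

def qualifies_as_small(naics: str, annual_receipts: list[float],
                       employees: int) -> bool:
    """Apply whichever test (receipts or employees) the NAICS code uses."""
    standard = SIZE_STANDARDS[naics]
    if "max_avg_receipts" in standard:
        avg = sum(annual_receipts[-3:]) / 3  # average of the previous 3 years
        return avg <= standard["max_avg_receipts"]
    return employees <= standard["max_employees"]

# A firm with $12M, $15M, and $13M in revenue averages about $13.3M, which
# falls under the hypothetical $14M standard for this code.
print(qualifies_as_small("541330", [12e6, 15e6, 13e6], employees=250))  # True
```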
Once accepted into the program, 8(a) firms may pursue contracts in additional lines of work, called secondary NAICS codes. Once a requirement is awarded as an 8(a) contract, it must remain in the 8(a) program unless the procuring agency decides it would like to fulfill the follow-on requirement outside of the program and requests approval from SBA to do so. Procuring agency contracting officers also have responsibilities under the 8(a) program. For all of the agencies in our review, SBA has delegated responsibility for contract administration to the contracting officers through partnership agreements. These responsibilities include, for example, ensuring compliance with the limitations on subcontracting requirements under 8(a) contracts.

SBA's 8(a) program regulations allow for involvement of other businesses and non-disadvantaged individuals as a way of helping the 8(a) business grow and develop. For example, 8(a) firms can hire outside agents to assist them in obtaining 8(a) contracts; however, SBA's revised regulations now prohibit agreements in which agents receive a percentage of the contract value as compensation for their assistance. In addition, large businesses can create a mentor-protégé joint venture with an 8(a) firm to win 8(a) prime contracts or can act as a subcontractor under an 8(a) contract. SBA regulations also limit the percentage of labor costs that 8(a) firms and non-8(a) partners may incur under service and construction contracts: for service contracts with subcontracting activity, the 8(a) firm must incur at least 50 percent of the personnel costs with its own employees (15 percent for construction contracts). Further, a non-disadvantaged individual can own up to a 49 percent interest in an 8(a) firm, retaining his or her percentage of ownership in the profits the firm generates. These arrangements with other businesses or non-disadvantaged individuals can result in a relatively small percentage of contract profits being retained by the tribal entity that owns the 8(a) firm. Figure 2 is an illustrative example of how these arrangements could occur when a tribal 8(a) firm forms a joint venture with a large business under the mentor-protégé program, as allowed by SBA regulations. SBA must approve the mentor/protégé agreement before the two businesses submit an offer as a joint venture to receive exclusion from affiliation (13 C.F.R. §§ 124.513(b)(3) and 124.520(b) and (d)(1)(i)). For 8(a) construction contracts, the prime contractor must incur at least 15 percent of the personnel costs with its own employees; thus, if our illustrative example above were a construction contract, the tribe that owns a 51 percent interest in the firm would receive $31,000 in profit distribution.

In 2006, we reported that ANCs used the 8(a) program as one of many sources of revenue (which could include revenue generated outside of government contracts) to provide benefits to their shareholders. We found that there was no explicit link between the revenues ANCs generated from the 8(a) program and the benefits they provided to shareholders, because ANCs only tracked benefits generated from their consolidated revenue sources. The recently revised 8(a) regulations require tribal 8(a) firms to report annually to SBA on the benefits they are providing their members or community from their participation in the 8(a) program.

From fiscal year 2005 through 2010, federal dollars obligated to tribal 8(a) firms grew from $2.1 billion to $5.5 billion.
Obligations to 8(a) firms owned by ANCs—which represented the majority of these tribal obligations during each fiscal year—rose steadily, from $1.9 billion to $4.7 billion. Obligations to 8(a) firms owned by Indian tribes and NHOs also grew steadily during this time frame, to $690 million and $109 million, respectively, in fiscal year 2010. Total 8(a) obligations (to tribal and non-tribal 8(a) firms) increased from $11.3 billion to $18.8 billion during the 6-year period. Obligations to tribal 8(a) firms represented a 160 percent increase over this time, while obligations to non-tribal 8(a) firms increased 45 percent. Figure 3 shows the growth of non-tribal and tribal 8(a) obligations. Tribal firms represented a very small percentage of all 8(a) firms, but accounted for almost 30 percent of 8(a) obligations—about $5.5 billion—in fiscal year 2010, as shown in table 2.

The percentage of obligations under competitively awarded tribal 8(a) contracts increased from fiscal year 2005 to 2010. In fiscal year 2010, sole-source obligations to ANC 8(a) firms decreased slightly, for the first time since 2005. Table 3 compares the percentage of obligations under competitively awarded contracts for ANC, Indian tribe, and NHO 8(a) firms in fiscal years 2005 and 2010. Even with this increase in obligations under competitively awarded contracts to tribal 8(a) firms, sole-source contracts still accounted for at least 75 percent of all tribal 8(a) obligations annually. In terms of obligations, in fiscal year 2010, sole-source awards to tribal 8(a) firms accounted for $4.3 billion of the total $5.5 billion in tribal 8(a) obligations. Figure 4 shows the breakdown of sole-source and competitive 8(a) obligations for tribal and non-tribal 8(a) firms from fiscal years 2005 through 2010. As shown in figure 4, the percentage of competitive obligations for non-tribal 8(a) firms—about 45 percent in fiscal year 2010—still far outpaces that of tribal 8(a) firms. Further, our analysis of FPDS-NG obligation data for new sole-source awards under the 8(a) program, issued from fiscal years 2005 through 2010, reveals that for both tribal and non-tribal 8(a) firms, obligations on sole-source 8(a) awards increased during the last month of each fiscal year—September.

Contracting officials at the agencies we reviewed, similar to what we reported in 2006, told us that using tribal firms under the 8(a) program allows them to award sole-source contracts of any value quickly, easily, and legally. They further stated that these awards help procuring agencies meet their small business goals, but added that the program offices' preference for using the same firms for follow-on contracts also plays a role. We also examined the methods used by contracting officials to determine price reasonableness in a sole-source environment. In several cases, however, we found that contracting officers were moving away from sole-source tribal 8(a) contracts toward competition. We also found examples where tribal 8(a) contracts that had previously been awarded on a sole-source basis were competed, resulting in savings.

Contracting officials viewed sole-source contract awards to tribal 8(a) firms as a way to expedite the federal acquisition process, avoid some potential bid protests, and help them meet their agencies' small business goals. Prior to SBA's acceptance of any sole-source requirement into the 8(a) program, the procuring agency need only identify a qualified 8(a) firm and obtain approval from SBA to award a contract.
It is the procuring agency’s responsibility to conduct market research, including determining whether offers can be obtained from two or more firms at fair market prices. However, SBA also considers market research requirements to be satisfied when a participant in the 8(a) program self-markets its abilities to a procuring agency and is subsequently offered a sole-source 8(a) requirement. Acquisition planning activities are intended to ensure that the government meets its needs in the most effective, economical, and timely manner possible. Some contracting officers awarded sole-source 8(a) contracts to tribal firms because this approach allowed them to avoid lengthy acquisition planning and market research procedures, thereby expediting For example, documentation in a Department the procurement process.of Homeland Security contract revealed that the contracting official awarded a $96 million sole-source contract to a tribal 8(a) firm because this was the most streamlined approach to obtain services. According to the contract file, the agency saved considerable time in the acquisition process and thereby ensured a timely award. In another example, one contracting officer told us that she sees many more sole-source contracts to tribal 8(a) firms at the end of the fiscal year, likely because of poor acquisition planning. Recalling a time when a program office needed to award a contract quickly during the fourth quarter of the fiscal year, she said she was able to award the contract on a sole-source basis to a tribal 8(a) firm within 2 weeks. She estimated that to award the contract competitively would have taken 60 to 90 days. The A-76 process is a federal government policy which subjects commercial activities to competition and requires agency officials to identify all activities performed by government personnel as either commercial or inherently governmental.OMB, Circular A-76 (Revised), Performance of Commercial Activities 4, (May 29, 2003). The provisions in these appropriations acts allowed DOD to avoid the A-76 process when contracting with tribal 8(a) firms. DOD used the authority in section 8(a) of the Small Business Act to make the sole source awards. See, for example, Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009, Pub. L. No. 110-329 § 8016, 122 Stat. 3623-24 (Sept. 30, 2008). government should be performing a function with its personnel or contracting those functions to private sector firms. DOD officials believed awarding the sole-source contracts was necessary, since concerns about the time frames to conduct the A-76 process—which officials at one base estimated could take up to 3 years—were causing the government employees performing the work to become increasingly concerned about their job security and to seek employment elsewhere. In some cases, as allowed, contracting officials awarded sole-source contracts to tribal 8(a) firms even though market research had revealed other firms capable of performing the work. For example, Army contracting officials’ request for information for medical services resulted in six firms’ submitting comments. However, citing the current contract’s expiration time frame, the contracting officer stated that a successful competition would require a great deal of acquisition planning and, as such, would likely result in a break in services. 
In another example, market research identified 93 potential contractors for a base engineering support requirement, several of which were known to possess the capabilities to handle the requirement if it were competitively solicited. In fact, the previous contract for this requirement had been competitively awarded in the 8(a) program, with seven offers received. Our review of the acquisition plan and discussion with the contracting officer revealed that the sole-source follow-on contract was awarded because of the significant delay in obtaining the statement of work from the program office. Because the requirement was critical and the new contractor would have to "hit the ground running," a sole-source contract was awarded to a tribal 8(a) firm that had subcontracted with the incumbent contractor.

At one Army Corps of Engineers location we visited, contracting officials told us that they put basic ordering agreements (BOA) in place—such as the one we reviewed for design and construction services with a tribal 8(a) firm—because BOAs can be quickly set up, sometimes in only a matter of hours. The regulations require that an agency offer, and SBA accept, each order under a BOA to the 8(a) program prior to award, because the BOA itself is not a contract. As part of this process, SBA would ensure that the tribal 8(a) firm still meets the size standard for the NAICS code for the requirement at the time the order is offered to SBA. However, we found that the contracting officer was not offering each order under this $10 million BOA (which had been awarded in June 2009) to SBA, in violation of FAR and SBA regulations. The DOD contracting official in this case sent notices of the orders to the SBA district office after the award. SBA district officials did not follow up to determine why these orders had not been offered prior to award. By not offering each order under the BOA, there is a risk that a tribal 8(a) firm could outgrow the size standard and be improperly awarded a sole-source contract through the 8(a) program. In subsequent discussions, SBA and an Army Corps of Engineers legal representative confirmed that all orders under BOAs in the 8(a) program should be offered to SBA. According to the legal representative, the contracting office is no longer using BOAs to meet its requirements and is instead using indefinite quantity contracts.

Tribal 8(a) sole-source contracts are also attractive because there are limitations on the ability to protest them. Although 8(a) sole-source awards have been protested, the following issues related to any 8(a) participant may not be challenged by any party, either to SBA or in any administrative forum as part of a bid or other contract protest: (1) the eligibility of the participant for a sole-source or competitive 8(a) requirement, (2) the NAICS code assigned to a sole-source 8(a) requirement, or (3) the size status of a nominated participant for a sole-source 8(a) procurement. According to contracting officials, bid protests can result in significant and costly delays and potentially disrupt critical services. Moreover, the officials stated that responding to bid protests absorbs their already limited time and resources. One tribal 8(a) company, in its marketing materials to the government, mentioned that one of the many benefits of a sole-source award to its company was that it would not be subject to a bid protest.
As examples, 8(a) sole-source awards to tribal 8(a) firms have been protested in Mission Critical Solutions, B-401057, May 4, 2009, and JMX, Inc., B-402643, June 25, 2010. A protest also occurred under one of the contracts we reviewed: after an 8(a) contract was awarded competitively to a tribal firm, the incumbent firm's sister subsidiary, which had competed under the solicitation, protested the award. This sister subsidiary did not receive the award because its proposal relied on the past performance of its sister firm (the incumbent). According to the solicitation's instructions, past performance of sister firms would not be considered as highly as the firm's own past performance. Further, its offer was 86 percent higher than that of the winning tribal 8(a) contractor. As a result of the protest, the expiring contract was extended 5 months, resulting in over $800,000 in additional revenue for the incumbent firm.

Another reason contracting officials gave for awarding sole-source tribal 8(a) contracts is to help their agencies meet their small business prime contracting goals. Some tribal 8(a) firms also recognize that this is an attractive feature and promote it in their marketing materials. However, at one location we visited, agency officials told us that they chose to compete a follow-on procurement outside of the 8(a) program even though they knew it would significantly affect their ability to meet their small business goals. The previous 8(a) contractor had been awarded a 5-year, $250 million contract, and obligations under this contract had helped the agency meet its small business goals. Nevertheless, agency officials, including the small business advocate, thought the potential to obtain a better price and service through full and open competition was more important at that time.

Contracting officials we spoke with noted that some program officials prefer to continue working with specific tribal 8(a) firms, especially when program officials had established a working rapport with the incumbent contractors. Our prior work has shown that program officials generally have a preference for working with incumbent firms. Program officials play an important role in the contracting process—developing requirements, performing market research, and interfacing with contractors. For one Army contract we reviewed, to provide direct health care services at military medical-treatment facilities, program officials had decided that a follow-on sole-source 8(a) contract award to one of the incumbent firm's sister subsidiaries was the best option because of the potential for risk if the procurement start date was not met. Additionally, in its proposal, the incumbent's sister subsidiary highlighted the fact that SBA regulations permitted it to share senior management with the incumbent and that, as a result, their services were provided by the same team.

We found instances in which contracting officials awarded bridge or follow-on sole-source contracts to incumbent tribal 8(a) firms or to their sister subsidiaries for continuity. Forest Service contracting officials had a history of awarding sole-source bridge or follow-on contracts for similar requirements to the same incumbent 8(a) firm or one of its sister subsidiaries. The contracting official responsible for the three contracts in our sample explained that the program office pressured her to continue awarding to this particular firm because the program office believed that awarding the requirement to a new contractor would cause a disruption in services.
One of these contracts—a $125-million sole-source 8(a) contract for computer hardware and enterprise software—was awarded to the firm's sister subsidiary when the incumbent was no longer eligible to receive 8(a) contracts. An email from an official of the sister firm to the contracting officer, suggesting that the new contract be awarded to the sister firm, told the agency that all incumbent personnel working on the contract, as well as equipment, would be transferred over and that essentially the agency "will see only a name change in the firm providing the service."

A contracting official at the Department of Energy told us that she awarded a sole-source contract for facility maintenance and support services to the sister subsidiary of a tribal 8(a) firm because the incumbent firm had graduated from the 8(a) program, thus making it ineligible for the follow-on contract. Further, she stated that this made the transition very easy to manage, since nearly 100 percent of the incumbent's staff transferred directly to the sister firm.

We also found examples where bridge contracts to tribal 8(a) firms were used to ensure continuity while competing the follow-on requirement; however, the transition was not always smooth. In one case, a contracting officer at an Army acquisition activity awarded a 1-year bridge contract to the incumbent tribal 8(a) firm to avoid unnecessary delays and provide sufficient time to compete the requirement in the future. The incumbent's contract—awarded out of a different Army contracting office—had been terminated after one and a half years on the grounds that it was legally insufficient. However, when awarding the bridge contract, contracting officials learned that the incumbent contractor's employees had assisted the Army in developing the follow-on requirement. The contracting officer had the contractor put in place a plan to mitigate this conflict of interest, but still awarded the sole-source bridge contract to meet the immediate need. In another case, Army officials tried to award a task order under an existing contract as a bridge contract to maintain the service while they competed the follow-on award. According to Army officials, the tribal 8(a) firm refused to negotiate, stating it would only agree to a 6-month bridge contract with three 1-year option periods. The contracting officer told us that the officials believed they were in a bind, agreed to the terms, and ended up exercising all 3 option years. The follow-on requirement is currently being competed.

In March 2011, the FAR was revised to incorporate a new rule, pursuant to section 811 of the National Defense Authorization Act for Fiscal Year 2010, which requires a written justification for sole-source 8(a) contracts over $20 million. The justification must be approved by the appropriate officials—dictated by dollar thresholds—and be publicly posted within 14 days of award. This provision may have an impact on how quickly and easily sole-source tribal 8(a) contracts are awarded. The new justification must include, at a minimum: a description of the needs of the agency that will be addressed by the contract; a specification of the statutory provision allowing for the exception to competition; a determination that the use of a sole-source contract is in the best interest of the agency concerned; a determination that the anticipated cost of the contract will be fair; and other matters the head of the agency would like included.
While these requirements were not in effect for the contracts we reviewed, we discussed with contracting officials the potential impact on future sole-source tribal 8(a) awards. Their opinions varied. Several officials stated that it will be more difficult to award sole-source contracts to tribal 8(a) firms, and in some cases these officials said they were pleased to have a tool to encourage program offices to increase competition. Others thought it would make no difference, stating that the justification is simply additional paperwork for the contract file. Still others stated that the new requirement will not affect them because their office had already moved away from awarding sole-source 8(a) contracts to tribally owned firms toward more competition. Some officials attributed this change in attitude in part to congressional and media attention on large-dollar, sole-source awards to tribally owned firms.

When awarding an 8(a) contract, contracting officers are required to determine that the overall price is a fair market price, which can be done through a cost or price analysis. The fair market price does not have to be the lowest price. However, in a sole-source environment, there are increased concerns that the prices may not be the best for the government, as competition is the cornerstone of the acquisition system and a critical tool for achieving the best possible return on investment for taxpayers. These concerns would be no different under non-tribal 8(a) sole-source contracts. We found that contracting officials used various methods to determine the reasonableness of contractors' proposed costs or prices. We also found examples where the follow-on requirements were subsequently competed and agency officials estimated savings.

In finding a fair market price, contracting officers must first determine that the costs or prices proposed are fair and reasonable. According to the FAR, price analysis shall be used when certified cost or pricing data are not required. Price analysis is the process of examining and evaluating a proposed price without evaluating its separate cost elements and proposed profit. One of the preferred price analysis techniques is comparing proposed prices from more than one contractor in response to a competitive solicitation, as adequate price competition establishes a fair and reasonable price. The other preferred price analysis method is a comparison to historical pricing for the same or similar items. When using this method, however, the contracting officer must ensure that the prior pricing is a valid basis for comparison, such as by ensuring that significant time has not elapsed between the prior acquisition and the present one. In addition, the prior price must be adjusted to account for materially differing terms and conditions, quantities, and market and economic factors. If contracting officers determine that these two techniques are unavailable or insufficient, they are encouraged to use other methods appropriate to the circumstances, such as comparison with competitive published price lists or independent government estimates.

The FAR also states that cost analysis shall be used to evaluate the reasonableness of individual cost elements when certified cost or pricing data are required (price analysis, however, is still used to determine that the overall price offered is fair and reasonable). Cost analysis is the review and evaluation of any separate cost elements and profit or fee in an offeror's or contractor's proposal.
Some cost analysis techniques include evaluating the government's need for proposed cost elements, verifying labor rates, or comparing proposed costs to actual costs previously incurred by the same offeror. Cost analysis may also be used to determine cost reasonableness or cost realism when a fair and reasonable price cannot be determined through price analysis alone.

For many of the sole-source contracts in our review, agency officials compared contractors' proposed prices to the prices on the prior contract, U.S. General Services Administration (GSA) schedule prices, or pricing data from other sources. The following cases indicate the complexities of this price analysis technique:

A price analyst at the Social Security Administration found that a tribal 8(a) firm's proposed prices for a $100-million sole-source contract were generally 5 to 192 percent higher than the prior, non-8(a) contractor's prices. As a result, the price analyst recommended negotiating price reductions with the tribal firm. The contracting officer then performed an additional analysis of the same proposal and noted that the tribal firm's proposed rates were 8 to 51 percent lower than the prior firm's GSA schedule rates and were at or below the schedule rates of a subcontractor. The documented analysis noted that comparing the proposed rates to the incumbent contractor's rates could be potentially misleading because performance problems also needed to be taken into account and the incumbent contractor had not always provided qualified personnel, among other things. The tribal 8(a) firm's proposed prices were accepted.

For another Social Security Administration contract, the contracting officer evaluated the tribal 8(a) firm's proposed prices for a sole-source, fixed-price contract based on pricing information from the current contract and noticed a significant increase in the tribal 8(a) firm's price for installation and storage of the equipment being purchased. Upon further investigation, the contracting officer learned that the previous pricing had not accounted for substantial government delays that had added to the costs; the new proposal was attempting to appropriately include those costs. The contracting officer noted that the government would work to improve the inefficiencies that were causing this increase in cost, and based on these circumstances, the proposed higher price was determined fair and reasonable.

Independent government cost estimates were also used to determine price reasonableness for the sole-source contracts in our review. The examples below illustrate some challenges faced when the estimates relied in part on outdated costs or inaccurate assumptions:

In one Army contract, the initial independent government estimate had to be revised from about $49 million to about $100 million because it had not taken into account many different factors, such as travel and overtime for subcontractors. The contract was awarded for about $113 million.

In another example at the Army, an independent government estimate was $2.7 million, compared to the contractor's proposal of $4.7 million. The price negotiation memorandum noted that the government's estimate was found to have several missing items, outdated estimates, and inaccurate assumptions. The estimate was used as the primary basis for conducting negotiations with the contractor and for determining that the contractor's higher price was fair and reasonable. The contract was ultimately awarded for about $4.0 million.
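To illustrate the historical-comparison technique described above, the following is a minimal sketch. The labor categories, rates, and escalation factor are hypothetical; the point is the mechanics—adjusting prior prices for elapsed time and flagging large deviations for negotiation, much as the Social Security Administration price analyst did.

```python
# Minimal sketch (hypothetical rates): comparing a sole-source offeror's
# proposed labor rates against historical rates from the prior contract,
# adjusted for elapsed time, as the FAR's comparison method requires.

PRIOR_RATES = {"engineer": 95.00, "technician": 48.00, "clerk": 30.00}
PROPOSED_RATES = {"engineer": 110.00, "technician": 140.00, "clerk": 31.00}
ANNUAL_ESCALATION = 0.03  # hypothetical market/economic adjustment
YEARS_ELAPSED = 2

for category, prior in PRIOR_RATES.items():
    # Adjust the historical rate so the comparison accounts for market
    # and economic factors between the prior acquisition and this one.
    adjusted_prior = prior * (1 + ANNUAL_ESCALATION) ** YEARS_ELAPSED
    proposed = PROPOSED_RATES[category]
    pct_diff = (proposed - adjusted_prior) / adjusted_prior * 100
    flag = "REVIEW" if pct_diff > 10 else "ok"
    print(f"{category:10s} proposed ${proposed:7.2f} vs "
          f"adjusted prior ${adjusted_prior:7.2f} ({pct_diff:+6.1f}%) {flag}")
```

Run against these hypothetical figures, the technician rate comes back roughly 175 percent above the adjusted prior rate and is flagged for negotiation, while the other categories fall within the tolerance band.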
For one Army contract we reviewed, the contracting officer told us that she stopped using a competed, single-award indefinite quantity contract with a tribal 8(a) firm because the firm's proposals for two of three task orders were significantly over the government estimates and government officials did not believe they were getting a fair market price. In this case, the Army had simultaneously competed and awarded the base contract and the first task order. The contractor's proposed price for the second task order, however, was almost $6 million, whereas the initial government estimate was just below $4 million. Contracting and program officials pushed back on the contractor's proposed price, but the firm would not negotiate. The government ultimately awarded the second task order at the contractor's proposed price. The contracting officer told us that the same thing happened on the third task order, so the Army officials canceled the procurement and stopped using that contract.

Some officials told us that developing independent government estimates can be challenging, as the pricing environment can change and the estimate—which may be prepared 6 months to a year prior to contract award—can become outdated before negotiations begin. For example, when construction work is in high demand, prices for those services can increase over the course of a year, according to contracting officers. Other contracting officials told us that they question how independent the government estimates are when the tribal 8(a) incumbent works closely with the program staff who develop the estimates.

For many of the sole-source awards we reviewed, contracting officials requested support from the Defense Contract Audit Agency (DCAA) to evaluate the reasonableness of the proposed costs. Some contracting officials effectively used this support to negotiate a lower overall price. For example, in our review of one DOD contract, DCAA submitted findings to the procuring agency 3 months prior to the award date, citing, among other things, $6.9 million in unsupported costs. Consequently, the contracting officials negotiated a 15 percent reduction of the proposed price, which amounted to a savings of nearly $9 million. In an Army contract, contracting officials agreed with the lower rates suggested by DCAA for certain cost categories and ultimately negotiated those rates with the tribal 8(a) firm, reducing the contract price by over $6 million. In another instance, agency officials awarded a sole-source contract to a tribal 8(a) firm quickly to ensure that critical services were maintained, but asked DCAA to audit the proposal with the understanding that the officials would further negotiate the costs after award based on the findings. DCAA's assessment of the firm's proposed costs was provided 2 months after the contract's award. The audit questioned some of the contractor's proposed costs, such as duplicative labor positions and staff positions that were vacant but for which salaries, wages, fringe benefits, and retirement contributions were included in the final cost of the contract. The contracting officers faced challenges negotiating the contract's price, as they were still negotiating some of these costs with the contractor nearly a year and a half after contract award.

For sole-source procurements, the government and contractor may use what is known as "alpha" procedures, in which they work as a team during negotiations to define or refine requirements and come to agreement on prices.
A number of contracting officers had used this method to work with a tribal 8(a) contractor to agree on a fair market price. One contracting official told us that negotiating prices face-to-face with the contractor using alpha procedures is easier and less time intensive, primarily because he can tell the 8(a) contractor how much funding he has to spend. From there, the contractor can explain to him what the government needs to "take off the table" and what items in the scope of work the contractor can provide at that price. Another contracting official told us that he used alpha procedures because the program office had failed to set all of the contract requirements prior to commencing negotiations with the tribal 8(a) contractor. Contracting officials at one location we visited noted that alpha contracting in a sole-source environment can lead to the best deal for the government for a variety of reasons, such as leveraging the insight of technical experts throughout the price negotiation process and providing a forum for the contractor to ask for additional clarification about the government's requirements. At another location we visited, one contracting official told us that he believed the government got a better price using alpha procedures than by using full and open competition when contracting for construction of identical buildings. He attributed this in part to the fact that the contractor in the alpha process had a better understanding of the government's requirements and the government did not have to go back and correct or make adjustments to the contract.

Recent policy and guidance from agencies, most significantly the Office of Management and Budget (OMB) and DOD, have also emphasized the importance of competition. With regard to ANCs, the Acting Deputy Assistant Secretary of the Army (Procurement) issued a memo in January 2011 stating that high-dollar sole-source awards to 8(a) ANC firms should be the exception rather than the rule and laying out the expectation that these awards be scrutinized to ensure they are in the government's best interest. Further, in November 2011, DOD's Director of Defense Procurement and Acquisition Policy called for a review of all active sole-source contracts to ANCs that were awarded prior to the new requirement for a written justification for awards over $20 million. As part of the review, DOD services, agencies, and activities must review the justifications (if any) that support the contract awards and describe actions to ensure there is no abuse of these types of contracts.

In the sole-source contracts we reviewed, we found examples where the follow-on requirements were subsequently competed, resulting in savings according to agency officials. The Air Force awarded a contract competitively for base operation support and, according to officials, saved about $17 million on a requirement that was valued at over $100 million. Officials stated that the previous contractor had high management costs. At the Army, we reviewed an approximately $8.9 million sole-source contract with a tribal 8(a) firm for one year of medical services. The contracting activity recompeted the follow-on requirement, and the contracting officer estimated savings of $2.3 million annually, for a total of $11.5 million over the life of the contract.
At the Federal Emergency Management Agency, the contracting officer told us that when the follow-on to the sole-source tribal 8(a) contract in our review was competed among small businesses, the labor rates on the new contract were, with one exception, between 5 and 46 percent lower than those under the previous sole-source contract. Department of Energy officials told us that they competed a requirement that was previously awarded sole source to a tribal 8(a) firm, and while they could not estimate dollar savings, they believed they were getting better performance as a result of the competition.

To ensure that 8(a) firms do not pass along the benefits of their contracts to their subcontractors, regulations limit the amount of work that can be performed by subcontractors. Specifically, for service contracts with subcontracting activity, the 8(a) firm must incur at least 50 percent of the personnel costs with its own employees (for general construction contracts, the firm must incur at least 15 percent of the personnel costs). In 2006, we reported that procuring agency contracting officers were not monitoring compliance with the limitations on the percentage of work performed by subcontractors as required—largely because they were confused about whose responsibility it was to do so. Based on our recommendations, SBA took some actions to clarify this issue, including providing training to contracting officers and revising its partnership agreements with procuring agencies. Nevertheless, we have continued to find that monitoring of subcontracting limitations is not routinely occurring, due to a lack of clarity as to who is responsible for the monitoring and uncertainty on the part of contracting officers about how to conduct it. Of the 87 contracts in our review, 71 had one or more subcontractors, and we found no evidence of regular and systematic monitoring of the limitations on subcontracting. Some of these contracts had large dollar values, up to $500 million. When the subcontracting limitations are not being monitored, there is an increased risk that an inappropriate share of the work is being done by large business subcontractors rather than the 8(a) firm. These risks can be significant given the large dollar value of contracts awarded to tribal 8(a) firms.

In response to our 2006 recommendations, SBA clarified in its partnership agreements with the procuring agencies that it is the contracting officer's responsibility to monitor compliance with the limits on subcontracting under 8(a) contracts. In addition, SBA standardized language in its 8(a) acceptance letters to state that contracting officers are responsible for the monitoring. SBA also provided additional training and guidance for agency contracting officers about this responsibility, among other 8(a) contracting requirements. Even with these actions, however, we still found that some contracting officers do not understand that ensuring compliance with the limitations on subcontracting is their responsibility. Some stated that it was SBA's responsibility as part of the annual review process for tribal 8(a) firms, and officials at one agency thought that it was ultimately the prime contractor's responsibility. A contracting official from the State Department told us that he did not have the time or staff to monitor compliance, but he believed that the prime contractor self-monitored because the firm was hiring some subcontractor employees to work for it to ensure that the required work percentages were met.
We found situations where there is an increased risk that the subcontractor may be performing more than the limitations allow. In some cases, these subcontractors were large firms or firms that had graduated from the 8(a) program, yet the government was not monitoring compliance with the limits on subcontracting. For example, in one case, the subcontractor to a tribal 8(a) firm under a base engineering support contract had held the prior contract for the requirement, and the subcontractor’s president had part-ownership in the tribal 8(a) firm. For another contract, to build an airplane hangar at an Air Force base—in which the percentage of work subcontracted was not monitored—the tribal 8(a) firm had subcontracted with a large business that had extensive experience in hangar building. During the negotiation process, after much discussion about the project, a government representative asked the tribal 8(a) firm what work it would be doing; up to that point the subcontractor had been answering all the questions. In another example, for construction of an aircraft facility at another Air Force base, the prime contractor stated in its proposal that it could not meet the 15 percent work requirement, and thus a legal review initially found the pending award to be legally insufficient and unacceptable. The contracting specialist wrote a note on the legal memo stating that the prime contractor would meet the required work percentages, with no additional explanation. Notwithstanding the concerns raised, the contract was awarded. The contracting officer told us that she does not monitor the percentage of work that is subcontracted on this contract.

Although contracting officers should consider all applicable regulations when awarding and administering 8(a) contracts, several contracting officers we spoke with told us they depend primarily on the requirements outlined in the FAR for guidance. The FAR only directs contracting officers to include the “limitations on subcontracting” clause—under which the prime contractor agrees to perform a certain percentage of the contract work itself—in 8(a) contracts. The FAR does not state who is accountable for monitoring compliance with the required percentages. While the partnership agreements between SBA and the agencies clearly state that the procuring agencies are responsible for the monitoring, these agreements are signed by high-level SBA and agency procurement officials; contracting officials may not be aware of the content of the agreements.

Adding to the confusion over which agency is responsible for monitoring subcontracting, in reviewing 8(a) files in the SBA Alaska district office, we found examples where prime contractors had reported to SBA that they were complying with the limitations on subcontracting. However, SBA officials told us that they do not consistently collect this information from 8(a) firms and that it is ultimately the responsibility of the procuring agency to monitor compliance.

Many contracting officials told us they do not know how to monitor the percentage of work that is subcontracted. Based on our review of agency contract files, data were not readily available, making it difficult to determine how much work was being performed by the prime contractor versus the subcontractor. For example, contractor invoices in some of the files we reviewed did not reflect the subcontracting activity. And for those invoices that did include subcontractor information, the separation between labor and materials costs was unclear.
This information would be needed for contracting officers to properly monitor compliance with the limits on subcontracting, which excludes the costs of materials. Some contracting officials noted that the prime contractor itself would have ready access to the subcontracting percentages (such as in its financial systems). One contracting official noted that contractor invoices for time-and-materials services under a contract in our sample identified the subcontracted work, but the invoices for fixed-price services, billed under the same contract, did not. She estimated that it would take her several weeks to calculate the percentage of work that was subcontracted.

A further complication pertains to monitoring subcontracting under indefinite quantity contracts, the government’s use of which is now outpacing stand-alone contracts. Of the 41 indefinite quantity contracts in our sample that had subcontractors, we found no evidence that the subcontracting limits were being routinely monitored. SBA regulations state that the 8(a) participant must demonstrate semi-annually that it has incurred at least 50 percent of personnel costs with its own employees for the combined total of all task or delivery orders at the end of each 6-month period. However, the FAR does not cross-reference this provision or otherwise describe how to monitor subcontract limitations in indefinite quantity contracts.

Contracting officials told us they would appreciate additional guidance regarding methods they should employ to track compliance with the limits on subcontracting. The FAR is silent on this subject, and the SBA 8(a) regulation does not provide detailed instructions on how to do so. In the absence of specific guidance, some of the contracting officers we spoke with pointed to techniques that they have used to try to gauge the amount of work that is subcontracted. For example, one official said that he monitored subcontractors by “walking the ground,” so he can easily sight-check contractor badges to determine who is a prime contractor and who is a subcontractor. In another scenario, officials stated that they visit the worksite to check the company names on the trucks parked there. Others relied on their personal knowledge of the contractors, stating that because they were very familiar with the prime contractor, they would know if the firm was not performing its required percentage of the work. And still others tallied the number of workers employed by the prime versus the subcontractor to get a general picture of the amount subcontracted, but did not calculate the percentage of labor costs associated with the subcontractors. While these actions are ways to get a general sense of subcontracting activity, they are not adequate to determine the extent of personnel costs that are incurred by the contractor.

Many contracting officials also told us they reviewed contractor proposals to verify that the prime contractor planned to perform the required percentage of the work. However, this level of review alone does not ensure compliance with the limitations on subcontracting clause because subcontractors, and the amount of work they do, can change once the contract is awarded. In addition, contracting officers may not even be aware that work is being subcontracted. The tribal 8(a) contractor’s proposal for one contract we reviewed, for example, noted as a benefit to the government that the contractor’s own employees would be indistinguishable from those of its subcontractor.
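To make the indefinite quantity case concrete, the sketch below illustrates the semi-annual test the SBA regulation describes: personnel costs are pooled across all task or delivery orders for each 6-month period before the prime's share is computed. The record layout and figures are hypothetical, and the sketch assumes the labor-versus-materials separation that, as noted above, invoices often do not provide.

```python
# Illustrative sketch of the semi-annual check described in SBA
# regulations for indefinite quantity contracts: personnel costs are
# combined across ALL task or delivery orders in each 6-month period,
# then the prime's share is tested against the 50 percent threshold.
# Record layout and figures are hypothetical.
from collections import defaultdict
from datetime import date

def semiannual_period(d: date, start: date) -> int:
    """Index (0, 1, 2, ...) of the 6-month period a cost falls into."""
    months = (d.year - start.year) * 12 + (d.month - start.month)
    return months // 6

def check_idiq(cost_records, start: date, threshold: float = 0.50):
    """cost_records: iterable of (date, prime_personnel_cost, sub_personnel_cost)."""
    totals = defaultdict(lambda: [0.0, 0.0])  # period -> [prime, sub]
    for when, prime, sub in cost_records:
        period = semiannual_period(when, start)
        totals[period][0] += prime
        totals[period][1] += sub
    return {p: (prime / (prime + sub), prime / (prime + sub) >= threshold)
            for p, (prime, sub) in sorted(totals.items())}

# Two task orders billing into the same periods under one contract:
records = [
    (date(2011, 2, 15), 300_000, 250_000),  # order 1, first 6 months
    (date(2011, 4, 10), 100_000, 200_000),  # order 2, first 6 months
    (date(2011, 8, 30), 500_000, 400_000),  # order 1, second 6 months
]
print(check_idiq(records, start=date(2011, 1, 1)))
# Period 0: 400k of 850k (about 47 percent) -> below the threshold;
# period 1: 500k of 900k (about 56 percent) -> compliant.
```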
We also found cases in our review where contracting officials inadvertently learned that their prime contractors were using subcontractors. One Department of Agriculture official told us that he did not realize certain positions were going to be subcontracted until he questioned a particular wage rate during the negotiation process, and the firm stated that it needed to seek additional information from the subcontractor. In another instance, a contracting official at the Department of Justice was unaware that the prime contractor had subcontracted work until we brought it to her attention based on our review of the contract file. She added that often “the contracting officer is the last to know” about the prime contractor’s hiring of subcontractors, because of a lack of communication among the contractor, program office, and contracting office. In yet another example, a DOD contracting officer said that he only learned that work was being subcontracted when a subcontractor employee was caught speeding on the base.

During our review of contract files, we found a few instances where the file included a recently performed analysis of subcontracting percentages that appeared to have been prepared in anticipation of our visit. In one case, at the Centers for Medicare and Medicaid Services, an analysis, which the contracting officer said was prepared for our visit, showed that the prime contractor had subcontracted almost 72 percent of the total costs for the 3-month base period of a 5-year, almost $205 million contract, but a few years into the contract the overall subcontracting level had dropped to about 40 percent. The contracting officer explained that he knew that the 8(a) firm would have to initially subcontract out a substantial portion of the work, but the expectation was that the contractor would meet the required work percentage over the course of the period of performance. In another case, a recent analysis of the subcontracting percentages for several 6-month periods on an Army indefinite quantity contract showed that the prime contractor was performing the required percentage of the work, but when we asked the contracting officer about the analysis, he was not sure who had completed it or the basis for the figures. In another example, a document in a Food and Drug Administration contract file showed the prime contractor was performing the required percentage of the work. The contracting officer said that, in preparation for our visit, she had requested the analysis from the contractor, as she did not have the information to do it herself. She told us that she requests this information periodically from the vendor; however, there was no record of these periodic analyses in the contract file. We also talked to contracting officials who told us that they requested regular reports from the contractor on the amount of work subcontracted, but when we asked for examples of the reports, none could be provided, and there were no examples of these reports in the contract files.

Effective March 14, 2011, SBA made its first significant revision to the 8(a) program regulations in over 10 years, aimed at clarifying program rules, correcting misinterpretations, and addressing program issues. The revised rules include new requirements that will affect tribal firm participation in the 8(a) program, such as rules related to sole-source follow-on contracts and work performed by joint ventures.
However, SBA will have difficulty enforcing some of these new regulations given the information currently available. Further, SBA, in its regulations or elsewhere, has still not addressed some issues we raised in our 2006 report. Finally, in this review we discuss practices that highlight how some tribal 8(a) firms operate, in effect, like large businesses due to their parent corporation’s backing and relationships with their sister subsidiaries. SBA has not reviewed these practices to determine whether they are acceptable given the business development purpose of the 8(a) program.

Although the recent SBA rule changes are intended, in part, to address tribally owned firms’ participation in the 8(a) program, SBA does not have critical data it needs to implement or enforce compliance with some of the new requirements. These include new restrictions on agencies’ ability to award sole-source follow-on contracts to firms under the same tribal entity and restrictions on work performed by the non-8(a) partner in a joint venture. SBA headquarters officials told us they are currently in the initial stages of developing the requirements for a new system intended to provide necessary data on 8(a) firms, and estimate that it will be operational between September 2012 and January 2013. They are also in the process of rewriting their Standard Operating Procedures for district officials to implement the new regulations; however, they could not estimate at this time when the final version will be completed.

In 2006, we reported that ANC 8(a) firms were taking advantage of their ability to create new subsidiaries to win follow-on work from subsidiaries that had left the 8(a) program. One of the new SBA rules prohibits the award of successive follow-on sole-source 8(a) contracts to multiple firms owned by the same tribal entity. Specifically, agencies are now prohibited from awarding a follow-on 8(a) sole-source contract to another subsidiary firm owned by the same entity—also called a sister subsidiary. In its explanation of this new provision, SBA stated that having one subsidiary take over work previously performed by a sister subsidiary does not advance the business development of two distinct firms. SBA expects that, when it accepts multiple firms under the same tribal entity into the 8(a) program, each firm will operate and grow independently in line with the business development purposes of the program. SBA’s intention was to address a negative perception that businesses could operate in the 8(a) program in perpetuity by changing their structure or form to continue to perform work as they had under previous contracts. As an example of this perception, we found that one tribal 8(a) firm stated in its marketing materials that it would “never graduate” from the 8(a) program. Agency officials told us that it is their general impression that by awarding follow-on contracts to the incumbent firm’s sister subsidiary, they are, for all intents and purposes, working with the same company.

In our current review, we found multiple examples of follow-on sole-source 8(a) contracts being awarded to a sister subsidiary. While these contracts had all been awarded prior to the effective date of the new rule, these examples suggest that it is not unusual for agencies to turn to sister subsidiaries for follow-on sole-source 8(a) contracts. For example:

When we spoke to one contracting officer’s representative with the Army in May 2011, we found that he was unaware of the new regulation.
He explained that when a tribal firm graduates from the 8(a) program, his office would typically award a sole-source follow-on contract to one of the firm’s sister subsidiaries based upon the past performance of the incumbent. Noting that a current sole-source tribal 8(a) contract to provide research and development support was set to expire in 2012 and that the incumbent was graduating from the 8(a) program, he stated that it made sense to award the follow-on to one of the firm’s sister subsidiaries, especially since the incumbent had performed well. After we told him this was no longer permissible under SBA regulations, he said that the new rule put a “kink” in his plans and that he would need to start planning right away to ensure there was adequate time to successfully award the requirement competitively.

In another example from our review, a procuring agency had awarded a follow-on contract to a sister subsidiary without realizing the relationship between the firms. In September 2007, the Social Security Administration awarded a $48 million 8(a) sole-source follow-on contract for information technology support services. The incumbent 8(a) ANC firm recommended that the agency make the award to its protégé, as the incumbent was no longer in the 8(a) program. When we spoke to agency officials in July 2011, the agency was not aware that the protégé firm, which received the follow-on award, was also a sister subsidiary of the incumbent. According to the officials, they will rely on SBA to know if a firm targeted for a follow-on procurement is eligible for an 8(a) award based on the new rules.

Although prohibiting this practice of awarding sole-source follow-on contracts to sister subsidiaries of 8(a) firms is a positive step toward curbing some perceived abuses of the 8(a) program, the required information is not always available to enforce this new rule. For example, SBA’s data system for tracking 8(a) participants does not provide district offices with the full information needed to track compliance. District officials have access only to information on the firms that they service. Yet a number of tribal entities have firms in multiple locations throughout the country, and those firms are serviced by different SBA district offices. To illustrate, one ANC parent company has eight subsidiaries serviced in six different district offices. Because SBA’s Alaska district office services the majority of ANC 8(a) firms, its insight into the activities of those parent corporations’ subsidiaries may be greater than that of other SBA districts, which may service only one of several subsidiaries under the same parent corporation. When we visited the Alaska district office 7 weeks after the new rule had taken effect, we found evidence that SBA had turned down a follow-on contract offer from a procuring agency because the contract violated the new regulation; district officials informed us that they had declined four to five other contract offers for the same reason. However, the officials explained that they maintain paper files and that they would have limited procurement history information—including information about the prior, incumbent firm—unless the requirement had always been serviced by that district.
Conversely, officials at an SBA district office that services relatively few tribal 8(a) firms told us that they have not turned down any offer letters that violate this new regulation, but that they also would not necessarily know if the incumbent was a sister firm given the information they can access in the 8(a) tracking system. SBA headquarters officials are aware of the limitations of the data system and told us that they are currently developing a new system that is intended to provide a more global view of tribal entities to the officials at all district offices. They are also considering ways to access more information on a contract’s procurement history, including linking their new system to FPDS-NG to obtain more information on 8(a) contract awards. Officials reported that, as of September 2011, they were in the process of awarding a contract to develop the system; they estimated it would be operational between September 2012 and January 2013.

Further, SBA regulations require procuring agencies to discuss the requirement’s acquisition history, if any, in their 8(a) offer letters and to provide information on any small business contractors that performed the requirement in the past 2 years. In some cases, however, we found that contracting officers did not include the complete procurement history in their offer letters to SBA even when the requirement had been performed by a prior 8(a) contractor. For one contract we reviewed, the contracting officer had provided no acquisition history in the offer letter to SBA even though he told us that the requirement had previously been performed by the same contractor under another contract awarded by a different agency. He explained that he did not provide the procurement history to SBA because his office had no acquisition history with the requirement. In another example, the contracting officer told SBA there was no acquisition history for the procurement; however, documents in the contract file showed that the agency clearly considered it a follow-on requirement. The contracting officer could not recall why no acquisition history was included in the SBA offer letter, but noted that the scope of work had significantly expanded. SBA district officials also told us that they do not always receive complete procurement history information. In some cases, this is because agency contracting officials are unaware of the full procurement history, which can be a result of contracting officer turnover. Without access to a complete and accurate procurement history, SBA district offices will have difficulty enforcing this new regulation.

Non-8(a) businesses can create a mentor-protégé joint venture with an 8(a) firm to win 8(a) prime contracts. In 2006, we reported that there was a risk that large businesses could take advantage of the 8(a) status of firms for their own benefit and that SBA may not obtain the information necessary to determine if the partnership is working as intended. SBA officials told us that they had seen cases where the non-8(a) partner in a joint venture was performing the vast majority—80 to 90 percent—of the work on a contract. SBA’s new rules require that the 8(a) partner in certain kinds of joint ventures perform a specific portion of the work. Application of this new rule depends on whether the joint venture is populated (i.e., it is a separate legal entity that has its own employees) or unpopulated (i.e., it merely exists through a written agreement and would use the employees of the 8(a) and non-8(a) partners).
The new regulation specifies that the 8(a) partner in (1) an unpopulated joint venture or (2) a populated joint venture with one or more administrative personnel must perform at least 40 percent of the work performed by the joint venture. The previous regulations simply stated that the 8(a) partner must perform a “significant portion” of the contract. SBA officials believe that this is an improvement because it gives an exact measure of how much work should be done by the 8(a) partner, to better ensure that the firm receives significant benefit from the venture. However, the agency does not have the information necessary to implement this new requirement. SBA relies on information from 8(a) firms on their joint venture agreements, but SBA officials told us that they do not always get the information they need to determine how the work would be performed.

For example, one joint venture mentor-protégé agreement we reviewed—approved by SBA but formed prior to the new rule—stated that the 8(a) firm would have full responsibility in overseeing performance of any contract awarded to the venture. It further stated that the 8(a) partner would perform at least 51 percent of the work for the contract, but did not provide any details on how the work would be divided. Questions were subsequently raised about this joint venture. In 2008, DCAA—at the request of Army officials who had concerns about the amount of work the tribal 8(a) firm in this joint venture would perform—found that there was not enough financial information available to perform an assessment of either the joint venture or the tribal 8(a) firm. DCAA noted, however, that the 8(a) firm had only one employee and that a majority of its work had been subcontracted. An SBA official from the district office overseeing the firm said the agency generally receives an annual statement that a firm is complying with joint venture requirements, but does not receive further information on how the work is split between the 8(a) and non-8(a) partner. SBA officials acknowledge having little insight into how joint venture partners share the work, making it difficult to enforce new regulations.

The new rule also requires that, for populated joint ventures, the non-8(a) firm and its affiliates cannot receive subcontracts at any level—first tier or below—under a joint venture 8(a) contract. (The final rule provides an exception if SBA determines that other potential subcontractors are not available, or if the joint venture is populated only with administrative personnel.) For example, the rule would be violated if a joint venture subcontractor further subcontracted work to a firm that was an affiliate of the non-8(a) partner. Thus, enforcing this rule requires knowledge of all subcontractors at all levels, as well as the ability to identify whether any of the subcontractors are affiliated with the non-8(a) partner. Given SBA’s limited insight into subcontracting on 8(a) contracts, this new regulation will be hard to enforce. SBA officials state that they do not see information on planned subcontractors, noting that this information may be included in the contract proposal, which they currently do not review. They also acknowledged that a significant amount of research would be required to uncover any relationship between the non-8(a) firm and all levels of its subcontractors and affiliates. According to SBA headquarters officials, district officials could request contract proposals that would include more information on the planned subcontractors. However, SBA officials do not receive information on changes to the planned subcontractors after contract award.
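The enforcement burden is easy to see in a small sketch: checking the rule amounts to walking every tier of the subcontract tree and testing each firm against an affiliation list, neither of which, as described above, SBA currently receives. All firm names and data structures below are hypothetical.

```python
# Illustrative sketch of what enforcing the populated-joint-venture rule
# would require: a walk of every tier of the subcontract tree, flagging
# any subcontractor affiliated with the non-8(a) partner. Both inputs
# (the tree and the affiliation list) are hypothetical; as noted above,
# SBA does not currently receive this information.

# Each firm mapped to its direct subcontractors (any tier).
subcontract_tree = {
    "JV-Prime": ["Sub-A", "Sub-B"],
    "Sub-A": ["Sub-C"],   # a second-tier subcontract
    "Sub-B": [],
    "Sub-C": [],
}

# Firms known to be affiliates of the non-8(a) joint venture partner.
non_8a_affiliates = {"Sub-C"}

def find_violations(tree, affiliates, root="JV-Prime"):
    """Return affiliated firms holding a subcontract at any tier under root."""
    violations, stack = [], list(tree.get(root, []))
    while stack:
        firm = stack.pop()
        if firm in affiliates:
            violations.append(firm)
        stack.extend(tree.get(firm, []))
    return violations

print(find_violations(subcontract_tree, non_8a_affiliates))
# ['Sub-C'] -- a second-tier subcontract to an affiliate violates the rule
```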
While the new regulations address certain issues pertaining to the primary and secondary lines of business under which 8(a) firms can operate, the rules’ impact on tribal firms, given their special advantages in the program, is not clear. Specifically, SBA has not addressed, in regulation or otherwise, issues we raised in our 2006 report regarding (1) the need for SBA to track the various industries under which multiple 8(a) subsidiaries of one tribal organization are generating revenue and (2) SBA’s statutory requirement to determine if firms in a tribal organization will obtain a substantial unfair competitive advantage in an industry.

In 2006, we reported that SBA was not tracking the business industries in which ANC subsidiaries won 8(a) contracts under secondary NAICS codes. Thus, SBA was not ensuring that a firm’s secondary NAICS codes did not, in effect, become the primary business line under which the firm generated the majority of its revenue. Prior to the recent regulatory changes, if an 8(a) firm outgrew its primary NAICS code, it could still operate in the program and be awarded contracts under one or more of its secondary NAICS codes, as long as it qualified as small for these secondary codes. The new regulations now state that, when an 8(a) participant outgrows the size standard for its primary NAICS code, SBA considers that firm to have met its goals in the program, and SBA may graduate the firm prior to the expiration of its program term. Although this change may shorten the length of time that a tribal 8(a) firm is in the program, its impact is not clear because tribal entities can simply create a new subsidiary with a different stated primary industry, and the subsidiary can continue to work in any industry under secondary NAICS codes. Conversely, non-tribal owners can own only one 8(a) firm in a lifetime.

A second regulatory change allows 8(a) participants to change their primary NAICS code if they can show that they have been performing work in a different industry. Previously, the primary NAICS code identified at the time of application was in effect through the firm’s tenure in the 8(a) program. For tribal 8(a) firms, this new requirement means that, if they are outgrowing the size standards for their initial primary NAICS code, they can change to a secondary code with larger size standards to stay in the program (as long as it is not the same primary code as a sister subsidiary). However, SBA officials have said that firms will have to show that they are moving into a new industry through a thoughtful process and that outgrowing the size standard cannot be the only reason for changing their industry, as this would not be in the spirit of the 8(a) program. At the same time, 8(a) firms are allowed to pursue multiple, diverse lines of business in an unlimited number of secondary NAICS codes. We found that one tribal firm in the 8(a) program had 49 declared NAICS codes, including industrial building construction, investigation services, and religious organizations. Another firm reported 25 different NAICS codes under which it may pursue work, including computer and software stores, advertising agencies, and educational support services.
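The kind of tracking at issue here, rolling up each subsidiary's revenue by NAICS code and flagging parents whose subsidiaries earn most of their revenue under the same code, can be shown in a short sketch. All firm names, codes, and revenue figures below are hypothetical.

```python
# Illustrative sketch of tracking primary revenue generators: roll up
# each subsidiary's revenue by NAICS code, find its actual main revenue
# code, and flag parents with two or more subsidiaries earning most of
# their revenue under the same code. All names and figures are hypothetical.
from collections import defaultdict

# (parent entity, subsidiary, NAICS code, revenue)
revenue_records = [
    ("ParentCo", "Sub1", "236210", 9_000_000),  # industrial building construction
    ("ParentCo", "Sub1", "541512", 1_000_000),
    ("ParentCo", "Sub2", "236210", 7_500_000),
    ("ParentCo", "Sub2", "561210", 2_000_000),
]

def main_revenue_code(records, subsidiary):
    """NAICS code under which a subsidiary earned the most revenue."""
    by_code = defaultdict(float)
    for _, sub, code, rev in records:
        if sub == subsidiary:
            by_code[code] += rev
    return max(by_code, key=by_code.get)

def flag_overlaps(records):
    """Parents whose subsidiaries share the same main revenue-generating code."""
    mains = defaultdict(list)  # (parent, code) -> [subsidiaries]
    for sub in sorted({r[1] for r in records}):
        parent = next(r[0] for r in records if r[1] == sub)
        mains[(parent, main_revenue_code(records, sub))].append(sub)
    return {key: subs for key, subs in mains.items() if len(subs) > 1}

print(flag_overlaps(revenue_records))
# {('ParentCo', '236210'): ['Sub1', 'Sub2']} -- both firms' main revenue
# comes from the same industry despite different declared primary codes
```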
While the regulatory changes are a step in the right direction in enforcing and enhancing the business development aspects of the program, SBA has not taken steps to address a key finding and recommendation from our 2006 report pertaining to tracking secondary lines of business of 8(a) firms under the same ANC. We reported that SBA was not tracking revenue generated under these firms’ secondary lines of business. Thus, SBA was not ensuring that a firm’s secondary NAICS codes did not, in effect, become the primary business line by generating the majority of revenue. This situation could allow a tribal organization to have more than one 8(a) subsidiary perform most of its work under the same primary NAICS code, which SBA regulation does not allow. We recommended that SBA collect data on primary revenue generators for 8(a) ANC firms to ensure that multiple subsidiaries under one parent company were not generating their revenue in the same industry. SBA systems that track 8(a) participant data do not collect information on the industries in which firms generate their income. In fact, in reviewing annual reports that tribal 8(a) firms had submitted to SBA, we found cases where multiple 8(a) firms under the same tribal entity reported generating most of their revenue in the same industry. For example, SBA records showed that six 8(a) firms under one ANC parent entity generated most of their 2009 revenue in the same lines of business, although each firm has declared a unique primary industry. Table 4 shows each subsidiary’s declared primary industry and actual main revenue generators for 2009.

SBA has not addressed another recommendation we made in 2006 as to how it will comply with an existing law requiring the Administrator to determine whether and when one or more ANC firms are obtaining, or are likely to obtain, a substantial unfair competitive advantage in an industry. “Substantial unfair competitive advantage” is not clearly defined in statute or regulation. We found that the SBA Administrator has never made this determination, nor is there a process in place to do so. Making such a determination would result in all the subsidiaries under a tribal entity being considered affiliated and thus no longer considered independent for size purposes. A finding of affiliation with the parent organization or a sister 8(a) firm could result in a tribal 8(a) firm exceeding small business size standards and not being eligible for 8(a) contracts.

In our current review, we found a few cases where the SBA district office had made such an affiliation determination between tribal 8(a) firms and related non-8(a) firms. In one case, SBA’s Alaska district office found affiliation between a tribal 8(a) firm and its part-owner, a business that had previously graduated from the 8(a) program. In another complex situation, a tribal 8(a) firm was 40 percent owned by a large business that was also a subcontractor to one of the 8(a) firm’s sister subsidiaries. SBA eventually determined that there was affiliation between the large business and the sister subsidiary, resulting in the two firms’ revenues being considered together for the sister firm’s size standard determination. As a result, SBA rejected a contract offer for the sister subsidiary where the large business would be a subcontractor, because the firm could not meet the size standards when its revenues were jointly considered with those of the large business.
Figure 6 illustrates the relationships between the parent entity, the 8(a) firm and its sister subsidiary, and the large business. Because SBA has not taken steps to more rigorously determine how to ascertain substantial unfair competitive advantage, there is a risk that tribal 8(a) firms are being considered independent for size determinations when they should be considered affiliated. SBA officials told us that they are in the early stages of drafting a policy that will outline the process for making determinations of unfair competitive advantage.

In 2006, SBA officials told us that the charter of ANCs under ANCSA—economic development for Alaska Natives from a community standpoint—can be in conflict with the business development intent of the 8(a) program. We pointed out ways that ANCs use the 8(a) program differently than individually owned 8(a) businesses do. Congress has stated that the 8(a) program exists exclusively for business development purposes, to help small businesses owned and controlled by the socially and economically disadvantaged compete on an equal basis in the mainstream of the American economy. SBA, in changing its rules to disallow the award of follow-on contracts to tribal 8(a) sister subsidiaries, stated that it expects that two or more firms under the same tribal organization are to operate and grow independently, in line with the business development purposes of the 8(a) program.

However, we found other practices, not addressed under the regulations, that highlight the particular nature of tribal 8(a) firms’ interconnectedness. These practices result in some firms essentially operating like large businesses and not developing as independent 8(a) firms. For example, the tribal firms often have common management and subcontract with each other or otherwise draw resources from one another or from the parent corporation. Access to these additional resources can help promote their significant business growth over a short period of time, sometimes resulting in firms leaving the 8(a) program early after outgrowing their size standards. By not participating in the transition phase of the program, these firms are missing out on some of the business development aspects of the program, such as competing for non-8(a) contracts to demonstrate their progress in developing into viable businesses that are not solely reliant on the 8(a) program. SBA headquarters officials recognize that tribal 8(a) firms have some advantages over other 8(a) firms because of the resources they can draw from their parent organization and sister firms. But SBA has not determined whether these other practices we identified are congruent with the business development purpose of the 8(a) program. SBA officials look at individual firms during annual reviews, but do not consider the consequences of their interconnectedness with sister subsidiaries and the parent company in the areas discussed below.

One way firms under tribal organizations are generally interconnected is through common management. Common management was evident in many of the tribal 8(a) firms’ applications we reviewed. For example, the manager of one 8(a) firm also served as Chief Executive Officer to three sister subsidiaries under the same parent, including a sister subsidiary that provides administrative support services to the “family of companies.” Common management also figures in tribal 8(a) firms’ ability to show potential for success in the 8(a) program.
SBA requires applicants to show potential for success by having at least 2 years of experience in their primary industry or by showing that their managers have technical and management experience in that industry, among other things. Of the 62 tribal 8(a) firms we reviewed, 44 entered the 8(a) program with less than 2 years of experience in their primary industry. Most of the firms demonstrated potential for success by showing corporate managers’ significant experience in the stated primary industry through work with a sister subsidiary. For example, in considering an applicant that was applying to the 8(a) program just 6 months after it was organized, SBA pointed to the extensive managerial and technical experience of the firm’s president, including his previous position as vice president to a sister subsidiary. Further, the interconnectivity of some tribal 8(a) firms is also evident where the same board members oversee multiple firms under their parent entity. For example, we found that a member of the board of directors had served on the board of three different 8(a) subsidiaries, while also serving as a member of the board of directors for the parent entity.

Another way tribal 8(a) firms become interconnected is through subcontracts with their own sister subsidiaries. During negotiations with the Army for an 8(a) contract, one tribal firm noted its ability to quickly subcontract with its sister firms as a benefit. We found that some tribal firms demonstrated their potential for success when they did not have 2 years in business, using these subcontracts as a record of successful performance in their primary industry. Of the 44 firms we reviewed that entered the program with less than 2 years of experience in their industry, we identified 20 that had obtained some initial experience through subcontracts with a sister subsidiary. For example, we reviewed seven firms owned by one Indian tribe, and five of those seven firms used subcontracts from sister firms to demonstrate their ability to successfully perform work in their primary industry.

As another example of firms’ interconnectedness, we found that tribal 8(a) firms can leverage subcontracts from sister subsidiaries to generate required percentages of non-8(a) revenue as the firm progresses in the 8(a) program. During the last 5 years in the program, known as the transition period, firms are required to obtain a certain percentage of non-8(a) revenue to demonstrate their progress in developing a viable business that is not solely reliant on the 8(a) program. In one example, a firm did not meet its non-8(a) revenue requirements in its seventh year in the program. Consequently, the SBA district office placed the firm under remedial action, wherein it was ineligible to receive sole-source 8(a) contracts. However, SBA reinstated the firm after a sister subsidiary awarded it a $20 million subcontract that boosted its non-8(a) revenue to the required annual level. The firm then regained its eligibility to receive sole-source 8(a) contracts.

Tribal 8(a) firms may also cite the past performance of sister firms to demonstrate their own capability to perform under an 8(a) contract. In our review of contract files, we found a number of examples where firms pointed to the past performance of sister subsidiaries in their proposals to demonstrate their capability.
One firm, in its business plan presented to SBA, pointed out that leveraging the past performance of a sister company was extremely important as a basis for demonstrating capability to perform. The firm noted that during its first 2 months of operation, it was often asked to provide past performance documentation and that “this is a requirement that is obviously difficult to meet given that we are a brand new company that has only been just recently certified and approved to begin accepting contracts.” Another firm pointed out that it had an advantage over competitors because of the history of successful contract performance by sister subsidiaries.

Some tribal 8(a) firms promote the fact that they are part of a larger corporate brand and can access resources from their parent organization and sister firms. Even though tribal 8(a) firms must be “small” under the SBA size standard for their primary industry, their ability to leverage these additional resources can vastly increase the breadth and depth of their capabilities. As the following examples show, the firms can operate, in effect, more like large businesses.

One ANC 8(a) firm reported to SBA that it is without “geographical limitations as the ANC presence has been established in 49 states. [The firm] will continue to work with its existing customer base as well as network with agencies familiar with the ANC name.”

One ANC firm reported the intention to transfer staff and management from other subsidiaries as workloads dictate, “with reach back capabilities to access 6,700 employees nationwide and the means of accessing many in-house subject matter experts when necessary.” In another contract, the same firm advertised to a procuring agency that its resources included over 7,000 employees at over 90 locations in 31 states to support construction projects.

For a different ANC 8(a) firm, procuring agency officials noted that the firm had 4,000 employees it could draw from to perform the contract.

One firm owned by an Indian tribe, in describing its prior experience, advertised in its proposal to the Army the overall success of firms under the parent entity in providing services to the federal government and managing contract employees. The firm also stated in the proposal that its performance on the contract would be at the same high level as its successful sister firm that had graduated from the 8(a) program, as the firms share the same senior management.

One ANC has a marketing and proposal services center that is dedicated to supporting all of its subsidiaries in developing cost and technical proposals for government contracts. This ANC also designated an employee to act as the sole point of contact to the SBA for all correspondence and filings for seven of its 8(a) subsidiaries.

An ANC firm stated in its business plan that a benefit of its organizational structure is the ability to operate as a small company while having access to corporate backing “that typically only a large, seasoned company can provide.” Another firm—in its capabilities briefing to a procuring agency—advertised that while the firm is an 8(a) small business, it operates within the resource environment of a large business. In its business plan to SBA, an ANC 8(a) firm listed some large businesses as primary competitors in its market, including Lockheed Martin, Northrop Grumman, CACI, and General Dynamics.
Access to these additional resources, plus the special advantages afforded tribal 8(a) firms, can help promote their significant business growth in the 8(a) program over a short period of time. For example, one tribal 8(a) firm reported average revenues of $31,000 from landscaping contracts when entering the program in 2009. Subsequently, the firm received a $500 million contract for construction. In 2011, the firm reported sales of $21.3 million, an increase of 764 percent from the previous year. In another example, a firm had one employee when it applied to the 8(a) program, but had grown to 124 employees by its first annual review by SBA.

Many tribal 8(a) firms have left the program prior to completing the full 9-year term. Table 5 shows that of the 165 tribal 8(a) firms that have left the program, 70 left prior to completing the full 9-year term. Furthermore, more ANC firms withdrew or graduated from the program early than completed the 9-year term. For some tribal 8(a) firms, their rapid growth prevents them from reaching the transition phase of the 8(a) program because they have outgrown the small business size standards. The small business regulation states that, to ensure participants do not develop an unreasonable reliance on 8(a) awards and to ease their transition into the competitive marketplace after graduating from the 8(a) program, participants must make maximum efforts to obtain business outside the 8(a) program. As a result of withdrawing from the program early, these firms never have to compete for contract awards and thus do not experience some of the intended business development aspects of the 8(a) program. For example:

In its review of an ANC firm’s third year in the 8(a) program, SBA found that the firm had average annual revenue of $78.4 million, which exceeded its small business size standards. Furthermore, SBA pointed out that the firm likely would not meet its targets for non-8(a) revenue once it reached the transition phase and recommended early graduation from the program as a result of these factors. During the firm’s time in the program, 99 percent of its revenue came from 8(a) contracts.

SBA stated in its analysis of another ANC firm’s 8(a) application that rapid growth could be a weakness, as subsidiaries under the firm’s parent entity tended to grow too large to continue in the 8(a) program after just 4 to 5 years. This firm had reported $318 million in revenue from 8(a) contracts in its third year in the program, and SBA recommended that the firm be graduated early from the program as it was no longer a small business. However, the firm remained in the program for one more year.

In another example, an ANC firm voluntarily withdrew from the 8(a) program after almost 4 years. In commenting to SBA about its experience, the firm suggested that SBA should increase size standards for industries because of the size of large government contracts that tribal firms win.

For tribal 8(a) firms that do continue to the transition phase, some have difficulty meeting non-8(a) revenue requirements because they were awarded large 8(a) sole-source contracts in their early years in the program. In one example, a tribal firm reported to SBA that large 8(a) sole-source contracts were taking up a lot of its existing labor pool, not allowing it to seek non-8(a) contract opportunities.
Another firm did not meet its non-8(a) revenue requirements in the transition years, and SBA district officials eventually recommended that this firm voluntarily withdraw, as officials believed the firm had not complied with the spirit of the 8(a) program. When a firm does not meet its non-8(a) revenue requirements, it is generally prohibited from receiving further sole-source contracts. However, we found that in 2009, SBA accepted an offer from the Army for a $45 million sole-source award on behalf of a firm that had not met its non-8(a) revenue requirements. SBA district officials thought they might have accepted the offer on behalf of the firm because of severe financial hardship, but they could not locate the file to determine the exact reason.

It has been more than 20 years since Congress began granting tribal firms special advantages under the 8(a) program. The steady growth in government obligations to these firms, largely through sole-source contracts, draws attention to policies that are designed to promote small businesses and the need to spend taxpayer dollars wisely. SBA has taken some steps, based on our earlier recommendations, to clarify program rules, including the need for monitoring the limitations on subcontracting. However, contracting officers generally are not performing the monitoring—often because of confusion about how to go about doing so and a lack of clarity in existing regulations, particularly with respect to indefinite quantity contracts. Not monitoring the limitations on subcontracting can pose a major risk that an improper amount of work is being done by large business subcontractors under large-dollar-value, sole-source contracts to tribal 8(a) firms.

Tribal firms, because of their special advantages in the 8(a) program, can operate under more complex contracts and business relationships than typical 8(a) firms, making oversight difficult. SBA’s recent revisions to the 8(a) regulations are intended to address several issues we had raised in the past regarding improved oversight of ANC 8(a) contracting that also apply to all tribal 8(a) firms. However, SBA does not have a way to track the information it needs and lacks clear procedures to deter the practices prohibited by the regulations—for example, sister subsidiaries winning follow-on sole-source contracts and joint-venture partners unduly benefiting from their 8(a) partners’ contracts by performing most of the work or improperly subcontracting to an affiliate. The new 8(a) tracking database, which is in the initial stages of development, could, if structured to capture key information, better position SBA to implement these new regulations and to address issues we identified, such as tracking revenues from tribal 8(a) firms’ primary and secondary industries. Further, when agencies do not provide the full acquisition history in offer letters, SBA may not have the necessary information to enforce the new regulations. In addition, while SBA officials recently told us they are in the early stages of drafting a policy that will outline a process for determining unfair competitive advantage, SBA still has not addressed in its regulations the process for implementing the statutory requirement to determine whether substantial unfair competitive advantage exists for one or more tribal 8(a) firms. Finally, some tribal 8(a) firms effectively operate as large firms in a small business program.
The practices we have identified, such as capitalizing on corporate resources to promote business and using sister subsidiaries for subcontracting and past performance, are currently allowed, even under SBA’s revised regulations. However, it is within SBA’s purview as the agency statutorily authorized for the 8(a) program to determine if these practices are congruent with the purpose of the 8(a) program—which is to develop sustainable, small, disadvantaged businesses in the U.S. economy.

To improve oversight of the limitations on subcontracting clause and to clarify who has responsibility for monitoring compliance with the clause, we recommend the Administrator of the Office of Federal Procurement Policy, in consultation with the Administrator of SBA, take the following two actions:

1. Provide specific guidance (including data collection options) to agency officials, including contracting officers, about how to monitor the extent of subcontracting under 8(a) contracts, including for orders under indefinite quantity contracts.

2. Take actions to amend the FAR to (1) direct contracting officers at agencies that have been delegated responsibility for ensuring compliance with the limitations on subcontracting clause to document in the contract file the steps they have taken to ensure compliance and (2) clarify the percentage of work required of an 8(a) participant under indefinite quantity contracts.

To improve oversight of tribal firms’ participation in the 8(a) program, we recommend that the Administrator of SBA take the following five actions:

1. As the new 8(a) tracking database is being developed, take steps to ensure that it has the capability to provide visibility to district offices into all tribal 8(a) firms’ activity by tribal entity, to ensure compliance with the new prohibition on awarding sole-source 8(a) follow-on contracts to sister subsidiaries; track revenue from tribal 8(a) firms’ primary and secondary industry codes, to ensure that subsidiaries under the same parent company are not generating the majority of their revenue from the same primary industry; and track information on 8(a) contracts and task or delivery orders, including orders awarded under basic ordering agreements, to help ensure that district officials have the information necessary to enforce the 8(a) program regulations.

2. In light of the new prohibition on awarding 8(a) sole-source follow-on contracts to sister subsidiaries, reinforce to procuring agencies the requirement to provide the full acquisition history of the procurement in the offer letter, when available, and direct district office business development specialists to focus on this issue when they review offer letters for tribal 8(a) firms.

3. Establish procedures to enforce the new joint venture rules, including how SBA district officials will ascertain that the 8(a) partner performs the required percentage of the joint venture’s work and, for populated joint ventures, that the non-8(a) partner and its affiliates do not receive subcontracts under the 8(a) contract.

4. Examine relationships between subsidiaries under tribal entities to determine whether practices such as subcontracting to a sister subsidiary or using the past performance of a sister subsidiary to show capability to perform on an 8(a) contract are in line with the business development purposes of the 8(a) program and should be allowed under program rules.
If SBA determines that these practices are not in line with the 8(a) program purposes—and to the extent that Congress has not authorized a practice in law—SBA should address them in its regulations.

5. Establish and communicate to Congress the time frame for developing and implementing SBA’s new, planned policy regarding determination of substantial unfair competitive advantage in an industry, and when the policy will be incorporated into the regulations.

We provided a draft of this report to SBA; OFPP; the departments of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, Justice, Labor, and State; and the Social Security Administration. We received written comments from SBA, which are reproduced in appendix II. SBA did not address our recommendations. OFPP provided comments on our recommendations via email. The Social Security Administration provided technical comments, which we incorporated as appropriate. The other agencies responded with no comment.

In written comments, SBA provided background information pertaining to the history of Indian tribes’ and ANCs’ special preferences and their purpose in the 8(a) program. We believe this information is adequately reflected in our report. Although SBA did not specifically comment on our recommendations, it stated that it will work with us to further strengthen its administration of the 8(a) program. SBA also stated that it will make changes as necessary to continue its efforts to eliminate waste, fraud, and abuse and to ensure that the 8(a) program is operating according to its statutory intent, but did not specify what these actions would entail. In addition, SBA stated that it is fully committed to implementing all of the provisions of its March 2011 regulations, but did not specifically address the issues we raised that may impede such implementation or our related recommendations. SBA also acknowledged the challenges in administering the 8(a) program with respect to tribal entities because the purpose of including tribally owned entities in the 8(a) program can be contradictory to the program’s business development purpose. We recognize in the report that 8(a) businesses owned by tribal entities have special preferences in the program. However, we also note that these entity-owned businesses are subject to the business development purpose of the 8(a) program. This requirement led to our recommendations that SBA determine whether certain practices we found that are currently allowed under the 8(a) regulations—such as firms subcontracting to a sister subsidiary—are consistent with the business development purpose of the 8(a) program.

SBA also commented that its foremost concern with our report was our use of a nonprobability sample, with the suggestion that this sampling technique can be biased based on the judgment of the sampler and that we used this technique to generalize results for tribal 8(a) firms. We strongly disagree. Our use of a nonprobability sample was a sound methodological approach to address our reporting objectives. Nonprobability samples are appropriate to provide illustrative examples or to provide information on a specific group within a population. We used this sampling technique to balance a sample that was large enough to provide a sufficiently comprehensive understanding of the issues with one that was small enough to study within our time and resource constraints.
Further, we took a number of steps to ensure the factual accuracy of our findings, including traveling to locations where contract files were located so that we had access to the complete available records and the ability to ask follow-up questions as appropriate to ensure that we did not misinterpret or misrepresent any information in the files. Appendix I of the report sets forth the many steps we took to ensure that our contract file and tribal 8(a) file samples were selected in an unbiased, transparent, and objective manner. In accordance with generally accepted government auditing standards, we appropriately state the results of our work in the report, including the clear statement that our results are not generalizable to the population of tribal 8(a) firms. We did not attempt to generalize our results because that approach was not necessary to meet our objectives.

In an email response, OFPP generally agreed with our recommendations and with our conclusion that steps need to be taken to provide clarity to the acquisition community regarding limitations on subcontracting. OFPP also noted that steps need to be taken to strengthen the application of these requirements to all small business set-aside programs in FAR Part 19. Regarding our recommendation that OFPP provide guidance on how to monitor the extent of subcontracting, OFPP noted that agency officials other than contracting officers—such as agency offices that perform acquisition management reviews and SBA officials—would also be interested parties. We agreed and modified our recommendation to include “agency officials” and not only contracting officers. OFPP stated that it intends to work with the FAR Council and the Chief Acquisition Officers Council to review the roles of various agency officials and evaluate strategies for monitoring and receiving data about the percentage of work performed by a small business prime contractor. It also stated that, with respect to data collection, it anticipates seeking input from the public on strategies to receive and monitor data regarding the percentage of work performed by small business prime contractors. OFPP added that, in taking this action, it intends to minimize the burden on both small businesses and agencies.

OFPP also commented on our recommendation that it take actions to amend the FAR to direct contracting officers to document steps taken to ensure compliance with the limits on subcontracting and to address monitoring requirements for indefinite quantity contracts. OFPP stated that it intends to ask that the FAR Council open a case so that appropriate regulatory refinements may be made to support improvements in the implementation of the limitation on subcontracting. OFPP stated that this action will include reviewing existing clauses that implement the limitation, considering alternatives for collecting information, and documenting steps taken. OFPP also plans to obtain comments from the public, including small businesses, as it develops amendments and evaluates alternatives that can accomplish goals in the least burdensome manner for industry and agencies. Consistent with our recommendation, OFPP plans to clarify in the FAR the percentage of work required by an 8(a) participant under an indefinite quantity contract, but OFPP asked that the recommendation be amended to allow the FAR Council and SBA to work together to determine the best way to clarify this point. We agreed that this would be appropriate and modified the recommendation to reflect this approach.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, Labor, and State; the Administrator of SBA; the Attorney General; the Commissioner of the Social Security Administration; and the Acting Director of the Office of Management and Budget. This report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III. The objectives of this review were to (1) identify trends in government 8(a) contracting with firms owned by Alaska Native Corporations (ANC), Native Hawaiian Organizations (NHO), and Indian tribes; (2) determine the reasons federal agencies awarded sole-source contracts to tribal 8(a) firms and the methods used to make price determinations; (3) assess the procuring agencies' oversight of tribal 8(a) contracts for compliance with subcontracting requirements; and (4) examine the Small Business Administration's (SBA) new 8(a) regulation, effective March 14, 2011, to determine how the changes could affect oversight of tribal firms and the extent to which previously identified problems are addressed. In this report, "tribal entities" refers to ANCs, NHOs, and Indian tribes. We use the term "tribal 8(a) firm" to refer to a firm that is majority-owned by an ANC, NHO, or Indian tribe. During the course of our work, we also discussed with procuring agency officials the potential impact of the recent Federal Acquisition Regulation (FAR) requirement for written justifications for sole-source 8(a) awards over $20 million. This requirement was not applicable to the contracts we reviewed. We evaluated the administration of the tribal 8(a) program; the scope of our work did not include an evaluation of the program's merits. To identify the trends in government tribal 8(a) contracting, we analyzed data from the government's procurement database, the Federal Procurement Data System-Next Generation (FPDS-NG), for fiscal years 2005 through 2010. To assess the reliability of the FPDS-NG, we (1) reviewed related documentation and (2) performed electronic testing on required data fields. We found the FPDS-NG data fields that identify firms owned by ANCs, NHOs, and Indian tribes to be unreliable because these data were not available during the entire time period. Subsequently, we requested that SBA provide Data Universal Numbering System (DUNS) numbers for 8(a) firms owned by ANCs, NHOs, and Indian tribes, in addition to mentor-protégé joint ventures that participated in the 8(a) program, for fiscal years 2005 through 2010. We tested the reliability of these DUNS numbers by using them to search for the tribal 8(a) firms and joint ventures in the Central Contractor Registry and SBA's Dynamic Small Business Search database. We used these systems to verify the data SBA had provided and to identify additional DUNS numbers not included among the data SBA had provided. 
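The verification step just described is essentially a cross-match of identifier sets: DUNS numbers supplied by SBA against DUNS numbers found in the registries. A minimal sketch of that kind of cross-check, assuming simple in-memory records (the field names and values below are hypothetical, not drawn from the actual systems):

```python
# Hypothetical sketch of cross-checking DUNS numbers; field names and
# values are illustrative only, not drawn from the actual systems.

sba_duns = {"123456789", "987654321", "555555555"}  # list provided by SBA

# Simplified stand-ins for Central Contractor Registry / Dynamic Small
# Business Search records.
registry = [
    {"duns": "123456789", "name": "Firm A", "tribal_8a": True},
    {"duns": "987654321", "name": "Firm B", "tribal_8a": True},
    {"duns": "111111111", "name": "Firm C", "tribal_8a": True},  # not on SBA's list
]

registry_duns = {r["duns"] for r in registry if r["tribal_8a"]}

verified = sba_duns & registry_duns    # confirmed by both sources
additional = registry_duns - sba_duns  # tribal 8(a) DUNS SBA did not provide
unmatched = sba_duns - registry_duns   # provided by SBA but not found

print(f"verified: {sorted(verified)}")
print(f"additional: {sorted(additional)}")
print(f"not found in registry: {sorted(unmatched)}")
```

The "additional" set in this sketch corresponds to the extra DUNS numbers identified through the registry searches and folded into the final list.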
We also requested additional DUNS numbers from SBA on joint ventures with tribal 8(a) firms; however, the information provided was on all joint ventures in the 8(a) program. To select contracts for our sample that were awarded to joint ventures with at least one tribal 8(a) partner, we identified those that had obligations in fiscal year 2009 and then used the Central Contractor Registry to determine whether the joint venture was listed as owned by an ANC, Indian tribe, or NHO. Once we substituted the compiled final list of DUNS numbers for the tribal 8(a) data fields, we determined that the FPDS-NG was sufficiently reliable to identify trends in tribal 8(a) contracting for fiscal years 2005 through 2010. We adjusted the obligation data for inflation using a gross domestic product price index with a base year of 2010. To identify the reasons agencies have awarded 8(a) sole-source contracts to firms owned by ANCs, NHOs, and Indian tribes and the methods contracting officials use to determine fair and reasonable prices, we selected and reviewed a stratified nonprobability sample of 87 contracts, 7 of which had been competitively awarded. This nonprobability sample was based upon contracts (1) with fiscal year 2009 obligations over the competitive threshold, especially if those obligations exceeded $100 million (fiscal year 2009 data were the most recent available at the time), and (2) in locations where multiple tribal 8(a) contracts had been awarded. The majority of the contracts we reviewed (75) were with ANC firms; 10 were with Indian tribes, and 2 were with NHOs. The majority of contracts in our sample (62) were awarded at the Department of Defense (DOD). Our findings from the contract reviews are not generalizable to the population of all tribal 8(a) contracts. We originally selected 90 contracts for review, 10 of which were coded as competitively awarded. In reviewing the source documentation, we found that two of the contracts had been incorrectly coded: one was not owned by a tribal entity and the other was not awarded through the 8(a) program. We eliminated these contracts from our sample. We also found that obligations under one indefinite quantity contract were listed as two separate contracts in our initial sample; therefore, this was counted as only one contract. Another three contracts had been incorrectly coded in FPDS-NG as competitively awarded or as sole-source. These three contracts remained in our sample. The specific locations of the contracts in our review were as follows:

DOD:
- Air Force Metrology and Calibration, Heath, OH
- National Guard Bureau, Arlington, VA
- Defense Advanced Research Projects Agency, Arlington, VA
- Defense Supply Center, Philadelphia, PA
- Fleet and Industrial Supply Center, Pearl Harbor, HI
- Fort Sam Houston Army Base, TX
- Fort Wainwright Army Base, AK
- Joint Base Elmendorf-Richardson, AK
- Kirtland Air Force Base, NM
- MacDill Air Force Base, FL
- Marine Corps Systems Command, Quantico, VA
- Naval Facilities Engineering Command, Pearl Harbor, HI
- Redstone Arsenal Army Base, AL
- U.S. Army Corps of Engineers locations in Anchorage, AK; Alexandria, VA; Baltimore, MD; Fort Worth, TX; Philadelphia, PA; and Vicksburg, MS
- U.S. Army Research Development and Engineering Command, Washington Navy Yard, District of Columbia
- Wright-Patterson Air Force Base, OH

Civilian:
- Department of Agriculture's Forest Service, New Mexico
- Department of Energy's National Nuclear Security Administration Service Center, New Mexico
- Department of Health and Human Services' Centers for Disease Control and Prevention, Atlanta, GA; Centers for Medicare and Medicaid Services, Baltimore, MD; and Food and Drug Administration, Rockville, MD
- Department of Homeland Security's Bureau of Customs and Border Protection, Federal Emergency Management Agency, and Office of Procurement Operations, Washington, D.C.
- Department of Justice's Drug Enforcement Administration, Washington, D.C., and Federal Bureau of Investigation, Chantilly, VA
- Department of Labor's Office of Procurement Services, Washington, D.C.
- Department of State's Office of Acquisition Management, Arlington, VA
- Social Security Administration's Office of Acquisition and Grants, Baltimore, MD

For the contracts in our sample, we examined contract file documentation, including acquisition plans, market research reports, and price negotiation memorandums. However, for three of the contracts we reviewed (one each at the Army Corps of Engineers, the Department of Homeland Security, and the State Department), pre-award information was completely missing from the files. For one of these, we were unable to determine whether or not it had been competitively awarded, as coded in FPDS-NG, because of the missing information. We also interviewed contracting officials, small business advocates, and program officials. To determine the extent to which procuring agencies are overseeing tribal 8(a) contracts for compliance with the 8(a) program's subcontracting requirements, we reviewed and analyzed documentation for the contracts in our review, including acquisition plans, price negotiation memorandums, contractor proposals, and SBA offer and acceptance letters, as well as any other information pertaining to subcontractor monitoring. We also interviewed contracting and program officials, as well as agency small business advocates, about the methods they employ to monitor compliance. Additionally, we reviewed agency-specific guidance or operating instructions, various statutory provisions, the Federal Acquisition Regulation, and Title 13 of the Code of Federal Regulations. We also drew from the findings in our 2006 report on 8(a) contracting with ANC firms. To determine the extent to which SBA's new regulations could affect oversight of tribal firms' participation in the 8(a) program and the extent to which previously identified problems have been addressed, we reviewed SBA documents, such as annual reviews and 8(a) program applications, for selected tribal firms. These firms were strategically chosen based upon their parent entity's (i.e., ANC, NHO, or Indian tribe) representation in our overall contract sample; we selected firms whose parent entities had both higher and lower representation. Consequently, we examined the files for 49 ANC, 3 NHO, and 10 Indian tribe firms. The 49 ANC firms fell under 11 parent entities. The results of our review are not generalizable to the population of tribal 8(a) firms. Moreover, we reviewed SBA regulations, operating procedures, and business systems (such as the system used to process 8(a) applications), and interviewed officials at SBA headquarters and the Alaska, Hawaii, Oklahoma, New Mexico, and Washington, D.C., district offices. 
We also met or spoke with ANC, NHO, and Indian tribe representatives in three "town hall" meetings to explain the scope and methodology for this review. We did not assess the extent to which benefits from tribal 8(a) contracts flow to the parent entity. We conducted this performance audit from October 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. SBA said that we had incorrectly reported that employee-based size standards are calculated over a 3-year period. We disagree; the report accurately states that the size standards are based on the number of employees or on the average revenues from the previous 3 years. Therefore, no change is needed. SBA also characterized as "not true" our statements that a firm that does not meet its applicable business activity or mix target is not eligible for 8(a) sole-source contracts. SBA noted that if the competitive business mix target is not met, the prohibition on the award of further 8(a) sole-source contracts can be waived. We agree and, to be consistent with how we addressed this issue later in the report, we added the word "generally" the first time the issue was mentioned to reflect the potential for a waiver. 2. SBA stated that the report relied on anecdotal information and hypothetical scenarios, without analyzing the issue from other perspectives, including those of the tribal and ANC participants, and that this reliance gives a negative view of the participation of tribally owned 8(a) firms. We disagree with this comment. As stated in our report, we did not generalize our findings. Our findings are not anecdotal and do not rely on hypothetical scenarios. Nonetheless, our objectives for this review were to examine SBA's and procuring agencies' administration of various aspects of the 8(a) program, not to capture the views of program participants. We provided one illustrative example to give the reader insight into how 8(a) firms—both tribal and nontribal—are able to form relationships with non-8(a) businesses and non-disadvantaged individuals under current SBA regulations. This illustrative example is factually correct in terms of what the current regulations allow. All of our findings are based on criteria for the program as set forth in statute and regulation. 3. SBA stated that, in discussing joint ventures with other-than-small businesses, we provide an overly simplistic explanation and do not explain the benefits of the mentor-protégé initiative within the 8(a) program. Our purpose was simply to discuss this program in the context of one way that 8(a) businesses can grow and develop. We believe this discussion is adequate for the purposes of this report. 4. Regarding the section of our report that discusses challenges SBA will face in implementing parts of its new regulations, SBA stated that its ability to implement the regulations has been "pre-judged" without affording SBA an opportunity to see the results of the regulatory changes, and that there is little mention of the agency's current initiatives. We disagree. As we stated in our report, SBA has recognized some of the challenges it faces in implementing the new regulations. 
The report also includes information on SBA's initial steps to develop a new data collection system and notes that the agency is in the process of rewriting its standard operating procedures. SBA did not provide us with any evidence that it will address the data limitations we identified in our report, which were the basis for several of our recommendations. Because SBA did not comment on our recommendations, the agency's planned actions remain uncertain. 5. SBA stated that we did not take into account tribal 8(a) entities' special statutory benefits. We disagree. The background section of our report clearly presents information on these preferences. As we state, the existence of these preferences is what allows some firms to operate, in effect, as large businesses in a small business program. 6. SBA stated that we implied that the agency as a whole lacked knowledge in administering entity-owned 8(a) companies, based on our findings at a specific SBA district office. Our point concerned not what was known at the policy level but what was known at the implementation level. Our findings were based on interviews with agency officials and file reviews at several SBA district offices, as discussed in appendix I of the report. As our report states, the data system gaps we identified do, in fact, create knowledge gaps across SBA, which led to our recommendation on this issue. SBA did not address our recommendations intended to improve oversight of tribal firms' participation in the 8(a) program. For example, we recommended several actions SBA could take as it develops its new 8(a) tracking database that may help provide more visibility across district offices. 7. SBA commented that we implied that cost savings could be realized by using procedures other than 8(a). We disagree with this characterization. Our focus in this section of the report was on competitive versus noncompetitive awards, not on 8(a) versus non-8(a). As we have reported in the past, competition is a cornerstone of the acquisition system and a critical tool for achieving the best possible return on investment for taxpayers. Further, as we explain in the background of this report, once a requirement is awarded as an 8(a) contract, it must remain in the 8(a) program unless the procuring agency decides it would like to fulfill the follow-on requirement outside of the program and requests approval from SBA to do so. In addition to the person named above, Michele Mackin, Assistant Director; Tatiana Winger; Virginia Chanley; Celina Davidson; Julia Kennon; Jeff Malcolm; Kenneth Patton; Sylvia Schatz; Erin Stockdale; Roxanna Sun; and Holly Williams made key contributions to this report.
Federal dollars obligated to tribal 8(a) firms grew from $2.1 billion in fiscal year 2005 to $5.5 billion in 2010, a greater percentage increase than non-tribal 8(a) obligations (160 percent versus 45 percent). Obligations to 8(a) firms owned by Alaska Native Corporations (ANC) represented the majority of tribal obligations every year during the period, rising to $4.7 billion in 2010. While tribal 8(a) firms comprised 6.2 percent of total 8(a) firms, their obligations accounted for almost a third of total 8(a) obligations in fiscal year 2010. Over the 6 years, the percentage of competitively awarded obligations to tribal 8(a) firms rose; however, sole-source contracts remained the primary source of growth, representing at least 75 percent of all tribal 8(a) obligations in a given year. Consistent with GAO's 2006 review of ANC 8(a) contracting, contracting officials said that awarding contracts to tribal firms under the 8(a) program allows officials to award sole-source contracts for any value quickly, easily, and legally, and helps agencies meet their small business goals. However, the officials added that the program offices' push for awarding follow-on contracts to the same firm also plays a role. GAO's review of noncompetitive tribal 8(a) contracts shows the methods used to determine price reasonableness in a sole-source environment. In some cases, when agencies moved away from sole-source tribal 8(a) contracts toward competition, agency officials estimated savings as a result. To ensure that 8(a) firms do not pass along the benefits of their contracts to their subcontractors, regulations limit the amount of work that can be performed by the subcontractors. Of the 87 contracts in GAO's review, 71 had subcontractors. GAO found that required monitoring of limitations on subcontracting by procuring agencies was not routinely occurring. Similar to what GAO reported in 2006, some contracting officers do not understand that ensuring compliance is their responsibility under partnership agreements with SBA, and the regulations do not make this clear. Further, agency officials did not know how to monitor subcontracting limitations, particularly for indefinite-quantity contracts, as the data are not readily available. Not monitoring the limits on subcontracting poses a major risk that an improper amount of work is being done by large firms. In March 2011, SBA revised 8(a) regulations to clarify program rules, correct misinterpretations, and address program issues. Although the revisions are a positive step, SBA will have difficulty enforcing new regulations pertaining to tribal 8(a) follow-on contracts and joint ventures given the information currently available. SBA told GAO it is currently in the process of developing the requirements for a new 8(a) tracking database. Further, the new regulations do not address some issues GAO has previously raised, such as ANC 8(a) firms under the same parent corporation generating a majority of revenue in the same line of business. SBA regulations do not allow a tribal organization to have more than one 8(a) subsidiary perform most of its work under the same primary business line. GAO also discusses practices that highlight how some tribal 8(a) firms operate, in effect, as large businesses because of their parent corporation's backing and interconnectedness with sister subsidiaries. SBA has not reviewed these practices to determine whether they are congruent with the business development purpose of the 8(a) program. 
GAO recommends that the Office of Federal Procurement Policy (OFPP) amend acquisition regulations and provide guidance (including data collection) on monitoring the limits on subcontracting. OFPP generally agreed with the recommendations. GAO also recommends that SBA build specific capabilities into its 8(a) database to improve tribal 8(a) tracking and that it examine tribal participation to determine whether certain practices align with the 8(a) program's business development goal. SBA questioned GAO's methodology, which GAO continues to believe is appropriate, but did not address GAO's recommendations.
The Army classifies its vehicles on the basis of such factors as function and physical characteristics. For example, tracked vehicles (Abrams tanks and Bradley Fighting Vehicles) are classified as Army combat vehicles; wheeled vehicles (trucks, automobiles, cycles, and buses) are classified as Army motor vehicles. Within the Army motor vehicle grouping, vehicles are further separated into tactical and non-tactical categories and, within the tactical grouping, into light, medium, and heavy classifications based primarily on vehicle weight. The M939 series trucks are accounted for as part of the Army motor vehicle medium tactical fleet. The Army reviews operational requirements for its vehicle fleet in an effort to improve readiness. From January 1983 through October 1993, the Army upgraded its 5-ton medium tactical fleet by purchasing about 34,900 M939s to replace aging and obsolete trucks. The new truck, designed to operate on and off road, maintained the basic design of its predecessors but came equipped with such first-time standard equipment as air brakes and automatic transmissions. At present, the Army has three variations and nearly 40 different models of the M939 in its inventory. Depending on the model, the truck performs multiple duties that include hauling cargo, collecting refuse, transporting troops, and operating as a tractor or wrecker. The last M939s were fielded in late 1993. Should vehicles or equipment prove dangerous or unsafe to operate, the Army Safety Center, Transportation School and Center, and Tank-Automotive and Armaments Command (TACOM) are responsible for identifying problems and disseminating information. Among other duties, the commands collect and evaluate information from accident investigations and field reports. They also issue Army-wide safety alerts, precautionary messages, and other information warning of identified dangers with equipment and vehicles. Our two analyses and the analysis conducted by the Army Safety Center all involved comparisons of different types of accident data collected over different time frames. Nevertheless, all of the analyses showed that the M939 had a higher accident rate than each type of comparison vehicle. In our first analysis, we reviewed data from January 1987 through June 1998 and compared selected M939 accident statistics with those of the rest of the Army motor vehicle fleet. We reviewed the accident categories in terms of "fatal accidents," defined as any accident event in which at least one occupant of an Army motor vehicle died; "occupant deaths," defined as the total number of Army motor vehicle occupants killed; "rollovers," defined as any vehicle that did not remain upright as the result of an accident; and "rollover deaths," defined as those occurring to occupants of Army motor vehicles that rolled over as a result of an accident. In analyzing this selected accident information compiled by the Army Safety Center, we found the frequency of M939 accidents to be high in each category. For the 11-1/2 year period reviewed, the M939 series truck inventory averaged 26,991, or about 9 percent of the average annual Army motor vehicle inventory of about 314,000 vehicles, yet the trucks accounted for about 15 percent of the total Army motor vehicle accidents. Appendix I shows the actual figures by year, 1987-1998. 
Our comparison of M939 accident statistics with accident statistics for the rest of the Army motor vehicle fleet showed that the M939 accounted for about 34 percent of all Army motor vehicle fatal accident events and 34 percent of all Army motor vehicle occupant deaths. Comparative rollover statistics revealed much the same. M939 rollovers accounted for 17 percent of the total Army motor vehicle rollovers and 44 percent of the total Army motor vehicle rollover fatalities. Figure 2 shows these accident statistics. In our second analysis, we used Department of Transportation data published for 1987-1996 and compared the accident rate for M939s with the rate for single-unit medium and heavy commercial trucks (which are physically similar to M939s). According to an agency official, the Department of Transportation defines "fatal crashes" as any event in which someone is killed in a crash—vehicle occupant or otherwise—and "truck occupant fatalities" as a fatality of an occupant of a single-unit truck. These comparisons revealed that the accident rates for the M939 were substantially higher than those found for the commercial trucks. However, Army officials pointed out that commercial trucks are driven almost exclusively on paved roads, while the M939 is driven on both paved and unpaved roads. We found that over the 10-year period, 1987-1996, the rates of fatal crashes per million miles driven for M939s averaged about seven times higher than those for commercial trucks. The M939 accident rate ranged from a high of 12 to a low of 3 times higher than the commercial truck rate. In 1988, the M939's accident rate was 0.23 and the commercial truck rate was 0.02—about 12 times higher; in 1992, the M939 accident rate was 0.056 and the commercial truck rate was 0.018—about 3 times higher. Figure 3 shows these statistics. We also found that, over this same 10-year period, the M939 occupant fatality rate averaged about 30 times higher than the rate for commercial trucks. The M939 occupant fatality rate ranged from a high of 59 to a low of 13 times higher than the commercial truck rate. In 1995, the M939 occupant fatality rate was 0.165 and the commercial truck rate was 0.0028—about 59 times higher; in 1989, the M939 rate was 0.046 and the commercial truck rate was 0.0035—about 13 times higher. Figure 4 shows these statistics. The Army Safety Center's analysis reviewed accident data from October 1990 through June 1998. In this analysis, the accident rate of the M939 was compared with accident rates for another series of trucks—the M34/M35 series 2-1/2 ton trucks. Army officials advised us that this truck was most comparable to the M939. The analysis reviewed accidents categorized as Class A mishaps. Army Regulation 385-40 defines a "Class A" mishap as an accident in which total property damage is $1 million or more; an Army aircraft or missile is destroyed, missing, or abandoned; or an injury or occupational illness results in a fatality or permanent total disability. Because an M939 costs significantly less than $1 million, almost all Class A mishaps involving an M939 are so classified because they result in a death or permanent total disability. The Army Safety Center's analysis found accident rates for M939s to be higher than those of the comparison vehicles. The analysis showed M939 Class A mishap frequency rates per million miles driven to be 3 to 21 times higher than those of the similar M34/M35 series 2-1/2 ton trucks. 
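The multiples quoted in these comparisons are simple quotients of accident rates per million miles driven. A minimal sketch of the arithmetic, using the figures reported above (the helper function is ours, for illustration only):

```python
def rate_multiple(m939_rate: float, comparison_rate: float) -> float:
    """How many times higher the M939 rate is than the comparison rate.

    Both rates are accident counts per million miles driven.
    """
    return m939_rate / comparison_rate

# Fatal-crash rates per million miles driven, as reported in the text
print(f"1988: {rate_multiple(0.23, 0.02):.1f}x")    # about 12 times higher
print(f"1992: {rate_multiple(0.056, 0.018):.1f}x")  # about 3 times higher

# Occupant-fatality rates per million miles driven
print(f"1995: {rate_multiple(0.165, 0.0028):.1f}x")  # about 59 times higher
print(f"1989: {rate_multiple(0.046, 0.0035):.1f}x")  # about 13 times higher
```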
For example, the 1995 Class A mishap rate for the M939 was 0.21 per million miles driven and the rate for the 2-1/2 ton M34/35s was 0.01—about a 21-fold difference. Figure 5 shows this comparison. The Army has initiated a program to improve the M939's safety performance and, according to TACOM estimates, plans to spend around $234 million for various modifications. Most of the modifications are the direct result of corrective actions suggested in studies. These studies focused on identifying root causes of M939 accidents based on information contained in accident investigation reports. On the basis of the studies' findings, the Army concluded that the overall truck design was sound but that some modifications were necessary to improve the truck's safety performance. Planned modifications include $120 million for upgrading the trucks' tires, altering brake proportioning specifications, and adding anti-lock brake kits. Other modifications include $114 million to install cabs equipped with rollover crush protection systems and improve accelerator linkage. The modifications, for the most part, will be completed by 2005, with the M939s remaining in service during the process. To identify possible mechanical problems or performance limitations contributing to M939 accidents, the Army conducted two studies and a computer simulated modeling analysis. Although M939 trucks have been in service since 1983, Army Safety Center personnel stated that no aberrant accident statistics appeared before early 1992. However, during 1990-91, with the increased operating tempo associated with Desert Shield/Desert Storm, there was an increase in fatal accidents and deaths attributable to M939s. In August 1992, TACOM issued Safety of Use Message 92-20 discussing M939 performance limitations. This message warned of the truck's sensitive braking system—specifically that, when the truck is lightly loaded and on wet pavement, aggressive braking could cause rear wheel lockup, engine stall-out, power steering inoperability, and uncontrolled skidding. The Army began taking a closer look at the M939's accident history after circulating Safety of Use Message 92-20. Between 1993 and 1995, TACOM, the Army Safety Center, and the Army Transportation School and Center initiated a review of M939 accident reports and began putting together evidence that validated the need for the studies. Also, in an effort to reduce the number and severity of M939 accidents, the Army issued Ground Precautionary Message 96-04 in December 1995, limiting M939s to maximum speeds of 40 miles per hour on highway and secondary roads and 35 miles per hour on cross-country roads. Between September 1995 and June 1997, TACOM conducted two studies and a computer simulation analysis. The studies, among other things, recreated and analyzed repetitive events cited in many accident investigation reports and discussed in Safety of Use Message 92-20. The two studies and modeling analysis focused on tire and air brake performance under various conditions. On the basis of the project's findings, TACOM concluded that the overall truck design was sound and that the M939 did not differ significantly from its commercial counterparts produced during the same time period. However, the studies found that improvements to some vehicle subsystems would enhance the truck's safety performance. The tire study, completed in October 1996, together with other information relating to M939 usage, confirmed that the M939s were being used on-road more than originally planned. 
The original intent was for M939s to be driven on-road 20 percent and off-road 80 percent of the time. In some Army units, especially reserve units, this no longer held true. Some units were using the M939s on-road as much as 80 to 90 percent of the time. The truck's original tire, designed for maximum efficiency during off-road usage, performed less efficiently on-road, especially during inclement weather. The increased on-road usage increased the probability of the M939's being involved in an accident. On the basis of this scenario, TACOM tested several different tire designs, seeking to improve on-road traction under all environmental conditions while retaining required off-road capabilities. The study recommended that all M939s be equipped with radial tires. The brake study, completed in June 1997, concluded that the air brake system may lock up more quickly than drivers expect, especially when the vehicle is lightly loaded. In tests, the Army found that aggressively applied pressure to the brake pedal caused the sequence of events found in many accident reports: wheel lockup, engine stall-out, loss of power steering, and uncontrolled skidding, often culminating in rollover. The probability of spin-out and rollover increased on wet or inclined surfaces. To lessen the likelihood of wheel lockup and the resulting chain of events, the study suggested (1) modification of all brake proportioning systems and (2) installation of anti-lock braking kits. The modeling analysis used computer technology to recreate the truck's probable behavioral characteristics in a simulated environment and also to validate conditions being tested in the studies. According to TACOM officials, the modeling results correlated with actual testing results compiled during the tire and brake studies. Besides the recommended improvements from the studies, the Army identified others it considered necessary. The Army decided to replace M939 cabs, when they wore out, with ones outfitted with a rollover crush protection system and also to modify the accelerator pedal resistance on the A2 variant of the M939. Both TACOM and Army Safety Center personnel stated that installation of the reinforced cab rollover crush protection system, while not an industry standard or required by law, would better protect M939 occupants in the event of a rollover. According to TACOM officials, the scheduled M939 modifications will cost around $234 million. The Army estimates that tire upgrades, brake proportioning, and anti-lock brake system improvements will cost about $120 million, or about $3,800 per truck; adding cab rollover protection and modifying the A2's accelerator linkage will cost another $114 million, or an additional $3,600 per truck. With respect to the current schedule for completing M939 modifications, brake proportioning and accelerator linkage equipment modifications will be completed by the end of fiscal year 1999; all remaining modifications, except for cab replacement, are scheduled for completion around 2005. Because the truck cabs will be replaced as they wear out, a precise schedule for completing this modification cannot be estimated at this time. Even though some of the M939s have been in service for 15 years, the decision to spend $234 million on modifications and equipment upgrades is based on the need to improve the vehicles' safety because the Army expects these trucks to be in service for at least 30 years. According to TACOM, the June 1998 M939 inventory was around 31,800 trucks. 
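As a quick check of the per-truck arithmetic, each modification budget divided by the roughly 31,800-truck fleet reproduces the reported figures (variable names are ours, for illustration only):

```python
fleet_size = 31_800  # June 1998 M939 inventory, per TACOM

tire_and_brake_cost = 120_000_000   # tires, brake proportioning, anti-lock kits
cab_and_linkage_cost = 114_000_000  # rollover crush protection cabs, accelerator linkage

# Exact quotients are about $3,774 and $3,585, which the report rounds
# to roughly $3,800 and $3,600 per truck.
print(f"${tire_and_brake_cost / fleet_size:,.0f} per truck")
print(f"${cab_and_linkage_cost / fleet_size:,.0f} per truck")
```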
All M939s will be equipped with radial tires, reproportioned brakes, anti-lock brake kits, and reinforced replacement cabs. However, the accelerator linkage improvements are needed only on the 16,800 A2-variant trucks. Table 1 shows the schedule for the planned modifications. Although most scheduled modifications will not be completed until fiscal year 2005 or later, TACOM and Army Safety Center personnel noted that accident rates have declined significantly since the December 1995 precautionary message instituted reduced speed limits. Figure 6 shows the drop in the number of mishaps since 1995. Army officials believe the modifications being made to the M939s will improve their safety performance and reduce severe accidents, rollovers, and fatalities. In written comments on a draft of this report (see app. III), DOD stated that it concurred with this report and noted that the report accurately describes problems the Army found to be causing M939 accidents. To analyze the accident history of the M939 series 5-ton tactical vehicle, we obtained specific information from the Army Safety Center, Fort Rucker, Alabama; TACOM, Warren, Michigan; the Department of Transportation, Federal Highway Administration, Washington, D.C.; and the Department of the Army, Washington, D.C. To identify any accident anomalies associated with the M939s, we conducted two analyses and reviewed another conducted by the Army Safety Center. Our first analysis compared selected M939 accident statistics with similar information for the overall Army motor vehicle fleet (of which M939s are a subset). Our second analysis compared M939 accident statistics per million miles driven with Department of Transportation accident statistics for comparable commercial trucks. The Army Safety Center study we reviewed compared various M939 accident frequency rates per million miles driven with rates for comparable military tactical trucks. The number of years used in each comparison varied on the basis of the data available. Army motor vehicle fleet to M939 comparisons did not include events prior to 1987 because some accident statistics were not readily available. Our comparison of rates of M939 fatal accident events and vehicle occupant fatalities with rates for corresponding commercial sector trucks was limited to 1987-1996 due to the unavailability of accident data for commercial sector vehicles after 1996. Lastly, the Army Safety Center study comparing M939 Class A accident rates with rates for other similar Army tactical vehicles included only events occurring between October 1990 and June 1998. The extent to which other factors, such as human error, driver training, and off-road versus on-road usage, may have contributed to disparate accident rates was beyond the scope of this review. To assess Army initiatives directed at identifying M939 performance, mechanical, or systemic problems and limitations, as well as recommended corrective actions, we obtained or reviewed relevant Army studies. We also interviewed officials at the Army Safety Center and TACOM about these studies but did not assess or validate the findings, estimated costs, or recommendations resulting from these studies. 
Although we worked with personnel from the Army Safety Center, TACOM, the Department of Transportation, and the Department of the Army during data gathering and reviewed those results for reasonableness, accuracy, and completeness, we did not validate the accuracy of accident statistics contained in various databases or other published information. However, these data are used to support the management information needs of both internal and external customers and are periodically reviewed internally by each organization for accuracy, completeness, and validity. We conducted our review from July 1998 through February 1999 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable William Cohen, Secretary of Defense; the Honorable Louis Caldera, Secretary of the Army; and interested congressional committees. Copies will also be made available to other interested parties upon request. Please contact me at (202) 512-5140 should you or your staff have any questions concerning this report. Major contributors to this report were Carol R. Schuster; Reginald L. Furr, Jr.; Kevin C. Handley; and Gerald L. Winterlin. [Appendix table fragment: occupant fatalities and million miles driven by year; the miles-driven column read 49,537; 51,239; 52,969; 53,443; 53,787; 53,691; 56,781; 61,284; 62,705; and 63,967.]
Pursuant to a congressional request, GAO reviewed the Army's M939 series 5-ton tactical cargo truck, focusing on: (1) the extent to which accidents involving the truck have occurred; and (2) the results of Army studies on the truck's design and its plans to address any identified deficiencies. GAO noted that: (1) GAO's analyses and an Army analysis indicate a higher rate of accidents involving the M939 series 5-ton tactical cargo truck than other comparison vehicles; (2) GAO's analysis of January 1987 through June 1998 accident data showed that, while M939s made up an average of about 9 percent of the Army motor vehicle fleet during that time, about 34 percent of the fleet's accidents resulting in fatalities of vehicle occupants involved these trucks; (3) 44 percent of accidents that involved a rollover and resulted in fatalities of vehicle occupants involved the M939; (4) GAO's comparison of Department of Transportation accident statistics and M939 accident statistics showed that over a 10-year period, the fatality rate for occupants of the M939 averaged about 30 times higher than the fatality rate for occupants of comparably sized commercial trucks; (5) an Army Safety Center analysis found that the chance of a fatality in an M939 was 3 to 21 times higher than in other similar military trucks in the Army motor vehicle fleet--the M34/M35 series 2-1/2 ton trucks; (6) the Army plans to spend an estimated $234 million on various modifications to improve the M939's safety and operational performance; (7) based on the results of studies into the root causes of M939 accidents, the Army concluded that the overall truck design was sound, but some modifications were necessary; (8) the Army plans to use the $234 million to add anti-lock brake kits, alter brake proportioning specifications, upgrade the truck's tires, install cab rollover crush protection, and modify the accelerator linkage; (9) most modifications will be completed by 2005; and (10) the M939s will remain in service as these modifications are made.
In January 2001, we reported on Department of Defense management challenges and noted that the Department has had serious weaknesses in its management of logistics functions and, in particular, inventory management. We have identified inventory management as a high-risk area since 1990. In 1996 and again in 1998, we reported that despite billions of dollars invested in inventory, the Navy's logistics system often could not provide spare parts when and where needed. For example, in fiscal year 1995 about 12 percent of the aircraft were not mission capable due to supply problems, and mechanics frequently had to remove parts from one aircraft to make repairs on another. (See app. I for examples from our prior reports on management weaknesses related to the Navy.) Table 1 shows that during the last 11 years, the Navy has never achieved its overall goal to have 73 percent of its aircraft capable of performing at least one of its assigned missions. Further, the rate at which the aircraft could not perform their missions due to supply shortages has increased from 11.9 percent in fiscal year 1995 to 12.9 percent in fiscal year 2000. Navy officials have testified that the increased pace of operations and the resulting accelerated aging of its systems and infrastructure are outpacing its efforts to improve spare parts supplies and are continuing to affect readiness. Accordingly, the Navy has efforts under way to better define its aviation spare parts requirements. The Navy told the Congress in fiscal year 2000 that budget increases for that year had begun to address some of its most pressing needs but that it would take time for the positive effects to be reflected throughout the force. Between fiscal years 1999 and 2000, the Navy increased expenditures for aircraft parts by $631 million. In 1999 the Defense Department announced plans to provide $500 million to the Defense Logistics Agency to purchase spare parts for all the services over fiscal years 2001-2004. The Navy's and the Marine Corps' share of that amount is about $190.7 million, of which about $62.1 million had been obligated by February 2001. Further, the Navy and the other services received additional funds in fiscal year 1999 that, unlike the funds cited above, were included in operation and maintenance accounts, including $116 million to eliminate backlogs of aviation spare parts. In a report issued earlier this year, we indicated that current financial information does not show the extent to which these funds were used for spare parts. The Department plans to annually develop detailed financial management information on spare parts funding usage but did not plan to provide it to the Congress. When we recommended that the Secretary of Defense routinely provide this information to the Congress as an integral part of the Department's annual budget justification, the Department agreed to do so. The aviation systems that we reviewed are vital to the Navy's achievement of its missions but have had significant parts shortages. The EA-6B, shown in figure 1, is an all-weather electronic attack aircraft that operates from aircraft carriers and land bases and is the only Department of Defense aircraft that can electronically jam enemy antiaircraft radar. The aircraft were first delivered in 1971 and have had several major upgrades. They are heavily deployed for operations and were severely stressed during the 1999 operation in Kosovo. 
The F-14 Tomcat, shown in figure 2, is an all-weather fighter that operates from aircraft carriers and is designed to attack and destroy enemy aircraft by day or night; it is also in high demand for deployed operations. The F-14A was first delivered in 1972. The F-14B and F-14D models consisted of new production aircraft and remanufactured F-14A aircraft and were first delivered in 1987 and 1990, respectively. The F-14 has a critical role in providing air superiority and an ability to launch precision-guided munitions. The Navy uses both consumable and reparable spare parts for its weapon systems. Consumable parts, such as nuts, bearings, and fuses, are discarded when they fail because they cannot be repaired cost-effectively. The Defense Logistics Agency manages most consumable parts, and the Defense Supply Center in Richmond, Virginia, is the lead center for managing aviation consumable parts. Reparable parts are expensive items, such as hydraulic pumps, navigational computers, and landing gear, that can be cost-effectively fixed and used again. The Naval Supply Systems Command, through its Naval Inventory Control Point, manages and provides central control over reparable parts. The shortages of spare parts for the two aircraft systems reviewed not only have affected readiness but also have created inefficiencies in maintenance processes and procedures and have adversely affected the retention of military personnel. Specifically, the rates at which the EA-6B and F-14 were not mission capable due to spare parts shortages ranged from 4.3 percent to 16.8 percent. Also, the maintenance practice used to mitigate part shortages masks the true impact of shortages and results in increased work for maintenance personnel, causing morale problems and dissatisfaction with military life. The Navy EA-6B and F-14 varied in their achievement of mission-capable goals during fiscal years 1993-2000, in part due to spare parts shortages. The EA-6B met its overall goal of 73 percent only three times during the 8-year period (see table 2). During the same period, the F-14A met its 65-percent goal only twice, in the most recent 2 years; the F-14B met its 65-percent goal in 6 of the 8 years; and the F-14D met its 71-percent goal only once, in fiscal year 2000 (see tables 3-5). Although some models of the F-14 aircraft have improved their mission-capable rates in recent years, the Secretary of the Navy reported that the readiness of deployed forces was being maintained to some degree at the expense of nondeployed forces, which have often deferred ordering spare parts and delayed or reduced the scope of maintenance. The Navy reporting system also identifies whether aircraft are not mission capable due to supply shortages or maintenance requirements. However, the Navy has not established specific goals for the categories of not mission capable due to supply or maintenance. As shown in table 6, spare parts shortages have affected the capability of EA-6B and F-14 aircraft to perform their missions. Sometimes unit personnel must wait a long time to receive the parts they have ordered. For example, as of June 2000, the average wait time to fill 229 requisitions for mission-related parts for the F-14 was 185 days; for the EA-6B, the average wait time to fill 20 requisitions for parts was 77 days. To compensate for a lack of spare parts, maintenance personnel sometimes remove usable parts from one aircraft to replace broken parts on others, a practice called cannibalization (see table 7). 
According to Navy testimony and reports, the Navy is "cannibalizing" nonmission-capable aircraft to keep other aircraft flying and to maintain readiness. While the mission-capable rates of the aircraft that are kept in the air appear to be higher, the practice masks the impact of the shortages, causes morale problems among maintenance personnel because of the extra work involved, wastes consumable parts, and risks damage to the aircraft and its components. Also, a part removed from one aircraft will not last as long as a part from the supply system and will require maintenance sooner. We recently testified that the shortage of parts is the main reason for cannibalizations and that local commanders are willing to do whatever is necessary to keep readiness ratings high, even if this requires cannibalizing aircraft constantly and having personnel routinely work overtime. Cannibalization requires at least twice the maintenance time of normal repairs because it involves removing and installing components on two aircraft instead of one (see fig. 3). As shown in table 7, the aggregate cannibalization rate (the number of times maintenance personnel used the practice per 100 flying hours) for Navy aircraft did not change significantly during fiscal years 1993-2000. The aggregate rates are misleading, however, because cannibalizations are frequently not reported. In 1998 a Navy study group noted that as much as 50 percent of all cannibalizations were not reported. Nevertheless, the reported cannibalization rates for the EA-6B and F-14 were much higher than the aggregate, and the rate for the EA-6B rose significantly in fiscal year 1999, reportedly because of its extensive use during the Kosovo operation. Reported rates aside, Navy personnel perceive that cannibalization has increased. Of 3,711 personnel surveyed by the Naval Inspector General, 2,932, or 79 percent, reported that cannibalizations had increased and that they did not have enough parts to maintain the mission-capable rates needed to meet training and operational requirements. The practice of cannibalizing aircraft burdens maintenance personnel and seriously affects their morale. Cannibalization causes double work: maintenance personnel must remove a part from a donor aircraft and install it on another aircraft, and later install a replacement part on the donor aircraft. According to maintenance and supply personnel at the units we visited, supply shortages were a significant problem that caused inefficient cannibalizations and expedited repairs. During fiscal year 2000, the Navy reported spending about 441,000 maintenance hours on cannibalizations. The EA-6B and F-14 accounted for about 34,000 and 27,000 of these cannibalization hours, respectively. The effects of inefficient logistics system practices on morale and retention have been noted in several personnel surveys. According to the Naval Inspector General survey, 74 percent of the 3,711 personnel surveyed said that the conditions they work under negatively affected their decision to stay in the Navy. Similarly, as we testified in March 2000, a Department of Defense 1999 survey of active duty members showed that retention problems were concentrated in career fields such as equipment repair. Also, in August 1999, we reported the results of our survey of about 1,000 of the Department's active duty personnel in job occupations that the Department of Defense believed were experiencing retention problems. 
We reported that the majority of factors (62 percent) associated with dissatisfaction and reasons to leave the military were work circumstances, including the lack of parts and equipment to perform daily job requirements. Both officers and enlisted personnel ranked the availability of needed equipment, parts, and materials among the top 2 of 44 quality-of-life factors that caused their dissatisfaction. Finally, according to a fall 1998 survey of 114 Navy servicemembers and civilian personnel in the aviation, surface, and submarine communities, over 70 percent of the air community rated spares and repair parts as the area most in need of improvement. In our recent testimony, we discussed examples of how cannibalizations may become the source of waste or frustration. In one case, a major component needed for an EA-6B aircraft to perform its mission was removed from or reinstalled on four different aircraft, for a total of 16 times in 6 days. The primary reasons for shortages of the 50 spare parts for the EA-6B and F-14 aircraft that we reviewed were (1) greater demands than anticipated for the parts, (2) delays in awarding contracts for the purchase and repair of parts, (3) contractors' delivery delays, (4) delays in repairs at military facilities, and (5) other problems. An internal Department of Defense study found similar reasons for parts shortages. The 50 parts we selected for review were recorded as having the largest number of unfilled requisitions that had affected the capability of the EA-6B and F-14 aircraft to perform their missions. (See app. II for a description of the parts discussed in this report.) Because of the interrelated nature of the supply system, some parts were unavailable for more than one reason. Table 8 is a summary of the reasons for the shortages of the 25 problem parts for each aircraft that we identified primarily through interviews with item management officials and documentation on each part. (See app. III for a more detailed list of the reasons for the parts shortages discussed in this report.) Twenty-one (42 percent) of the 50 sampled parts experienced greater demand than anticipated, which contributed to shortages of the parts. Accurately forecasting the demand for parts is difficult because of the large number of variables that affect demand, including flying-hour frequency and operating environment. The Navy forecasts the demand for parts using an average of historical demands. Although this average is periodically adjusted, it is subject to some degree of error. Forecasting the demand for a new part is often more challenging because the part has not been in the Navy supply system long enough to develop a pattern of demands. Also, according to a Navy supply official, forecasting for parts with infrequent demands is particularly difficult. Examples of parts for which there was unanticipated demand follow: Although the average demand for the EA-6B landing gear (see fig. 4) was about one per quarter (3 months), there were eight demands for the gear during the two most recent quarters. The demand exceeded the stock on hand and contributed to a shortage of the part. The main reason for the increased demand was a new requirement for inspection of the gear. The purpose of these inspections was to reduce part failures and improve reliability during operations. The findings of these new inspections resulted in the replacement of more parts. As of June 2000, one unfilled requisition was affecting the capability of an EA-6B to perform its mission. 
A new version of an F-14D television sensor (see fig. 5) that was expected to operate for 32,000 hours lasted far fewer hours than anticipated. The increased failure rate and the associated increase in demand were partially attributed to improper installation of the sensor by Navy maintenance personnel. As of July 2000, the Navy was unable to fill 13 requisitions that affected the mission capability of the F-14. An unexpected surge in demand for the F-14 telescoping shaft (see fig. 6), which affects wing control during maneuvers, occurred around March 2000 because of a problem in the shaft that was found during a major engineering change to strengthen the wing. The shaft had severe corrosion from normal use and had to be replaced. The Navy repair facility increased its scheduled number of repairs, but as of July 2000, 11 requisitions were unfilled that affected the capability of the F-14. Sixteen (32 percent) of the parts we reviewed were in short supply due to delays in awarding contracts to repair or produce them and were affecting the capabilities of the EA-6B and F-14 to perform their missions. For example: Because of obsolescence, the Navy had difficulty locating a company that would produce the aging air navigational computer (see fig. 7). The Navy had planned to replace this computer with a newer model as part of an aircraft improvement program that was canceled in late 1994 due to funding constraints. The Navy considered several alternatives and decided that the most economical solution was to contract for a modification of an even older version of the computer to substitute for the current version. The first deliveries of the modified computers are expected in July 2001. As of May 2000, the Navy could not fill two requisitions that affected the capability of EA-6B aircraft to perform their missions. Similarly, the Navy had problems finding a company that would manufacture F-14 transmitters (see fig. 8), creating shortages of the part. These transmitters are designed to transfer signals regarding the aircraft's movements and position to the appropriate instruments. The Navy had not procured the transmitter for at least 10 years, and potential contractors were reluctant to manufacture the aging part. The only willing manufacturer required a minimum purchase of 100 transmitters. Although the contractor had an expected delivery date of July 1999, its transmitter had problems passing a quality test. As of July 2000, the Navy had five unfilled requisitions that affected the capability of F-14 aircraft to perform their missions. The Navy could not find a company to repair an F-14 filter after the previous contractor ceased repair operations in 1995-96. The Navy had not required repairs for several years because it had enough parts on hand to fill the few requisitions it received each year. The previous contractor eventually agreed to reestablish repair capability. However, as of July 2000, four requisitions remained unfilled, affecting the capability of F-14 aircraft to perform their missions. Contractor delivery delays contributed to shortages of 15 (30 percent) of the parts we reviewed. Delays in contractor repairs and production of new parts were due to problems with parts passing quality tests, equipment failures, and company buy-outs. The repairs of two types of EA-6B antennas were delayed because the contractor completely halted repair work from December 1999 to about March 2000 due to a company merger. 
Later, one of these types of antennas had problems passing final quality tests, which caused a shortage of the antenna. As of June 2000, there was one unfilled requisition for each of the two antenna types that was affecting the capability of EA-6B aircraft. Contractor repairs of an F-14 actuator (see fig. 9), which helps to adjust the aircraft's wings for takeoff and landing, were delayed for several reasons. The contractor's test equipment indicated that repaired actuators were faulty when they had actually been properly repaired. Also, the contractor maintained that repairs were delayed because a subcontractor had not made timely repairs to a subcomponent. However, a Navy supply manager told us that during a visit to the contractor's facility he identified a large number of subcomponents that should have been sent to the subcontractor. This situation contributed to the contractor's delays in repairing the actuators. As of July 2000, there were nine unfilled requisitions that affected the mission capability of F-14 aircraft. One company's buy-out of another company and a later plant move resulted in delayed repairs and deliveries of an F-14D wave-guide assembly, creating a shortage of the part. Although the buy-out and the plant move occurred over 2 years ago, deliveries were still slow and below the expected quantity. The buy-out also delayed the procurement of more F-14 wave-guide assemblies. As of July 2000, there were eight unfilled requisitions for the assembly affecting the capability of the F-14 aircraft. Delays in repairing 12 (24 percent) of the parts at military facilities caused shortages of those parts. The delays resulted from complications in establishing and sustaining repair capabilities due to maintenance equipment and other problems. Problems with the equipment used to test an F-14 axial pump, which provides power to the aircraft's flight control system, led to delays. The repair facility did not resolve these test equipment problems for 5 months, until October 2000. As of July 2000, 21 unfilled requisitions were affecting the mission capability of F-14 aircraft. A military repair facility had problems meeting the repair schedule for an F-14 aircraft wing fairing (see fig. 10) because its manufacture of the parts needed to repair the fairing was delayed. Although the facility was scheduled to repair 10 parts in the third quarter of fiscal year 2000, it repaired only 5. Repair problems continued in the fourth quarter of fiscal year 2000: the facility was scheduled to repair 13 parts but repaired only 4. As of the end of July 2000, there were nine unfilled requisitions affecting the mission capability of F-14 aircraft. A shortage of EA-6B special indicators developed because the designated repair facility did not repair the items as required. After the closure of one Navy repair facility, repair responsibility for the indicators was transferred to a different facility. However, this facility never developed the capability, that is, the parts, equipment, expertise, and staff needed to repair the indicators. In the third quarter of fiscal year 1999, the facility was scheduled to repair six indicators but repaired none. After discovering the problem, the item manager had the items repaired by a contractor. As of May 2000, there was one unfilled requisition that was reportedly affecting the capability of an EA-6B aircraft in performing its missions. A simplified illustration of how such recurring schedule shortfalls accumulate into a backlog follows.
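The following minimal sketch uses the wing fairing figures cited above (10 scheduled and 5 completed, then 13 scheduled and 4 completed); the data structure and any other details are illustrative assumptions rather than actual Navy repair records.

```python
# Minimal sketch showing how repeated repair-schedule shortfalls accumulate
# into a backlog. The quarterly figures mirror the wing fairing example in
# the text; everything else is an illustrative assumption.

quarters = [
    ("FY2000 Q3", 10, 5),
    ("FY2000 Q4", 13, 4),
]

backlog = 0
for name, scheduled, completed in quarters:
    backlog += scheduled - completed
    completion_rate = completed / scheduled
    print(f"{name}: completed {completed} of {scheduled} "
          f"({completion_rate:.0%}); cumulative shortfall {backlog}")

# FY2000 Q3: completed 5 of 10 (50%); cumulative shortfall 5
# FY2000 Q4: completed 4 of 13 (31%); cumulative shortfall 14
# Every part left unrepaired is one fewer ready-for-issue spare, so the
# shortfall surfaces as unfilled requisitions against mission-capable aircraft.
```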
Other reasons for shortages of parts included decisions not to purchase needed parts for economic reasons and nonrecurring problems such as a pricing error. These varied reasons contributed to spare parts shortages for seven (14 percent) of the parts we reviewed. Sometimes, item managers made economic decisions not to purchase additional items because the parts were to be replaced. For example, the item manager purchased minimal quantities of an EA-6B multiport panel because the Navy had decided to redesign the panel as part of an overall engineering change to the aircraft. A shortage of the panels developed while the redesign was taking place. As of June 2000, two unfilled requisitions for the multiport panel were affecting the capability of EA-6B aircraft to perform their missions. Also, an error in the contract pricing structure for repairs of an F-14 power module (see fig. 11) resulted in spare parts shortages. During an evaluation of the requirements for these parts, the item manager identified an error that would have resulted in customers not being charged the full cost of repairs. The award of the contract and the associated repairs were delayed while the contract pricing problem was corrected. As of July 2000, four unfilled requisitions were keeping F-14 aircraft from performing any of their missions. An internal study conducted by the Department of Defense found similar reasons for Navy reparable parts shortages. The study examined parts causing aircraft to be not mission capable and found that there were two reasons for the shortages. The first was an insufficient inventory of certain reparable parts. The second was that although there were enough parts in the system, other constraints prevented the repair facility from repairing the items in a timely manner. The study states this may happen for several reasons: the parts may not have been returned from the units to the repair facility; the repair facility may have lacked capacity in certain key areas, such as repair equipment; the consumable parts required to fix the reparable item may not have been available; or item managers may not have requested that the repair facility repair the part because of a lack of funding. The study recommended that the Navy budget include an additional $355 million for fiscal years 2004 through 2007 to help address the inventory shortages. According to a Navy official, the Navy agreed and included an additional $357 million in its budget. The Navy and the Defense Logistics Agency have initiatives under way or planned that may improve the availability of parts, including the use of best commercial inventory practices. The initiatives are intended to improve the efficiency and effectiveness of the logistics system and generally address the specific reasons for the shortages identified by our review. Under a March 2000 Department of Defense directive, the Navy developed a High Yield Logistics Transformation Plan, which links its logistics initiatives to the objectives in the Department's Logistics Strategic Plan. The directive requires that these plans include a management framework that conforms to Government Performance and Results Act requirements. We have, in the past, made various recommendations to address this issue. We will be reviewing the transformation plan's initiatives, once they are more fully developed, to evaluate their likely effectiveness and to assess whether additional initiatives are needed.
We describe some of the Navy and Defense Logistics Agency initiatives in the sections that follow. The Navy's High Yield Logistics Transformation Plan and its schedule of best commercial inventory practices identify many initiatives that generally address the reasons for spare parts shortages that we identified, such as contract and repair problems. Some of these initiatives have been implemented, but many are only now being implemented, and it is too soon to tell whether they will effectively reduce aviation spare parts shortages. The Navy's performance-based logistics program is designed to improve support to customers and reduce total costs. The program is to use a variety of long-term, performance-based contracts that will hold contractors accountable for specific performance requirements, including delivery times, at a cost that is at or below current system costs. Although the scope of each contract is somewhat different, the purpose of each is to solve problems with the unavailability, low reliability, and obsolescence of parts. Many of these contracts will provide an incentive to a contractor or require reliability improvements to ensure that the best product is delivered on time. These contracts also may require a contractor to anticipate and solve problems due to the obsolescence of parts. The Navy will prioritize systems to be included under this program based on high repair costs, low reliability, and low availability of the systems. The Navy plans to assess the success of this program by measuring the time it takes a contractor to fill a requisition and the percentage of the time a contractor can satisfy a requirement within contractually specified times. The sketch below illustrates how these two measures could be computed from a set of requisition records.
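In the following minimal sketch, the requisition records, field layout, and 30-day contractual standard are hypothetical assumptions made for illustration, not actual Navy contract terms or data.

```python
# Minimal sketch of the two contract performance measures described above:
# average requisition fill time and the percentage of requisitions filled
# within a contractually specified time. All figures are illustrative.

# Hypothetical requisition records: (requisition id, days from order to fill).
requisitions = [
    ("REQ-001", 12),
    ("REQ-002", 45),
    ("REQ-003", 28),
    ("REQ-004", 30),
    ("REQ-005", 60),
]
CONTRACT_STANDARD_DAYS = 30  # assumed contractual fill-time standard

fill_times = [days for _, days in requisitions]
average_fill_time = sum(fill_times) / len(fill_times)
on_time = sum(1 for days in fill_times if days <= CONTRACT_STANDARD_DAYS)
on_time_rate = on_time / len(fill_times)

print(f"Average fill time: {average_fill_time:.1f} days")                  # 35.0 days
print(f"Filled within {CONTRACT_STANDARD_DAYS} days: {on_time_rate:.0%}")  # 60%
```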
Under another initiative, the Navy manages the parts but uses long-term contracts, with performance periods of up to 5 years, to minimize the time it takes to request and receive parts from contractors. These contracts allow contractors to procure material ahead of time to reduce their production times and to reduce the Navy's administrative times. For fiscal year 2000, the Navy reported that these long-term contracts had accounted for over 30 percent of its funds for contracts and had procurement times of only 35 days, compared to 89 days for other types of contracts. The Navy plans to monitor this initiative and expects long-term contracts to reduce the Navy's inventory and increase readiness. The Navy has, among others, the following initiatives designed to improve its aviation repair facilities' operations, including a reduction in repair times: The Navy established business process teams for material management, planning and scheduling, and the repair of system components at aviation repair facilities. The three teams have developed processes designed to improve operations, and these processes are to be implemented at the three Navy repair facilities by June 2006. As part of this effort, Navy depots are working with the Defense Logistics Agency to requisition material for repairs in advance of actual demand, based on a credible forecast. The Navy expects this effort to reduce repair times and costs, improve readiness, reduce inventories, and save $39 million annually by fiscal year 2005. The Navy plans to use an automated system to provide planning, scheduling, capacity, and other information to reduce repair cycle times and improve the rate at which customer delivery dates are met. The Navy's goal is to fully implement the system at its three repair facilities by September 2002. The Navy also plans to reduce the time it takes to transport inoperable items from units to repair facilities, especially for parts that are in short supply. As of June 2000, implementation of this initiative had been delayed due to problems in implementing a reporting system that accounts for material in transit between the sending and receiving points. The Navy has several broad-based initiatives that may reduce spare parts shortages. One of these is the aviation supply chain/material management initiative. The Navy expects this initiative to improve demand forecasting for parts and repair planning. Other features of this initiative include better tracking of inoperable items and the potential for automatic induction of parts into the repair cycle. The Navy plans to test the new process on the E-2C aircraft starting in December 2001. If the pilot proves successful, the Navy plans to expand the initiative to all Navy weapon systems. Estimated costs are $80 million per year from fiscal year 2002 until the break-even point during fiscal year 2006. Performance measures and baseline data will be developed after July 2001. Other planned logistics system process improvements include the following: The Aviation Maintenance-Supply Readiness Study Group, chartered in March 1998, is to identify specific actions to improve readiness and develop systemic improvements to increase mission capability rates. The group is addressing problems such as the cannibalization of aircraft parts, the time that repair facilities take to repair and return parts, and reliability problems. The Department of Defense is planning to use the time that customers wait for parts as a key measure for evaluating the overall effectiveness of the logistics system. As such, the Navy intends to track the time it takes from the ordering of a part to its delivery, develop a strategy for improving the timeliness of the process at different shore facilities and deployment sites, and then optimize the Navy's investment in spare parts. The Navy plans to track items by serial number so that it can better measure reliability, predict parts requirements, identify maintenance deficiencies, develop solutions, improve readiness, decrease repair time, and manage warranties. This initiative is expected to cost $8.5 million but achieve a return on investment of $30 million per year plus labor savings of about 20,000 hours per year. The Defense Logistics Agency's major initiative to reduce aircraft spare parts shortages is its Aviation Investment Strategy. This initiative, which started in fiscal year 2000, focuses on replenishing consumable aviation repair parts identified as having availability problems that affect readiness. To carry out this initiative within the Navy, the Defense Logistics Agency plans to invest about $190.7 million in Navy and Marine Corps aviation spare parts over fiscal years 2000-2003. As of February 2001, $62.1 million had been obligated for this purpose, but only $9.9 million worth of parts had been delivered. The purpose of the Defense Logistics Agency's Aging Aircraft Program is to consistently meet the customers' needs regarding the availability of spare parts for Army, Navy, and Air Force aviation weapon systems.
The program's focus will be to (1) provide inventory control point personnel with complete, timely, and accurate information on current and projected parts requirements; (2) reduce customers' wait time for parts for which sources or production capability no longer exist; and (3) create an efficient and effective management structure and processes for achieving program goals. The Defense Logistics Agency plans to spend about $20 million on this program during fiscal years 2001-2007. To provide a mechanism to improve the potential for successfully implementing commercial inventory initiatives and to measure results, we recommended in October 1999 that the Secretary of Defense direct the Secretary of the Navy to improve the management framework for implementing best practice initiatives based on principles embodied in the Government Performance and Results Act. The Department of Defense concurred and stated that the Navy would provide an update in the first quarter of 2000. The Navy's updated schedule links its commercial inventory practice initiatives to the broad objectives of the Department of Defense's Logistics Strategic Plan. We also recommended in June 2000 that the Department develop an overarching plan that integrates the individual military service and defense agency logistics reengineering plans and that includes an investment strategy for funding the initiatives and details on how the Department plans to achieve its final logistics system goals. The Department agreed with the recommendation and stated it plans to integrate the various logistics strategies and service initiatives. Further, as required by the House Committee on Armed Services report on the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001, we are assessing the methodology the Department of Defense used in formulating its August 1999 long-range Logistics Strategic Plan. Because of our prior recommendations on improving the Navy's management framework for implementing commercial inventory practices, the Department of Defense's efforts to develop an overarching integration plan, and our ongoing review of the Department's strategic plan, we are not making new recommendations at this time. In written comments on a draft of this report, the Principal Assistant, Deputy Under Secretary of Defense for Logistics and Material Readiness, indicated that the Department of Defense generally concurred with the report. The Department's comments are reprinted in their entirety in appendix IV. To determine the impact of shortages of spare parts for two selected aircraft, we reviewed April 1999 through December 2000 Department of Defense Quarterly Readiness Reports to Congress; Navy mission-capable goals and rates and the rates of not mission capable due to supply and maintenance problems for fiscal years 1993-2000; and demand and unfilled requisition data for major aircraft systems for March and June 2000 from the Naval Inventory Control Point-Philadelphia, Operations Directorate. We also discussed supply and maintenance issues with weapon system program managers at the Naval Air Systems Command. We did not independently verify the readiness and other data. We also visited maintenance and supply officials at the Naval Air Station, Oceana, Virginia Beach, Virginia, and the Second Marine Air Wing, Cherry Point, North Carolina.
To determine the reasons for shortages of mission-related spare parts for the EA-6B and the F-14, we reviewed requisition data at the Naval Inventory Control Point-Philadelphia and judgmentally selected 50 parts that affected the capability of the two aircraft to perform their missions. These parts had the largest number of unfilled requisitions at the time of our visit: the end of May and June 2000 for the EA-6B and the end of July 2000 for the F-14. We interviewed the managers responsible for each selected part. To obtain customer views of critical parts problems, we also attended F-14 and EA-6B supply conferences. To help validate the reasons inventory managers provided for the parts shortages, we reviewed inventory management documents such as the March 2000 stratification reports, the 5-year demand history, and other relevant supply management documentation, including repair facility production schedules and completion data for the fourth quarter of fiscal year 1998 through the fourth quarter of fiscal year 2000 from the Naval Inventory Control Point-Philadelphia Industrial Support Division. To identify initiatives that the Navy and the Defense Logistics Agency have under way or planned to address spare parts shortages for all aircraft, we interviewed Navy and Marine Corps headquarters officials and examined relevant documentation. Specifically, we reviewed the Navy's Logistics Transformation Plan for fiscal year 2000 and the Navy and Marine Corps reports on best commercial inventory practices. We also discussed various initiatives with Naval Supply Systems Command and Naval Inventory Control Point officials. We reviewed our prior reports and relevant Navy and Department of Defense reports and studies, including those published by the Naval Inspector General, the Navy's Aviation Maintenance-Supply Readiness Study Group, and the Office of the Secretary of Defense for Program Analysis and Evaluation. During our audit, we interviewed supply and maintenance officials and obtained information from the following locations:

Deputy Under Secretary of Defense for Readiness (Personnel and Readiness), Arlington, Virginia.
Joint Chiefs of Staff, Logistics Directorate, Arlington, Virginia.
Joint Forces Command, Logistics Directorate, Norfolk, Virginia.
Deputy Chief of Naval Operations, Fleet Readiness and Logistics, Arlington, Virginia.
Commander in Chief, U.S. Atlantic Fleet, Logistics Directorate, Norfolk, Virginia.
Commander, Naval Air Forces Atlantic Fleet, Logistics Directorate, Norfolk, Virginia.
Naval Supply Systems Command, Mechanicsburg, Pennsylvania.
Naval Inventory Control Point, Philadelphia, Pennsylvania.
Naval Air Systems Command, Patuxent River, Maryland.
Naval Air Station, Oceana, Virginia Beach, Virginia.
Marine Corps Headquarters, Aviation Supply Logistics, Arlington, Virginia.
Marine Corps Forces, Atlantic, Norfolk, Virginia.
Second Marine Air Wing and Squadrons, Cherry Point, North Carolina.
Defense Logistics Agency Headquarters, Alexandria, Virginia, and Defense Supply Center Richmond, Richmond, Virginia.

We performed our review between February 2000 and June 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Navy; the Commandant of the Marine Corps; the Director, Defense Logistics Agency; and the Director, Office of Management and Budget. We will also make copies available to others upon request.
Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Lawson Gist, Jr.; Dan Omahen; Tracy Whitaker; and Nancy Ragsdale. Our high-risk series of reports over the past several years has noted that Department of Defense inventory and financial management weaknesses have contributed to the unavailability of parts when needed. In January 2001, we reported on Department of Defense management challenges and noted that the Department has serious weaknesses in its management of logistics functions and, in particular, inventory management. Although not specifically identified with the systems we reviewed, these management weaknesses directly or indirectly contribute to the shortages of spare parts the Navy is facing. For example: We reported in January 2001 that nearly half of the Department's inventory exceeded war reserve or current operating requirements and that the Department had inventory on order that would not have been ordered based on current requirements. Thus, the Department was purchasing items that exceeded requirements with funds that could be used to purchase needed parts. We have issued several reports on the Navy's problems in maintaining adequate oversight of material being shipped to and from military activities. For example, in March 1999, we reported that during fiscal years 1996-98, the Navy reported losing accountability of in-transit inventory, including some classified and sensitive items, worth over $3 billion. In August 2000, we reported on Navy actions that we believed would improve in-transit inventory management once fully implemented. Some of the corrective actions had an estimated completion date of December 2000, while a long-term solution would be to reengineer the entire in-transit process. In November 2000, we reported that the Navy's processes for setting the prices that customers pay for aviation spare parts had led to the Navy's seeking supplemental appropriations and delaying the procurement of needed parts, which could affect readiness. In addition, the Department of Defense's long-standing financial management problems may also contribute to the Navy's spare parts shortages. As we recently reported, existing weaknesses in inventory accountability information can affect supply responsiveness. Lacking reliable information, the Department of Defense has little assurance that all items purchased are received and properly recorded. The weaknesses increase the risk that responsible item managers may request funds to obtain additional unnecessary items that may be on hand but not reported.

Major Management Challenges and Program Risks: Departments of Defense, State, and Veterans Affairs (GAO-01-492T, Mar. 7, 2001).
Tactical Aircraft: Modernization Plans Will Not Reduce Average Age of Aircraft (GAO-01-163, Feb. 9, 2001).
Major Management Challenges and Program Risks: A Governmentwide Perspective (GAO-01-241, Jan. 2001).
High-Risk Series: An Update (GAO-01-263, Jan. 2001).
Defense Acquisitions: Prices of Navy Aviation Spare Parts Have Increased (GAO-01-23, Nov. 6, 2000).
Defense Acquisitions: Price Trends for Defense Logistics Agency's Weapon System Parts (GAO-01-22, Nov. 3, 2000).
Defense Inventory: Status of Navy Initiatives to Improve Its In-Transit Inventory Process (GAO/OSI/NSIAD-00-243R, Aug. 24, 2000).
Contingency Operations: Providing Critical Capabilities Poses Challenges (GAO/NSIAD-00-164, July 6, 2000).
Defense Inventory: Process for Canceling Inventory Orders Needs Improvement (GAO/NSIAD-00-160, June 30, 2000).
Defense Logistics: Actions Needed to Enhance Success of Reengineering Initiatives (GAO/NSIAD-00-89, June 23, 2000).
Defense Inventory: Plan to Improve Management of Shipped Inventory Should Be Strengthened (GAO/NSIAD-00-39, Feb. 22, 2000).
Department of the Navy: Breakdown of In-Transit Inventory Process Leaves It Vulnerable to Fraud (GAO/OSI/NSIAD-00-61, Feb. 2, 2000).
Defense Inventory: Opportunities Exist to Expand the Use of Defense Logistics Agency Best Practices (GAO/NSIAD-00-30, Jan. 26, 2000).
Defense Inventory: Management of Repair Parts Common to More Than One Military Service Can Be Improved (GAO/NSIAD-00-21, Oct. 20, 1999).
Military Operations: Some Funds for Fiscal Year 1999 Contingency Operations Will Be Available for Future Needs (GAO/NSIAD-99-244BR, Sept. 21, 1999).
Department of Defense: Status of Financial Management Weaknesses and Actions Needed to Correct Continuing Challenges (GAO/T-AIMD/NSIAD-99-171, May 4, 1999).
Defense Reform Initiative: Organization, Status, and Challenges (GAO/NSIAD-99-87, Apr. 21, 1999).
Defense Inventory: Status of Inventory and Purchases and Their Relationship to Current Needs (GAO/NSIAD-99-60, Apr. 16, 1999).
Defense Inventory: DOD Could Improve Total Asset Visibility Initiative With Results Act Framework (GAO/NSIAD-99-40, Apr. 12, 1999).
Defense Inventory: Continuing Challenges in Managing Inventories and Avoiding Adverse Operational Effects (GAO/T-NSIAD-99-83, Feb. 25, 1999).
High-Risk Series: An Update (GAO/HR-99-1, Jan. 1999).
Major Management Challenges and Program Risks: Department of Defense (GAO/OCG-99-4, Jan. 1999).
Navy Inventory Management: Improvements Needed to Prevent Excess Purchases (GAO/NSIAD-98-86, Apr. 30, 1998).
Defense Depot Maintenance: DOD Shifting More Workload for New Weapon Systems to the Private Sector (GAO/NSIAD-98-8, Mar. 31, 1998).
Defense Inventory: Management of Surplus Usable Aircraft Parts Can Be Improved (GAO/NSIAD-98-7, Oct. 2, 1997).
The military's ability to carry out its mission depends on having adequate supplies of spare parts on hand for equipment maintenance. Shortages are a key indicator of whether the billions of dollars spent on these parts each year are used effectively, efficiently, and economically. The Navy has acknowledged in recent years that its aviation systems have significant readiness and supply problems. Since 1990, GAO has included Defense Department (DOD) inventory management, including spare parts, on its list of government functions at high risk for waste, fraud, abuse, and mismanagement. This report reviews (1) the impact of shortages of spare parts for two selected aircraft, the EA-6B Prowler and the F-14 Tomcat; (2) the reasons for the shortages; and (3) the initiatives that the Navy and the Defense Logistics Agency have in place or planned to address overall spare parts shortages. GAO found that spare parts shortages for the two aircraft have harmed the Navy's readiness and the economy and efficiency of maintenance activities. Spare parts shortages have also contributed to problems in retaining military personnel. Navy managers attributed the spare parts shortages to demand for parts that was greater than the Navy originally anticipated and to problems in identifying sources and awarding contracts to private companies to produce or repair the parts. The Navy and the Defense Logistics Agency have many logistics initiatives planned or under way to improve the logistics system and alleviate shortages of spare parts. The initiatives include best commercial inventory practices and generally address the causes of the spare parts shortages that GAO identified.
Innovation is a dynamic process through which problems and challenges are defined, new and creative ideas are developed, and new solutions are selected and implemented. It is also a complex process that involves taking iterative steps to solve problems. Innovation requires an environment that encourages participants to challenge traditional practices without fear of repercussions. Ideally, innovation participants are empowered to be creative and make mistakes, and appropriate risk-taking is not only tolerated but encouraged. Some federal leaders are trying various innovation tools, including on-line idea submission programs, competitions, and prizes, as ways of unleashing employee creativity. For example, in a memorandum issued in March 2010, the administration urged federal agencies to use challenges and prizes to crowdsource innovative approaches to government initiatives and programs. At relatively low cost, crowdsourcing initiatives can garner valuable and creative solutions that may not have come through traditional means. As another example, the Presidential Innovation Fellows program pairs top innovators from the private sector, nonprofit organizations, and academia with top innovators in government to collaborate during focused six- to thirteen-month periods. The program aims to develop solutions that can save lives, save taxpayer money, and fuel job creation. For example, the goal of one of the program's projects is to identify information critical to saving lives and mitigating damage in a disaster. Even with the efforts of some federal leaders to encourage innovation, federal government-wide scores tracking how agencies foster and reward employee innovation dropped in 2013 for the second year in a row. OPM's 2013 Federal Employee Viewpoint Survey, released in November 2013, found that only 35 percent of federal workers believe that creativity and innovation are rewarded, with positive responses in this area showing a steady decline of six percentage points over the past three years. Research suggests that half of all innovations are not initiated by organizational leaders. Instead, research shows that it is important to have processes for gathering stakeholders' and front-line workers' views to identify areas for possible improvement. As an innovation tool, labs are based on the idea that the competencies needed for systematic innovation, such as intelligent risk-taking to develop new services, products, and processes, are not the same as those required for daily operations. Innovation labs seek to provide approaches, skills, models, and tools beyond those that most employees are trained in and use to do their work. In addition, public sector innovation labs can be viewed as attempts to create an organizational response to a range of challenges to innovation, as innovation efforts face unique obstacles in the public sector. For example, funding for new public ventures is limited, and the risks of innovation are high in government. A defining characteristic of the public sector is that it is subject to broad scrutiny, so that when an innovation fails or is less than a complete success, there is the prospect of political consequences. With constrained budgets expected to continue into the foreseeable future, innovation in our public services is a necessity. In the last decade, many public sector organizations around the globe have set up facilities with the explicit purpose of supporting innovation efforts.
For example, Denmark's MindLab, started in 2002, is a cross-governmental innovation unit that is part of the country's Ministry of Business and Growth, the Ministry of Education, the Ministry of Employment, and Odense Municipality, and that collaborates with the Ministry for Economic Affairs and the Interior. The group covers broad policy areas, such as entrepreneurship, digital self-service, education, and employment. OPM's lab was modeled, in part, on Denmark's MindLab. OPM officials, consistent with other innovation lab representatives we interviewed, maintained that, unlike a typical conference room, innovation labs can be easily reconfigured for large groups and smaller breakout sessions. They allow users to write on walls and preserve visual artifacts more easily than typical cubicles and traditional office space. This can be done with very low-tech tools such as markers and a whiteboard. Figure 1 shows a view of OPM's lab. Organizations with different missions are pursuing a lab-based strategy to foster innovation. For example, organizations, including OPM, use their labs as a space where participants can conceptualize and prototype new products or processes outside their normal environment. Many also use their labs as a teaching space where participants can exchange ideas and information through classes, workshops, presentations, or other events. Figure 2 shows how the innovation labs we surveyed from the public, private, and nonprofit sectors share common design elements, and how these different organizations generally use their labs for multiple and similar purposes. Based on OPM documents, the innovation lab's start-up costs totaled approximately $1.1 million, including facility upgrades and construction, equipment, and training and other personnel costs. (See table 1 for a breakdown of costs.) In building the lab, OPM worked with the General Services Administration (GSA) and contracted with both design and architectural firms to renovate a former storage room in the sub-basement of its headquarters building. The 3,000-square-foot renovated space presents an open layout with a meeting area for up to two dozen people and is surrounded by breakout areas and team rooms. The physical renovation of the facility was completed in March 2012, after the installation of final technology equipment, asbestos abatement, and enhancements to ventilation and life-safety systems. According to OPM, much of the funding for the improvements and construction would have been required to make the space useful for any purpose. OPM officials said that in fiscal year 2013, the lab's total operating budget, including all contracting costs, was $476,000, which supported a build-up to 5.5 full-time equivalent (FTE) employees over the last seven months of fiscal year 2013. Officials expect this amount, adjusted for a full fiscal year, to remain stable in coming fiscal years. Operational responsibility for the innovation lab has been assigned to OPM's Employee Services Division, and the lab is managed by the agency's Deputy Associate Director of Strategic Workforce Planning. According to OPM officials, since February 2013, the lab has grown from 1 FTE to roughly 6 FTEs. Specifically, as of the end of summer 2013, day-to-day operations in the lab are carried out by 4 FTE staff members, 1 FTE intern, 1 part-time intern, and 1 part-time staff member whose time is divided between the innovation lab and OPM's Resource Management Office.
A core group of staff from OPM's Employee Services Division have been trained in human-centered design and also contribute up to 15 percent of their time in the lab. According to OPM officials, the lab reached its maximum fiscal year 2013 funding level of 5.5 FTEs in July 2013. A brief description of each position is provided in table 2. OPM has taken a phased approach to developing the lab programming (activities taking place in the lab) and the policies governing lab use (such as priority-setting policies for lab projects). According to OPM documents, each phase has incorporated an element of experimentation, review, and a shift in strategy based on lessons learned. Phase I lasted from March through June 2012. During these first 4 months after the lab was built, OPM made the space available to the OPM workforce for meetings and events. OPM leadership also used this time to investigate an appropriate problem-solving approach to pair with the lab that would be consistent with approaches used by other labs; it determined that a human-centered design approach and curriculum would complement OPM employees' technical expertise and analytic competencies. OPM also began to recruit interested staff from the Employee Services Division to be trained in human-centered design fundamentals; these staff members would then support project sessions in the lab as part of their collateral duties. Phase II lasted from July 2012 through March 2013 and consisted mostly of facilitated sessions with OPM project teams. During these sessions, Employee Services staff worked with project teams to generate ideas for addressing long-standing problems through exercises such as project or strategic planning, brainstorming sessions, or stakeholder mapping aimed at discussing and testing potential solutions. Topics discussed during these sessions involved a variety of initiatives directed at improving OPM processes and addressing government-wide human resources challenges. OPM officials noted that innovation lab projects have included, among others, designing an implementation plan with other federal agencies to collect valid, accurate, and timely data on the federal cyber security workforce; updating the government-wide strategy for veterans recruitment; attracting and retaining individuals with talent in science, technology, engineering, and mathematics disciplines; and addressing challenges unique to individual agencies. Phase III lasted from April through November 2013. In Phase III, OPM continued to provide facilitated design sessions and, in some cases, follow-on coaching to program offices from within OPM and to OPM-led projects. For example, in one facilitated design session, lab staff worked with the Food and Drug Administration's (FDA) Battery Working Group to engage more effectively with the group's external stakeholders. According to an OPM case study about the lab, eight FDA employees attended the Fundamentals of Human-Centered Design course. Following the course, lab staff provided planning support for a public workshop with over 200 participants from stakeholder groups, including medical device and battery manufacturers, other regulatory groups, and hospital staff; lab staff also attended the Battery-Powered Medical Device workshop to support the FDA team in their use of design methods. According to an FDA participant, the collaboration with the lab helped the group engage in stakeholder dialogue that would not otherwise have been possible.
OPM lab staff also began to offer classes in the lab designed to develop the mission-critical skills federal workers need to become better problem solvers. The OPM lab offers courses on topics such as Human-Centered Design Fundamentals, Prototyping in the Public Sector, and Communicating Visually. These classes are available to OPM staff and other federal workers. Staff also made the lab available for federal communities of practice to convene. According to lab staff, the lab is becoming a hub for a number of standing meetings of a growing community of federal innovators and innovation communities of practice. Table 3 presents a summary of OPM's human-centered design lab activities since its inception. Lab staff report that in the future they intend to expand from their session-based work and targeted design support projects, such as the consultative sessions that took place in the lab during Phase III. While some of these more episodic projects may continue to occur, the lab's focus will be on creating and establishing large-scale projects, typically involving stakeholders either wholly within OPM or from across different agencies working on crosscutting issues. As discussed later in the report, projects appropriate for this design method would have diverse users and be more complex; OPM calls them immersion projects. These would be the most structured activities undertaken in the lab, characteristically longer-term activities that could take up to six months of intense collaboration with project owners and a diverse group of stakeholders. We identified a common set of challenges that can undermine organizations' efforts to use innovation labs and a set of prevalent practices that organizations employ to address these challenges and support their labs' success and sustainability. OPM has incorporated some of these practices, such as pairing a distinct space with a structured approach to problem solving, but has not implemented others, such as developing meaningful performance measures. Although OPM has begun reaching out to other federal innovators, the agency has not fully leveraged the experience of other agencies employing similar approaches. As a prevalent practice for encouraging and supporting greater innovation, both the literature and representatives from the organizations we reviewed stressed the benefits of pairing a dedicated physical space with a structured framework rooted in design-thinking principles. Many of the lab representatives said building or establishing a distinct space carries an important symbolic value, as it signals an organization-level commitment to a culture that supports innovation. However, simply building a lab is not sufficient to change an organization's culture; it is also necessary to introduce a new framework for problem solving. Although these organizations use different terminology to describe their selected frameworks, such as agile development and human-centered design, the general principles are similar. They include placing users at the center of the desired solution; research on successful innovation practices shows the importance of engaging customers and understanding their needs. Further, they include extensive collaboration with relevant stakeholders, experimentation, prototyping, and iterative steps to find a solution.
A primary objective of this approach is to allow for failure at the beginning of the design cycle, so that organizations can manage and learn from early mistakes rather than try to recover from an expensive, comprehensive failure upon implementation. For example, Census and HUD have similar problem-solving frameworks for lab use. As figure 3 shows, the Census Center for Applied Technology and the HUD Innovation Lab rely on a five-step framework to guide innovation in their labs. As another example, although CFPB does not have a dedicated physical space, it used a similar framework to develop its on-line mortgage disclosure form. According to CFPB's Creative Director for Technology and Innovation, designers interacted with end users, including mortgage applicants; prototyped different forms; and made refinements based on continual feedback before launching the new form. On its website, CFPB describes its design process in detail, including prototyping and feedback sessions with consumers, lenders, and mortgage brokers. OPM's lab provides a menu of design services to meet the specific needs of various projects. The lab's larger-scale immersion project work will involve taking a complex problem through OPM's problem-solving framework, which encompasses problem framing, learning about users, analysis, concept development, testing, and rapid iteration. This problem-solving framework is similar to those employed by other innovation labs. As discussed earlier in this report, similar to other organizations with labs, OPM is using its lab for a variety of purposes, including as a learning space for classes on human-centered design principles and techniques and as a meeting space for interagency task teams and communities of practice. As originally envisioned in its strategic and performance plans, the lab was designed to host a mix of activities rooted in the human-centered design approach, including longer-term design challenges. Moreover, OPM lab staff asserted that to gain an organization's confidence and to instill a culture of innovation, a successful innovation lab needs an array of sufficiently compelling projects that demonstrate how the lab approach can lead to performance improvements. Based on our interviews with public and private sector organizations with similar innovation facilities, larger-scale problem-solving projects were common activities in their innovation labs' service portfolios. As an example, Denmark's MindLab has contributed to tackling several pressing social issues, including simplifying the process for managing claims related to industrial accidents and shortening the time before injured workers return to work. Opportunities to showcase a new approach to problem solving reduce the likelihood that the lab will come to be seen as no different from a traditional meeting space or a classroom associated with training facilities. Consistent with what staff from other labs told us, OPM officials said they needed the past two years to first introduce agency staff to human-centered design concepts and applications before they could initiate an immersion project. Lab staff said this phased approach was necessary for several reasons. Targeted design support sessions allowed lab staff to expose lab users to design methods and provided opportunities for collaboration. These sessions also allowed lab staff to quickly show value for program offices in response to a specific need.
For example, OPM lab staff members were able to help FDA staff plan for and engage with over 200 different stakeholders at a conference. They also said targeted design support is a critical way for emerging design practitioners to develop and hone their own skills before applying them to a longer-term, higher-stakes project engagement. Lab staff said that, overall, these sessions benefited both the users of the lab, who developed new skills to take to their home offices, such as problem framing and engaging with stakeholders, and lab employees, who continue to grow and refine their human-centered design skills. In a December 2013 document, OPM staff stated they intend to create and establish these longer-term immersion projects and evaluate their impact during the next phase of the lab's development. Measuring the long-term outcomes of innovation labs is a prevalent practice for building acceptance and demonstrating the value of the labs. Consistent with our literature review, several representatives we interviewed from other innovation labs concurred with the director of innovation at Denmark's innovation lab, MindLab, who said that innovation labs need to know how much they are spending and what outcomes they are achieving. According to the director, the labs must also be able to show where change happens as a result of their work. In addition, lab staff must be prepared to present a narrative of their work. He acknowledged that innovation labs are risky because they look different and have a different focus than other government entities. The director said that, as a result, innovation lab officials need to show where the funds are going, along with the benefits and results of those investments. Representatives from newer labs (those operating for less than three years) stated that they primarily rely on output measures to gauge their initial efforts, such as the number of users, the ways in which the lab is being used, the classes or events held in the lab, and anecdotal evidence. Developing outcome measures is more challenging for several reasons. Appropriate outcome measures are often not obvious at the outset of a project. Moreover, agencies may not have appropriate measures or baseline data when they start using an innovation lab as a problem-solving tool, and the role of the lab in driving a successful innovation may not always be clear. Given these challenges to accurately measuring innovation and the value of an innovation lab, managers of labs that have been operating for a longer period told us they focus on developing meaningful milestones and measures applicable to different phases of the innovation lifecycle, such as problem generation, idea generation, and skills development. For example, the UNICEF Innovation Unit, which has been helping member-country offices set up innovation labs since 2006, and several European initiatives developed a set of benchmarks intended to help them measure the value of public sector labs and identify ways in which a lab's performance can be improved. The benchmarks UNICEF developed span six categories, such as problem definition and idea generation, internal and external collaboration, and secondary effects. Each category includes a list of questions intended to assess strengths and weaknesses. For example, they want to know whether labs are helping employees define problems and generate ideas, strengthen internal collaborations, and build external partnerships.
They also measure the extent to which work done in the labs results in new team or staff capacity, excitement and goodwill toward the organization, and an increase in leverage and influence in their field. OPM is undertaking a similar effort to establish benchmarks that will help lab staff gauge the extent to which lab users are learning and applying many of these same skills, but the lab is not mature enough to have results. OPM documents state that the goal of OPM's innovation lab is to provide federal workers with 21st century skills in design-led innovation, and that the intended purpose of the lab is to provide a physical space for project-based problem solving. The documents also note that the value of the lab can be measured, in part, by how well it helps develop the mission-critical competencies to improve the federal workforce's ability to solve problems and deliver results. In its strategy document, OPM laid out the following high-level goals for the innovation lab: Employees assigned to the innovation lab should go back to their home organization with an understanding of, and an appreciation for, the power of innovative approaches to problem solving. Employees should be equipped to implement similar methodologies in their home organizations on future projects. As the innovation lab matures, and as more and more projects are completed, the notion of using innovation to tackle complex problems will gain traction across the organization. Eventually, leaders and employees across OPM will vie to get their issues sent to the innovation lab for resolution. This, in turn, will contribute to a decrease in organizational silos and a concurrent increase in cross-organizational teams addressing the organization's issues. In the same document, OPM officials also described an evaluation strategy resembling an agile approach. Specifically, OPM described these goals as moving targets that would be achieved through an evolving and self-correcting process. Lab staff immediately started to track lab activities and outputs, such as the number of participating people and agencies, and how participants used the lab (for example, for consultative sessions, follow-on coaching, or training classes, or as a meeting space). Five months after the lab opened, they also started to survey users who participated in day-long facilitated sessions. For example, a one-page evaluation asked respondents to rate the appropriateness of the environment and the quality of the facilitators. The surveys also asked whether users would recommend the lab to colleagues and whether human-centered design problem-solving methods can be used effectively government-wide. The responses were generally positive; about 82 percent of respondents (84 out of 103) said they would recommend the lab to someone else, providing a baseline for subsequent survey findings. According to lab staff, they periodically reviewed the available data and adjusted their strategy for operating the lab. Starting in March 2013, a year after the lab opened, OPM lab staff began work on a program evaluation framework to more systematically measure the lab's progress toward meeting its overarching goals. To evaluate the extent to which lab participants are learning and applying innovative approaches, lab staff intends to measure the lab's performance along three overarching categories: service experience, skill development, and project outcomes.
According to the framework, the resources dedicated to evaluation efforts will be proportional to the resources needed to host lab-sponsored events. Episodic events such as consulting sessions will correspond to a “light-touch” follow-up effort, such as immediately surveying all participants on their session experience and skills development. Longer-term, resource-intensive efforts such as immersion projects will employ a more robust follow-up effort that, in addition to assessing the session experience and skills development, will also address project-specific outcomes. Collection of assessment data in all three areas will include administering surveys to participants both before and immediately after a session; some services will also involve administering surveys to participants before a service, followed by periodic check-ins. Depending on the nature of the lab session, information on skill development and outcomes will also be obtained from session clients in pre-session scoping conversations and periodic post-session check-ins using either surveys or interviews. For one type of session, assessment of participant skill development will also include a survey of participants' supervisors. Lab staff has used a series of surveys to measure participant experience and skills development and to capture specific project-related outcomes for the different services offered. However, the survey instruments are unlikely to yield data of sufficient credibility and relevance to indicate the nature and extent to which the lab is achieving what it intends to accomplish or its value to those who use the lab space. Although several items across the surveys are reasonably aligned with generally accepted questionnaire and item design principles, many items have limitations: the language is ambiguous, the intent of the question is not clear, or directions are lacking. For example, phrases such as “changed behavior” or “tangible outputs you can move forward” are open to numerous interpretations and are likely to engender an array of responses ranging from relevant to not at all relevant to the purposes or objectives of the session. In addition, some of the items may be more likely to engender responses subject to a respondent's social desirability bias. For example, the respondent may want to provide answers that are socially desirable, maintain the status quo, or make a good impression. While some customization is to be expected, the surveys did not indicate any approach for evaluating core aspects of the lab and its value using a consistently presented set of common questions. For example, the question asking participants about the likelihood that they would recommend the lab to someone else is the type of item that could, with revision, be incorporated in all of the surveys. Analyses of a core set of items by type of lab event or service would enable lab staff to discern and compare where participants were more and less engaged in lab activities and curricula. Consequently, these survey instruments and the items on them may be susceptible to various types of question and respondent bias and could, when the responses are analyzed, produce results that would be difficult to interpret or link to expected participant effects or to the intent or activities of the workshop session. The sketch below illustrates how responses to one consistently worded core item could be analyzed by type of lab service.
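In the following minimal sketch, the survey records and service categories are hypothetical; it only illustrates the kind of core-item analysis described above, not OPM's actual survey data or analysis tools.

```python
# Minimal sketch of a core-item analysis: one consistently worded question
# ("Would you recommend the lab?") tallied by type of lab service. The
# records and service categories are hypothetical, for illustration only.

from collections import defaultdict

# (service type, would_recommend) pairs from hypothetical survey responses.
responses = [
    ("facilitated session", True), ("facilitated session", True),
    ("facilitated session", False), ("training class", True),
    ("training class", True), ("immersion project", True),
    ("immersion project", False), ("immersion project", True),
]

counts = defaultdict(lambda: [0, 0])   # service -> [recommend, total]
for service, recommend in responses:
    counts[service][1] += 1
    if recommend:
        counts[service][0] += 1

for service, (yes, total) in sorted(counts.items()):
    print(f"{service}: {yes}/{total} would recommend ({yes / total:.0%})")

# Comparing the same item across services would let lab staff see where
# participants are more and less engaged, and track movement against the
# 82 percent baseline reported from the early facilitated-session surveys.
```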
Moreover, lab staff has not developed outcome measures or milestones related to customer experience and skills development. The evaluation framework being developed by OPM does not include interim performance targets or measures. Best practices state that new initiatives benefit when managers set time-bounded, quantifiable interim goals, establish related performance measures, collect data, and use that information to assess and adjust their performance. To evaluate the overall performance of MindLab, the director said he develops an annual work plan, which describes the number and types of projects and other activities the lab will undertake, as well as the relative resource allocation to those projects and activities. He said his staff also conducts an annual review of the budget and actual expenditures with the board. OPM lab staff has been tracking outputs, such as the number of participants and the number and type of activities, meaning that they have baseline data that could inform realistic, meaningful targets and measures related to lab use and activities for the upcoming year. Although they continue to refine their surveys, they could use the results from earlier versions to establish targets and measures related to customer experience and skills development. Meaningful measures or milestones could help them assess their progress toward improving participants' ability to solve problems and accurately measure the effect of working in the lab on services, products, and processes. As mentioned previously, the lab plans to host the more resource-intensive immersion projects. To demonstrate that the lab is operating as originally intended, evaluation plans will be needed for specific immersion projects that can help track costs, benefits, and performance improvement outcomes. OPM stated that evaluation plans will be prepared for each immersion project to account for project outcomes. Officials indicated that they wanted to host their first immersion project within the next several months. Another prevalent practice we identified is leveraging other innovation labs' efforts to increase the value of the lab approach. Studies show that information sharing and interorganizational networks can be a powerful driver supporting innovation. One study showed that interorganizational networks of innovators help members develop new products at a faster rate with lower investment commitments, due in large part to the information sharing that takes place. Sharing information can help mitigate the risks and uncertainty that typically characterize innovation ventures. Best practices state the importance of establishing channels of communication and other mechanisms that facilitate knowledge-sharing and building networks of like-minded communities to help agencies achieve crosscutting objectives. For example, the Census Bureau's Chief Technology Officer suggested a way in which innovation leaders could share information and pool resources. Specifically, instead of each agency creating its own technology innovation lab with its own hardware, software, and associated maintenance, agencies could use a common innovation infrastructure service in the public cloud. Every agency could still have its own branded offering and could still provide access in its own facility or at its regional offices. However, an outside vendor could provide the infrastructure.
For example, if an agency wanted to experiment with some unique visual analytic tools, it could purchase what it needs through a subscription service rather than buying all the tools itself.

While labs provide a physical space where innovators can convene, federal agencies are not fully aware of their growing community. As of June 2013, OPM was unaware that other agencies such as Census, HUD, and NASA were pursuing a lab approach to promote innovation. Likewise, the lab directors at these agencies were not aware, or were only marginally aware, of OPM's lab and its resources or of other federal innovation labs. OPM's efforts to develop an innovation lab occurred around the same time as, or pre-dated, those of other agencies we interviewed. According to OPM officials, during its first year of operations, OPM lab staff focused their efforts on promoting awareness of the lab and its resources internally to OPM staff. In their second year, OPM lab staff planned more activities intended to promote the lab and its resources externally to connect with other federal agencies' innovation efforts.

Staff noted that the OPM lab serves as a hub of various interagency networks of innovation practitioners. For example, an interagency community of practice on idea generation meets in the lab on a monthly basis. OPM's lab staff also reported that they host weekly training sessions in the lab on best practices, including webinars about measuring the success of enterprise-level design efforts and the value of visualizing information. These training sessions include case study presentations from other federal agencies, such as GSA, and from non-federal entities. In addition, OPM has shared best practices with other public sector design labs across the globe by participating in a number of conferences. OPM is also collaborating with a current Presidential Innovation Fellow, who is building an innovation toolkit. Although projects in the lab are currently managed and for the most part delivered by OPM employees, staff noted that they are increasingly looking to leverage detailees, short-term assignments, and other ways to harness the talent of other agencies. OPM staff said they also give regular tours of the innovation lab for other government entities that are already supporting innovation initiatives or developing them. In addition to these activities, OPM hosted a convening of federal innovators to compare various agencies' innovation communication efforts.

Several federal officials we interviewed said they would welcome the opportunity to communicate, as the need arose, with a community of peers to exchange information and ideas and troubleshoot problems related to the start-up and maintenance of their labs. For example, CFPB's Creative Director for Technology and Innovation said it would be helpful to find out what other bureaus and departments are doing to incorporate design principles so that she could exchange ideas and information. An official from NASA's Swamp Works noted that it would be beneficial to show that others in the federal sector are also looking at innovation labs. To that end, simply knowing that the innovation community exists and how to initiate a conversation on a specific topic would likely benefit agency staff leading innovation efforts and help avoid the risk of a fragmented innovation community.
Because innovation necessarily entails culture change, experimentation, periodic setbacks, and often resource investments, another prevalent practice necessary to sustain a lab is leadership support. Innovation labs are one tool that agencies can use to foster innovation. Agency officials and lab directors we interviewed said leaders must be willing to embrace experimentation within the lab and understand that smart failures—failures that result from trial and error, where the alternative would be to do something truly risky due to lack of evidence—are part of the design process. For example, the Census Bureau's Chief Technology Officer noted that support by Census Bureau leadership is critical to ensure staff participation and the continued availability of funds to drive innovation in its Center for Applied Technology lab. Other lab directors highlighted several strategies they use to balance the risks and failures that accompany a problem-solving methodology rooted in a more experimental approach. These include accelerated timelines of three to six months, which allow organizations to quickly shelve projects that lack merit. Some lab leaders also noted that a quick win or early success can give new labs the underlying support they need to take on riskier projects.

In March 2014, OPM released its 2014 through 2018 strategic plan, which states that the agency plans to seek new, innovative ways to accomplish its work of advancing human resource management in the federal government. In the strategic plan, OPM indicates that, among other things, it intends to use the innovation lab and human-centered design methods to address OPM's operational challenges.

For OPM and the rest of the federal government, finding more efficient and effective ways of doing business to help meet rising citizen demands for public services is critical, particularly in an era of continued fiscal and budgetary constraints. OPM's innovation lab is one such tool intended to give rise to solutions to complex problems facing the federal government. Consistent with the experience of other innovation labs, developing performance and outcome measures, creating tools to assess performance, and further leveraging the efforts of other organizations undertaking similar work will also be critical. Clear and specific outcome measures will help OPM track and evaluate the extent to which the lab is meeting its original intent and, over time, make any necessary adjustments. Otherwise, OPM's innovation efforts may not be able to demonstrate the types of results initially envisioned.

We recommend that the Director of OPM take the following actions to help substantiate the lab's original goals of enhancing skills in innovation and supporting project-based problem solving:

Direct lab staff to develop a mix of performance targets and measures to help them monitor and report on their progress toward lab goals. Output targets could include the number and type of lab activities over the next year. Outcome targets and measures should correspond to the lab's overarching goals to build organizational capacity to innovate and achieve specific innovations in concrete operational challenges.

Direct lab staff to review and refine the set of survey instruments to ensure that, taken as a whole, they will yield data of sufficient credibility and relevance to indicate the nature and extent to which the lab is achieving what it intends to accomplish or is demonstrating its value to those who use the lab space.
For example, lab staff should consider the following actions:

Developing a standard set of questions across all service offerings.

Revising the format and wording of existing questions related to skills development to diminish the likelihood of social desirability bias, and using post-session questions that ask, in a straightforward way, whether, or the extent to which, new information was acquired.

Replacing words or phrases that are ambiguous or vague with defined or relevant terminology (e.g., terms actually used in the session) so that the respondent can easily recognize a link between what is being asked and the content of the session.

Direct lab staff to build on existing efforts to share information and knowledge within the federal innovation community. For example, OPM lab staff could reach out to other agencies with labs, such as Census, HUD, and NASA's Kennedy Space Center, to share best practices and develop a credible evaluation framework.

We provided a draft of this report to the Director of OPM for review and comment. The director provided written comments, which we have reprinted in appendix IV. In summary, OPM generally concurred with our recommendations and described ongoing and planned steps to refine evaluation efforts and further leverage other federal innovation labs. For the recommendations on evaluating performance, the director described a competency-based skills gap pilot the lab is undertaking, based on targets from pre- and post-testing of participants in lab activities. We acknowledge that this is an important step in developing performance measures, but OPM will also need targets and measures to demonstrate the lab's value in achieving specific innovations in concrete operational challenges. For the recommendation on leveraging other federal agency innovation efforts, the director noted OPM's work seeking out information and contacts from other innovation endeavors, including lab-based ones. We acknowledge OPM's more recent emphasis in this area, including participating in an interagency community of practice on innovation; federal officials we interviewed said they would welcome the opportunity to communicate as the need arose with a community of peers.

To clarify, the report recognizes sustained organizational leadership as a prevalent practice for the success of innovation labs. However, this was not a specific report recommendation but an acknowledgement of the general role leadership plays in ensuring the success of innovation labs. In her response, the director stated that OPM recently released its 2014 through 2018 strategic plan, which she said demonstrates OPM leadership's commitment to the advancement of work in the lab. Accordingly, we updated the report to reflect the most current information available at the time of our publication.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director of OPM and appropriate congressional committees. This report will also be available at no charge on our website at http://www.gao.gov.

If you or your staff members have any questions about this report, please contact me at (202) 512-4749 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V.
This appendix provides information on the scope of work and the methodology used to (1) describe the Office of Personnel Management (OPM) innovation lab's start-up and operating costs, staffing and organization, activities, and policies governing the lab's use, and (2) assess how OPM's innovation lab compares to other organizations' innovation labs, including how it uses benchmarks and associated metrics and how it addresses potential challenges to innovation.

To address the first objective, we reviewed documentation and met several times with OPM staff overseeing the lab and its activities. We reviewed the lab's construction and operations budgets, including the funding sources for the lab, and interviewed agency officials knowledgeable about the lab's budget. Based on interviews and e-mail exchanges with knowledgeable OPM and General Services Administration staff, and on a review of documents, we found OPM's lab expense data to be sufficiently reliable for the purposes of our report. We reviewed spreadsheets maintained by lab staff tracking lab outputs, such as workshops hosted in the lab and the number of attendees. We also reviewed lab performance materials such as the lab's performance plan, user surveys, and the results of those surveys. Survey specialists in our Center for Design, Methods, and Analysis reviewed the lab user surveys using the internal review guidance that is typically applied to draft GAO surveys as part of our development process and that is required before a survey is deployed. In addition to reviewing with lab staff the documents they provided us, we interviewed them about OPM's process for identifying and selecting a lab strategy, lab staff's approach to implementing a human-centered design curriculum, and their goals for the lab.

To address the second objective, we conducted a detailed literature search of material from academic institutions, global management consultants, professional associations, think tanks, news outlets, and various other organizations. We also reviewed literature documenting public, private, and academic innovation efforts and associated positive and negative outcomes. Our literature search helped us identify benchmarks and associated metrics applicable to the development and use of innovation facilities in the public, private, and nonprofit sectors. We also interviewed OPM lab staff on how they intend to identify outcomes—such as cost reductions, performance improvements, or other results—from projects undertaken by OPM since the inception of the lab.

We used the findings from our literature review to identify organizations with innovation facilities that have a dedicated physical space and use problem-solving methods similar to those of OPM's lab. We selected a mix of 11 public, nonprofit, and private organizations to visit or interview. In addition, we met with an official from the Consumer Financial Protection Bureau (CFPB). While CFPB lacks a dedicated innovation lab, the agency has a reputation among federal agencies as a leader in innovative website development. Table 4 lists the organizations we visited in person or whose representatives we interviewed by telephone. At every lab we visited or contacted, we interviewed lab representatives about the history of the lab, including why they decided to pursue a lab strategy; how the lab is used; the protocols for engaging participants; how lab directors measure the performance of the lab; challenges to promoting innovation within the organization; and practices for addressing those challenges.
Based on our literature search, we identified common challenges that can hamper organizations' efforts to use labs as innovation vehicles and prevalent practices that can support labs' success and sustainability. In addition, we reviewed our interview records to identify commonly recurring challenges and prevalent practices, and we verified that the challenges and practices we identified during our literature search were also those most often cited during the interviews.

We also interviewed representatives from two management consultancies that promote problem-solving approaches rooted in design-thinking principles, IDEO and Luma. OPM contracted with Luma to help design the lab and implement human-centered design programming. We spoke with their representatives to understand the challenges their clients face in changing organizational culture and the benchmarks and metrics they advise their clients to adopt to measure the performance of new labs and problem-solving methods. In addition, we interviewed officials from three public-sector organizations—the San Francisco Mayor's Office of Civic Innovation, Canada's Public Policy Forum, and the United Kingdom Behavioural Insights Team—that are pursuing strategies to promote innovation in their organizations but opted not to build innovation labs. They spoke to us about the challenges that prevented them from building labs and the steps they are taking to incorporate human-centered design-like problem-solving methods without a physical lab.

We conducted this performance audit from July 2013 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This table contains the same text portrayed in figure 2 and shows how innovation labs we surveyed from the public, private, and nonprofit sectors generally use their labs for multiple and similar purposes.

OPM (public sector)
Purpose of lab: To help deliver on President Obama's vision of a more effective and efficient government for the American people by supporting a government-wide community of innovators.
How it is used: Dedicated space where OPM and interagency teams can take a problem through a full design cycle, including problem framing, learning about users, analysis, concept development, testing, and rapid iteration. Capacity building—lab hosts classes and workshops. Meeting space for OPM staff, other federal workers, and interagency communities of practice.

Census Bureau (public sector)
Purpose of lab: To provide a “safe zone” where Census staff can explore new technology solutions without impact to production operations.
How it is used: Dedicated space where lab and other Census staff can develop, test, and implement ideas into production outside the standard production environment. Capacity building—lab hosts presentations of new technologies and solutions.

NASA Kennedy Space Center Swamp Works (public sector)
Purpose of lab: To provide a dedicated space where NASA KSC engineers can quickly resolve problems related to deep-space exploration.
How it is used: Space where 20 NASA engineers and scientists prototype and test emerging technologies. Swamp Works has contracts with other NASA centers worth about $7 million a year.
HUD (public sector)
Purpose of lab: Provide a space that looks different from the traditional office environment, where HUD employees can accelerate the development of solutions more efficiently than other available approaches.
How it is used: Dedicated space where HUD lab staff and mission area leads can develop, test, and implement ideas into production within a compressed timeframe.

MindLab, Denmark (public sector)
Purpose of lab: Provide a neutral space for government ministries to work with citizens and businesses to create new solutions for society.
How it is used: Ministry officials from the Danish Ministries of Business and Growth, Education, and Employment use the MindLab space and resources to take a problem through the full design cycle. Capacity building—lab hosts classes, conferences, and workshops.

UNICEF (nonprofit sector)
Purpose of lab: Help the organization become more flexible, agile, and better prepared for global changes by providing spaces where 135 country offices can collaborate with local partners.
How it is used: Dedicated spaces where UNICEF country office staff and their local partners can take a problem through a full design cycle.

Harvard (nonprofit sector)
Purpose of lab: Provides space, practical skill building, and programming for Harvard students, faculty, staff, alumni, and others engaged in new ventures, nonprofit creation, product or service innovation, small business development, and related educational and research activities.
How it is used: Incubator, workspace, and programming for start-up ventures involving Harvard students and their partners. Teaching space for Harvard students. Meet-up space for the local community.

TACSI (nonprofit sector)
Purpose of lab: To tackle tough social issues, such as family breakdown and social inequality, by building Australia's social innovation capability.
How it is used: Laboratory for co-designing new social programs for vulnerable populations—this includes generating ideas, conducting ethnographic research, and prototyping potential solutions. Meet-up space—TACSI hosts events, workshops, and conferences for the social change community. Capacity building for other social innovators.

Technology company lab, Cambridge, MA (private sector)
Purpose of lab: Provide a dedicated physical space in the heart of the Cambridge, MA tech sector that can accelerate the speed of product development, increase collaboration, and attract new talent.
How it is used: Office space for moving new and emerging technologies through the development pipeline. Meet-up space for the local tech community. Recruitment tool for top tech talent.

Microsoft (private sector)
Purpose of lab: To build a strong and permanent research and development presence in Cambridge, MA where Microsoft researchers and programmers can build relationships with local universities, biotech, and healthcare companies.
How it is used: Office space for moving new and emerging technologies through the development pipeline. Meet-up space for the local tech community. Recruitment tool for top tech talent.

Fidelity Center for Applied Technology (private sector)
Purpose of lab: Provide a dedicated physical space where Fidelity executives explore how emerging technologies can improve products and processes for internal business units.
How it is used: Laboratory where FCAT staff can identify solutions and develop new products and processes for Fidelity business units. Hosts conferences, workshops, and social events related to technological and social innovation.

Deloitte GovLab (private sector)
Purpose of lab: Provide a dedicated physical space that exemplifies how environment can promote creativity and collaboration where Deloitte's future leaders can hypothesize, research, and test new ideas.
How it is used: Leadership development institute for Deloitte's highest performing consultants. A think tank where fellows develop innovative yet practical strategies that governments can use to transform the way they deliver their services and prepare for the challenges ahead. Capacity building—GovLab educates Deloitte account teams on these emerging trends.

Seto J. Bagdoyan, (202) 512-4749 or bagdoyans@gao.gov. In addition to the contact named above, Thomas Gilbert, Assistant Director, and Judith Kordahl, Analyst-in-Charge, supervised the development of this report. Jessica Nierenberg and Anthony Patterson made significant contributions to all aspects of this report. Other important contributors included Thomas Beall, Karin Fangman, Donna Miller, and Robert Robinson.
Organizations from around the globe are emphasizing that strategies promoting innovation are vital to solving complex problems. To try to instill a culture of innovation within the agency, OPM followed the lead of a number of private sector companies, nonprofit organizations, and government bodies by creating an innovation lab. GAO was asked to examine the lab. Specifically, GAO 1) described the lab's start-up costs, staffing and organization, activities, and policies governing the lab's use, and 2) assessed how OPM's innovation lab compares to other organizations' innovation labs, including how it uses benchmarks and metrics and how it addresses challenges to innovation. GAO reviewed cost, staffing, and performance information. GAO also reviewed relevant literature on innovation and interviewed officials from public, private, and nonprofit organizations with innovation facilities similar to OPM's lab.

In March 2012, the Office of Personnel Management (OPM) opened its innovation lab, a distinct physical space with a set of policies for engaging people and using technology in problem solving. The goals of OPM's innovation lab are to provide federal workers with 21st century skills in design-led innovation, such as intelligent risk-taking to develop new services, products, and processes. OPM's lab was built at a reported cost of $1.1 million, including facility upgrades and construction, equipment and training, and other personnel costs. The lab employs approximately 6 full-time equivalents, including a director, and in fiscal year 2013, the lab's operating costs were approximately $476,000, including salaries.

OPM's innovation lab is similar in mission and design to other innovation labs GAO reviewed, and OPM has incorporated some of the prevalent practices that other labs use to sustain their operations. Specifically, OPM is using its lab for a variety of projects, including as a classroom for building the capacity to innovate in the federal government. Lab staff indicated that they plan to begin long-term immersion projects—complex projects with diverse users—within a few months. OPM plans to develop and implement evaluation plans specific to each immersion project that will help track cost benefits or performance improvement benefits associated with the projects. Starting in March 2013, OPM lab staff began work on a program evaluation framework to more systematically measure the lab's progress toward meeting its overarching goals. In addition, lab staff members are tracking lab activities, such as classes and workshops, and are surveying lab users about the quality of their experience in the lab. However, they have not developed performance targets or measures related to project outcomes, and without a rigorous evaluation framework that can help OPM track the lab's performance, it will be hard to demonstrate that the lab is operating as originally envisioned.

While labs provide a physical space where innovators can convene, federal agencies are not fully aware of their growing community. However, OPM is taking steps to ensure work done in the lab is shared across OPM and with other federal innovators—for example, by hosting weekly training sessions in the lab on best practices. Studies show that information sharing and interorganizational networks can be powerful drivers of innovation.
Among other things, GAO recommends that the Director of OPM direct lab staff to 1) develop a mix of performance targets and measures to help them monitor and report on progress toward lab goals, and 2) build on existing efforts to share information with other agencies that have innovation labs. OPM generally concurred with GAO's recommendations and described the steps being taken and planned to refine its ongoing evaluation efforts and to further leverage other federal innovation labs.
Foreign nationals who wish to come to the United States on a temporary basis and are not citizens of countries that participate in the Visa Waiver Program must generally obtain an NIV. U.S. law provides for the temporary admission of various categories of foreign nationals, who are known as nonimmigrants. Nonimmigrants include a wide range of visitors, such as tourists, foreign students, diplomats, and temporary workers who are admitted for a designated period of time and a specific purpose. There are dozens of specific types of NIVs that nonimmigrants can obtain for tourism, business, student, temporary worker, and other purposes. State manages the application process for these visas, as well as the consular officer corps and its functions, at over 220 visa-issuing posts overseas. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews; collection of biometrics (fingerprints and full-face photographs); cross-referencing an applicant's name and biometrics against multiple databases maintained by the U.S. government; and in-person interviews. Personal interviews with consular officers are required by law for most foreign nationals seeking NIVs. For an overview of the visa process, see figure 1. DHS sets visa policy, in consultation with State, and Commerce oversees the creation and implementation of strategies to promote tourism in the United States, such as the National Travel and Tourism Strategy called for in E.O. 13597.

We have previously reported on visa delays at overseas posts. In April 2006, we testified that, of nine posts with wait times in excess of 90 days in February 2006, six were in Brazil, India, and Mexico. In July 2007, we reported that 20 posts said they experienced maximum monthly wait times in excess of 90 days at least once over the past year. More recently, State has reported long interview wait times in Brazil and China. For example, in June 2010, NIV interview wait times reached 100 days at the U.S. Embassy in Beijing, China, and in August 2011, interview wait times reached 143 days at the U.S. Consulate in Rio de Janeiro, Brazil.

Following the rise of interview wait times at many posts, and especially in Brazil and China, President Obama issued E.O. 13597 in January 2012 to improve visa processing and travel promotion while continuing to protect U.S. national security. E.O. 13597 contained multiple goals for State and DHS for processing visitors to the United States, including the following:

Ensure that 80 percent of NIV applicants worldwide are interviewed within 3 weeks of receipt of application.

Increase NIV processing capacity in Brazil and China by 40 percent over the next year.

In March 2012, State and DHS released an implementation plan for E.O. 13597 that outlined the measures each agency planned to undertake to meet the goals of the Executive Order. Subsequently, in August 2012, State and DHS issued a progress report on E.O. 13597 describing the progress made in meeting the goals of the Executive Order and the plans for continued efforts to improve a foreign visitor's experience in traveling to the United States.

State's Bureau of Consular Affairs, as well as consular management officials and consular officers at the four posts we visited, reported that increased staffing levels, policy changes, and organizational reforms implemented since 2012 have all contributed to increasing NIV processing capacity and reducing NIV interview wait times worldwide.
For calculating NIV interview wait times, we used data from State on applications for visas for tourism and business purposes (B visas) and did not include other NIV categories.

According to State's Bureau of Consular Affairs, the past hiring of additional staff through various authorities and the temporary assignment of consular officers during periods of high NIV demand contributed to meeting E.O. 13597's goals of expanding NIV processing capacity and reducing worldwide wait times, particularly at U.S. posts in Brazil, China, India, and Mexico.

Increase in consular officers: According to State officials, from fiscal year 2012 through 2014, State “surged” the number of consular officers deployed worldwide from 1,636 to 1,883 to help address increasing demand for NIVs, an increase of 15 percent over 3 years. In response to E.O. 13597, State increased the number of deployed consular officers between January 19, 2012 (the date of E.O. 13597), and January 19, 2013, from 50 to 111 in Brazil and from 103 to 150 in China, increases of 122 percent and 46 percent, respectively (see fig. 2 for additional information on consular staffing increases in Brazil and China). As a result, State met its goal of increasing its NIV processing capacity in Brazil and China by 40 percent within a year of the issuance of E.O. 13597.

Limited noncareer appointments: In fiscal year 2012, State's Bureau of Consular Affairs launched the limited noncareer appointment (LNA) pilot program to quickly deploy language-qualified staff to posts facing an increase in NIV demand and workload. The first cohort of LNAs—who are hired on a temporary basis for up to 5 years for specific, time-bound purposes—included 19 Portuguese speakers for Brazil and 24 Mandarin speakers for China who were part of the increased number of consular officers deployed to posts noted above. In fiscal year 2013, State expanded the LNA program to include Spanish speakers. As of August 2015, State had hired 95 LNAs for Brazil, China, Colombia, the Dominican Republic, Ecuador, and Mexico.

Temporary assignment of consular officers: State uses the temporary redeployment of Foreign Service officers and LNAs to address staffing gaps and increases in NIV demand. Between October 2011 and July 2012, State assigned, on temporary duty, 220 consular officers to Brazil and 48 consular officers to China as part of its effort to reallocate resources to posts experiencing high NIV demand. State continues to use this method to respond to increases in NIV demand. For example, during the first quarter of fiscal year 2015, India experienced a surge in NIV demand that pushed NIV interview wait times over 21 days at three posts. To alleviate the situation, consular managers in India sent officers from other posts to the U.S. Consulate in Mumbai, which was experiencing higher wait times, allowing the U.S. Mission in India to reduce average wait times to approximately 10 days by the end of December 2014.

According to State officials, policy changes have also helped to reduce NIV interview wait times at posts, including the expansion of the Interview Waiver Program (IWP) for NIVs and the extension of the validity of some NIVs.

Expansion of interview waiver program: The IWP allows posts to waive the in-person NIV interview requirement for defined categories of “low-risk” applicants or applicants renewing an NIV for some visa categories. In 2012, the IWP for the U.S. Mission in Brazil was expanded to include first-time applicants under the age of 16 or over the age of 66.
This expansion allowed the U.S. Mission in Brazil to conduct additional walk-in NIV interviews because first-time NIV applicants that State considers to be low-risk, as well as applicants renewing visas, no longer needed to present themselves at post for an interview. According to State officials, discussions with DHS are underway to further expand the IWP.

Extending the validity period of visas: In accordance with federal law, State has extended the validity period of some visas in some countries, reducing the frequency with which a holder of a U.S. NIV would be required to apply for a renewal. (The visa validity period is the length of time the holder of a U.S. NIV is permitted to travel to a port of entry in the United States.) In November 2014, the United States and the People's Republic of China reciprocally increased the validity periods of multiple-entry business and tourist visas issued to each other's citizens to up to 10 years. The change in policy was intended to support improved trade, investment, and business by facilitating travel between the two countries. Furthermore, the extension of visa validity periods, according to State officials, is also expected to reduce the number of visas requiring adjudication over the long term at posts in China.

State's Bureau of Consular Affairs has also adopted several organizational reforms to improve its NIV processing efficiency. These include contracting out some administrative support duties, establishing leadership and management practices to better guide consular officers, opening additional consulates to expand NIV processing capacity in certain countries, and redesigning consular sections at posts.

Contracting for administrative support duties: The use of a worldwide support services contract has enabled posts to outsource certain administrative activities related to visa processing that would otherwise be handled by consular personnel. This effort, according to State officials, allows consular officers more time to focus on visa adjudication and therefore improves their productivity. The contract provides support services for visa operations at U.S. embassies and consulates, including NIV interview appointment scheduling and fee collection services. Contractors have opened 29 off-site locations in six countries to collect biometric data from NIV applicants, which are then forwarded to the post for processing and security screening prior to an applicant's scheduled interview. Before the implementation of the contract in fiscal year 2011, biometric information could be collected only at the post, when the applicant appeared for his or her interview. Consular officials we spoke with in Brazil and India stated that off-site biometric collection has added additional efficiencies to the NIV process.

Leadership and management changes: In 2012, State's Bureau of Consular Affairs launched the 1CA office to help further develop a culture of leadership, management, and innovation under budget austerity and increasing NIV demand. At three of the four posts we visited, embassy officials told us that 1CA tools and resources have helped management at post identify and develop solutions to delays in NIV processing, which they said has contributed to State's ability to reduce NIV interview wait times. For example, the U.S. Embassy in Mexico City is using 1CA to map out NIV processing steps to identify and develop solutions to existing bottlenecks.
According to consular managers at post, the process maps allow managers to graphically view the various NIV processing steps and identify where improvements can be implemented. The solutions developed from the 1CA mapping exercise have allowed the post to conduct a larger number of NIV interviews each day. In addition, the 1CA office is in the process of developing meaningful metrics, beyond NIV interview wait times, to provide consular managers with the data to improve performance.

Opening additional consulates and redesigning consular sections: Since the issuance of E.O. 13597, State has expanded the number of interview windows at posts in Brazil and China and developed plans to open two additional consulates in Brazil and to add visa services to the existing U.S. consulate in Wuhan, China, to help absorb increases in NIV demand. Additionally, at all four posts we visited, State officials told us that they have, to varying degrees, redesigned the responsibilities and location of their consular staff to improve the efficiency of their operations. For example, in China, India, and Mexico, officials reported that they have individualized the tasks that are performed at each interview window to reduce the time an applicant spends at post and streamline NIV processing. At the U.S. Embassy in Beijing, each interview window within the consular section is assigned a discrete task in the NIV adjudication process; these tasks include checking in and confirming an applicant's identity, collecting biometric data, and adjudicating NIVs at separate windows (see fig. 3 for a photograph of the NIV applicant area at the U.S. Embassy in Beijing, China).

Transfer of NIV adjudications: State has redistributed IWP adjudications within the same country to posts experiencing low NIV demand and has created an IWP adjudication section in the United States to better leverage NIV processing resources. Several missions we visited transfer IWP adjudications from a post experiencing high demand to a post experiencing low demand. For example, from February 2014 to April 2015, consular managers in the U.S. Mission in Mexico electronically transferred 44,240 IWP cases from the U.S. Consulate in Guadalajara to the U.S. Consulates in Ciudad Juarez, Matamoros, and Nogales. According to officials, the electronic transfer of the IWP adjudications allowed the U.S. Consulate in Guadalajara to keep NIV interview wait times under 21 days. Additionally, in May 2015, State's Bureau of Consular Affairs created an IWP remote processing unit in the United States to support the U.S. Mission in China. According to State officials, the output of the unit is currently over 1,000 IWP cases per day, and when fully staffed with 30 consular officers by December 2015, the unit will be able to process up to 3,000 cases per day.

According to State officials, the efforts the Bureau of Consular Affairs has implemented since the issuance of E.O. 13597 have reduced NIV interview wait times worldwide, including in Brazil and China. According to State data, even as NIV demand has increased, NIV interview wait times have generally declined. Specifically, as figure 4 shows, since July 2012, at least 80 percent of B visa applicants worldwide have been able to obtain an interview within 3 weeks of their application. This indicates that the goal of E.O. 13597 is, so far, being met.
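The E.O. 13597 benchmark reduces to a simple threshold share. The following is a minimal sketch, assuming a hypothetical list of per-applicant wait times rather than State's actual reporting data, of how the share of B visa applicants interviewed within 3 weeks could be computed.

```python
# Hypothetical B visa interview wait times (days) for one reporting period;
# values are illustrative, not State data.
wait_times_days = [2, 3, 5, 7, 9, 10, 12, 14, 18, 20, 22, 25]

THRESHOLD_DAYS = 21   # E.O. 13597: interview within 3 weeks of application
GOAL_SHARE = 0.80     # E.O. 13597: at least 80 percent of applicants worldwide

within = sum(1 for days in wait_times_days if days <= THRESHOLD_DAYS)
share = within / len(wait_times_days)

print(f"{share:.0%} of applicants interviewed within {THRESHOLD_DAYS} days")
print("E.O. 13597 goal met" if share >= GOAL_SHARE else "E.O. 13597 goal not met")
```

On these illustrative values, 10 of 12 applicants (83 percent) fall within the 21-day threshold, so the goal would be met for that period.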
NIV B visa interview wait times have also decreased even as NIV workloads have increased in Brazil and China, two countries that have historically experienced long interview wait times for NIV applicants. For example, B visa interview wait times decreased from an average high of 114 days in August 2011 to 2 days in September 2012 for posts in Brazil, and from an average high of 50 days in June 2011 to 2 days in February 2014 for posts in China (see fig. 5 for additional average wait times at posts in India and Mexico). Between January 2010 and December 2014, State reported that NIV workloads from Brazil and China increased by 161 percent and 88 percent, respectively.

State projects that the number of NIV applicants will rise worldwide from 12.4 million in fiscal year 2014 to 18.0 million in fiscal year 2019, an increase of 45 percent. Although NIV demand generally fluctuates and undergoes significant increases and decreases from outside factors—such as shifts in the world economy and events like the September 2001 terrorist attacks—the demand is generally trending upward, and has been for the past 40 years (see fig. 6). According to State's projections, NIV applications from the East Asia and Pacific region and the South and Central Asia region will increase by about 98 and 91 percent, respectively, from fiscal year 2014 to fiscal year 2019. The Western Hemisphere region is expected to receive approximately 6.9 million applicants by fiscal year 2019, an increase of approximately 30 percent from fiscal year 2014 (see fig. 7).

State has underestimated growth in NIV demand in past projections. In 2005, State contracted with an independent consulting firm to project growth in NIV applicant volume through 2020. As of 2014, 13 of the 18 countries included in this study had exceeded their 2014 NIV demand projections. The study also underestimated the sharp escalation of NIV demand in Brazil and China. By 2014, Brazil's demand had already exceeded the study's projection for NIV applicants in 2020 by over 104 percent, and in the same year, China's demand was over 57 percent higher than the study's 2020 projection for it. These increases in demand resulted in longer NIV interview wait times between 2006 and 2011 in Brazil and China. As we have previously reported, increases in NIV demand have historically affected State's ability to efficiently process visas.

Expected increases in NIV demand are further complicated by constraints on State's current NIV process, including staffing levels that are not anticipated to rise significantly through fiscal year 2016. Consular officers in 8 of the 11 focus groups and consular management officials at posts in Beijing, Mexico City, and New Delhi told us that current efforts to reduce NIV interview wait times are not sustainable if demand for NIVs continues to increase at expected rates. A consular management official at one post noted that efforts such as staff increases have been a “temporary fix” but are not a long-term solution to their high volume of NIV applicants. Staffing levels cannot be increased indefinitely due to factors such as hiring restrictions, staffing limitations established by host governments, and physical workspace constraints. For example, according to State officials, State is currently hiring only to fill vacancies caused by attrition; it expects to increase the number of consular officers by only 57 in fiscal year 2015, a 3 percent increase, and does not plan to increase the number of consular officers in fiscal year 2016.
State officials told us that they do not expect significant increases in staffing levels beyond 2016. According to State officials, staffing limitations established by host governments are also a barrier to State's Bureau of Consular Affairs' staffing efforts. For example, the Indian government currently restricts the number of staff the United States can employ at its consulates and embassies. Physical capacity limitations, such as insufficient interview windows for visa adjudication, are also a concern for efforts to increase staffing.

According to State officials, the efforts implemented since E.O. 13597 have collectively reduced NIV interview wait times. However, the effectiveness of each individual effort remains unclear due to a lack of evaluation. According to GAO's Standards for Internal Control in the Federal Government, internal controls should provide reasonable assurance that an agency's objectives are being achieved, including the effectiveness and efficiency of operations and the use of the agency's resources. Furthermore, State's evaluation policy emphasizes the importance of evaluations for bureaus to improve their programs and management processes and to inform decision makers about current and future activities. The evaluation findings, according to State's policy, are then to be used in making decisions about policy and the delivery of services.

State officials acknowledged that they had not completed any systematic evaluations of their efforts to reduce NIV interview wait times because they are not currently collecting reliable data. For example, State officials reported that the expansion of the IWP in Brazil has significantly increased their NIV processing capacity and has helped them reach the NIV interview wait time goals of E.O. 13597. However, due to an absence of data, State could not determine how many more cases were adjudicated via the IWP after its expansion and also could not quantify the impact of the expansion on reducing NIV interview wait times in Brazil. Instead, State officials said they relied on the reduction in NIV interview appointment wait times as a general indication that the efforts are working.

Furthermore, projected increases in NIV demand and the goals specified in E.O. 13597 heighten the importance of ensuring that State's resources are effectively targeted. A systematic evaluation of State's efforts to reduce NIV interview wait times would provide a clear indication of which efforts yield the greatest impact on NIV processing efficiency and could assist the agency in continuing to meet the goals of E.O. 13597. Such evaluations would help State allocate resources to the efforts that contribute most to efficiently and effectively achieving its objectives. Without such evaluations, State's ability to direct resources to the activities that offer the greatest likelihood of success in continuing to meet the goals of E.O. 13597 is at risk. State officials acknowledged that an evaluation of their efforts to improve NIV processing capacity would be helpful for future decision making.

Consular officers and managers at posts we visited identified current information technology (IT) systems as one of the most significant challenges to the efficient processing of NIVs.
Consular officers in all 11 focus groups we conducted across the four posts we visited stated that problems with the Consular Consolidated Database (CCD) and the NIV system create significant obstacles for consular officers in the processing of NIVs. Specifically, consular officers and managers at posts stated that frequent NIV system outages and failures (where the system stops working) at individual posts, worldwide system outages of CCD, and IT systems that are not user friendly negatively affect their ability to process NIVs.

NIV system outages and failures at posts: Consular officers we spoke with in Beijing, Mexico City, New Delhi, and São Paulo explained that the NIV system regularly stops working, resulting in a reduced number of adjudications in a day (whether performed at the interview window or, for an IWP applicant, at an officer's desk). Notably, consular officers in 4 of the 11 focus groups reported having to stop work or re-adjudicate NIV applications as a result of these NIV system failures. In fact, during our visit to the U.S. Embassy in New Delhi in March 2015, a local NIV outage occurred, affecting consular officers' ability to conduct adjudications. In January 2015, officers in Bogotá, Guadalajara, Monterrey, and Moscow—among the top 15 posts with the highest NIV applicant volume in 2014—experienced severe NIV performance issues, specifically an inability to perform background check queries against databases.

Worldwide outages and operational issues of CCD: Since July 2014, two worldwide outages of CCD have impaired the ability of posts to process NIV applications. On June 9, 2015, an outage affected the ability of posts to run checks of biometric data, halting most visa printing along with other services offered at posts. According to State officials, the outage affected every post worldwide for 10 days. The system was gradually repaired, but it was not fully restored at all posts until June 29, 2015, exacerbating already increased NIV interview wait times at some posts during the summer high-demand season. According to State notices, another significant outage of CCD occurred on July 20, 2014, slowing NIV processing worldwide until September 5, 2014, when CCD returned to full operational capacity. State estimated that from the start of operational issues on July 20 through late July, it issued approximately 220,000 NIVs globally—about half of the NIVs State anticipated issuing during that period. According to officials in State's Bureau of Consular Affairs, Office of Consular Systems and Technology (CST), who are responsible for operating and maintaining CCD and the NIV system, consular officers were still able to collect NIV applicant information during that period; however, processing of applications was significantly delayed, with an almost 2-week backlog of NIVs. At the U.S. Consulate in São Paulo, a consular management official reported that due to this outage, the post had a backlog of about 30,000 NIV applications, or approximately 9 days' worth of NIV interviews during peak season. Consular officers in 8 out of the 11 focus groups we conducted identified a lengthy CCD outage as a challenge to the efficient processing of NIVs.

IT systems are not user friendly: In 9 out of 11 focus groups, consular officers described the IT systems for NIV processing as not user friendly. Officers in our focus groups explained that some aspects of the system hinder their ability to quickly and efficiently process NIVs.
These aspects include a lack of integration among the databases needed for NIV adjudications; the need to manually scan documentation provided by an applicant; and the absence of standard keyboard shortcuts across all IT applications that would allow users to quickly copy information when processing NIV applications for related applicants, to avoid having to enter the same data multiple times. Some consular officers in our focus groups stated that they could adjudicate more NIVs in a day if the IT systems were less cumbersome and more user friendly. Consular officers in Beijing and Mexico City and consular management at one post indicated that the NIV system appeared to be designed without consideration for the needs of a high-volume post, which include efficiently processing a large number of applications per adjudicator each day. According to consular officers, the system is poor at handling today's high levels of demand because it was originally designed in the mid-1990s. Consular officers in São Paulo stated that under current IT systems and programs, the post may not be able to process the larger volumes that State projects it will have in the future.

Recognizing the limits of its current consular IT systems, State has initiated the development of a new IT platform, referred to as “ConsularOne,” to modernize 92 applications, including CCD and the NIV system. According to State, ConsularOne will be implemented in six phases, starting with passport renewal systems and adding, in phase five, capabilities associated with adjudicating and issuing visas (referred to as non-citizen services). However, CST officials have yet to formally commit to when the capabilities associated with non-citizen services are to be implemented. According to a preliminary CST schedule, the enhanced capabilities associated with processing NIVs are not scheduled for completion until October 2019. Given this timeline, according to State officials, enhancements to the existing IT systems are necessary and are being planned.

Although consular officers and managers we spoke with identified CCD and the NIV system as one of the most significant challenges to the efficient processing of NIVs, State does not systematically measure end user (i.e., consular officer) satisfaction. We have previously reported that in order for IT organizations to be successful, they should measure the satisfaction of their users and take steps to improve it. The Software Engineering Institute's IDEAL model is a recognized approach for managing efforts to make system improvements. According to this model, user satisfaction data should be collected and used to help guide improvement efforts through a written plan. With such an approach, IT improvement resources can be invested in a manner that provides optimal results.

Although State is in the process of upgrading and enhancing CCD and the NIV system, State officials told us that they do not systematically measure user satisfaction with their IT systems and do not have a written plan for improving satisfaction. According to CST officials, consular officers may voluntarily submit requests to CST for proposed IT system enhancements. Additionally, State officials noted that an IT stakeholder group comprising officials in State's Bureau of Consular Affairs regularly meets to identify and prioritize IT resources and can convey end user concerns about the system.
However, State has not collected comprehensive data regarding end user satisfaction, nor has it developed a plan to help guide its current improvement efforts. Furthermore, consular officers continued to express concerns with the functionality of the IT systems, and some officers noted that enhancements to date have not been sufficient to address the largest problems they encounter with the systems. Given consular officers' reliance on the IT services provided by CST, as well as the feedback we received from focus groups, it is critical that State identify and implement feedback from end users in a disciplined and structured fashion for current and any future IT upgrades. Without a systematic approach to measuring end user satisfaction, CST may not be able to adequately ensure that it is investing its resources in improvement efforts that will improve the performance of its current and future IT systems for end users.

Travel and tourism are important contributors to U.S. economic growth and job creation. According to Commerce, international travelers contributed $220.6 billion to the economy and supported 1.1 million jobs in 2014. Processing visas for such travelers as efficiently and effectively as possible without compromising our national security is critical to maintaining a competitive and secure travel and tourism industry in the United States. Although State has historically struggled with the task of maintaining reasonable wait times for NIV interviews, it has undertaken a number of efforts in recent years that have yielded substantial progress in reducing such waits. Significant projected increases in NIV demand, coupled with consular hiring constraints and other challenges, could hinder State's ability to sustain this progress in the future, especially in countries where the demand for visas is expected to rise the most. These challenges heighten the importance of systematically evaluating the cost and impact of the multiple measures State has taken to reduce interview wait times in recent years and leveraging that knowledge in future decision making. Without this, State's ability to direct resources to the activities that offer the greatest likelihood of success is limited.

Moreover, State's future capacity to cope with rising NIV demand will be challenged by inefficiencies in its visa processing technology; consular officers and management officials at the posts we visited pointed to cumbersome user procedures and frequent system failures as enormous obstacles to efficient NIV processing. State's Bureau of Consular Affairs recognizes these problems and plans a number of system enhancements; however, the bureau does not systematically collect input from consular officers to help guide and prioritize these planned upgrades. Without a systematic effort to gain the input of those who use these systems on a daily basis, State cannot be assured that it is investing its resources in a way that will optimize the performance of these systems for current and future users.

To further improve State's processing of nonimmigrant visas, we recommend that the Secretary of State take the following two actions:

1. Evaluate the relative impact of efforts undertaken to reduce nonimmigrant visa interview wait times to help managers make informed future resource decisions.

2. Document a plan for obtaining end user (i.e., consular officer) input to help improve end user satisfaction and prioritize enhancements to information technology systems.
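To illustrate what a documented end-user input process might feed into, the sketch below ranks hypothetical enhancement requests by how often consular officers raise them and by reported severity. The issue labels, the 1-to-5 severity scale, and the data are assumptions for illustration only, not State's actual feedback or CST's process.

```python
from collections import Counter, defaultdict

# Hypothetical end-user feedback records: (reported issue, severity 1-5).
# Issues and severities are illustrative only.
feedback = [
    ("system outage", 5), ("system outage", 5), ("system outage", 4),
    ("manual document scanning", 3), ("no keyboard shortcuts", 2),
    ("database integration", 4), ("database integration", 4),
]

counts = Counter(issue for issue, _ in feedback)
severity_totals = defaultdict(int)
for issue, severity in feedback:
    severity_totals[issue] += severity

# Rank candidate enhancements by frequency, then cumulative severity, so
# limited upgrade resources go to the most-reported, most-severe problems.
ranked = sorted(counts, key=lambda i: (counts[i], severity_totals[i]), reverse=True)
for issue in ranked:
    print(f"{issue}: {counts[issue]} reports, cumulative severity {severity_totals[issue]}")
```

Any real prioritization scheme would be defined in the written plan itself; the point is only that structured feedback, once collected, can feed directly into resource decisions.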
We provided a draft of this report for review and comment to State, Commerce, and DHS. We received written comments from State, which are reprinted in appendix II. State agreed with both of our recommendations and highlighted a number of actions it is taking or plans to take to implement them. Commerce and DHS did not provide written comments on the report. State and DHS provided a number of technical comments, which we have incorporated throughout the report, as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of State, the Secretary of Commerce, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or courtsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

This report reviews the Department of State's (State) nonimmigrant visa (NIV) processing operations and provides an update on the status of the goals in Executive Order (E.O.) 13597. Specifically, this report examines (1) the efforts State has undertaken to expand capacity and reduce NIV applicants' interview wait times and the reported results to date, and (2) the challenges that affect State's ability to efficiently process NIVs.

To accomplish our objectives, we reviewed relevant State and Department of Homeland Security (DHS) documents and interviewed State, DHS, and Department of Commerce (Commerce) officials. In addition, we observed consular operations and interviewed U.S. government officials at four posts—the U.S. Embassy in Beijing, China; the U.S. Embassy in New Delhi, India; the U.S. Embassy in Mexico City, Mexico; and the U.S. Consulate in São Paulo, Brazil. For our site visits, we selected posts that (1) were in countries specifically mentioned in E.O. 13597, (2) had previously experienced NIV interview wait time problems, or (3) were in countries that have the highest levels of U.S. NIV demand in the world. During these visits, we observed visa operations; interviewed consular staff and embassy management about NIV adjudication policies, procedures, and resources; conducted focus groups with consular officers; and reviewed documents and data. Our selection of posts was not intended to provide a generalizable sample but allowed us to observe consular operations at some of the highest NIV demand posts worldwide.

To determine the efforts State has undertaken to expand capacity and reduce NIV applicants' interview wait times, we reviewed relevant documents and interviewed officials from State and DHS. To determine the reported results of those efforts, we collected and analyzed data on NIV processing capacity and NIV interview wait times worldwide from January 2011 until July 2015, compared them to the goals outlined in E.O. 13597, and reviewed documentation provided by State on its efficiency efforts. For NIV interview wait time data, we focused our analysis on B visas and not on other NIV categories because this is how State measures visa wait times against the goals specified in E.O. 13597, and because B visas represent most NIVs.
For example, B visas represent 79 percent of all NIVs processed in fiscal year 2014. To determine the reliability of State's data on NIV wait times for applicant interviews, we reviewed the department's procedures for capturing these data, interviewed the officials in Washington, D.C., who monitor and report these data, and examined data that were provided to us electronically. In addition, we interviewed the corresponding officials from our visits to select posts overseas and in Washington, D.C., who input and use the NIV interview wait time data. While some posts occasionally did not update their NIV wait time data on a weekly basis, we found the data to be sufficiently reliable for the purposes of determining the percentage of posts that were below the 3-week NIV interview wait time threshold established by E.O. 13597. To determine the challenges that impact State's ability to efficiently process NIVs, we reviewed relevant documents, including State planning and NIV demand projections, interviewed State, DHS, and Commerce officials in Washington, D.C., including officials from State's Office of Inspector General, and conducted focus groups with consular officers. We also reviewed State's documentation on its information technology systems, including the Consular Consolidated Database, the NIV system, and the development plans for the ConsularOne system. To determine the reliability of State's NIV applicant projections, we reviewed the department's projections and interviewed the officials who develop the projections. We found the data to be sufficiently reliable for the purposes of providing a baseline for possible NIV demand through 2019. To balance the views of State management and obtain perspectives of consular officers on State's NIV processing, we conducted 11 focus group meetings with randomly selected entry-level consular officers who conduct NIV interviews and adjudications at the four posts we visited. These meetings involved structured small-group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. Consistent with typical focus group methodologies, our design included multiple groups with varying characteristics but some similarity in experience and responsibility. Most groups involved 6 to 10 participants. Discussions were structured, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. Our overall objective in using a focus group approach was to obtain the views, insights, and feelings of entry-level consular officers on issues related to their workload, the NIV process, and challenges they face as consular officers conducting NIV applicant interviews and adjudications. We assured participants of anonymity, promising that their names would not be directly linked to their responses. We also conducted one pretest focus group and made some revisions to the focus group guide accordingly. Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates.
Instead, they are intended to generate in-depth information about the reasons for the focus group participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. The projectability of the information produced by our focus groups is limited for two reasons. First, the information includes only the responses of entry-level consular officers from the 11 selected groups. Second, participants were asked questions about their specific experiences with the NIV process and challenges they face as consular officers conducting NIV applicant interviews and adjudications. Other entry-level consular officers who did not participate in our focus groups or were located at different posts may have had different experiences. Because of these limitations, we did not rely entirely on focus groups but rather used several different methodologies to corroborate and support our conclusions.

We conducted this performance audit from September 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual mentioned above, Godwin Agbara (Assistant Director, International Affairs and Trade), Kathryn Bernet (Assistant Director, Homeland Security and Justice), Nicholas Marinos (Assistant Director, Information Technology), Ashley Alley, Juan P. Avila, Justin Fisher, Kaelin Kuhn, Jill Lacey, Christopher J. Mulkins, and Jasmine Senior made key contributions to this report. Technical assistance was provided by Karen Deans, Katherine Forsyth, Kara Marshall, and Tina Cheng.
International travel and tourism contributed $220 billion to the U.S. economy and supported 1.1 million jobs in 2014, according to the Department of Commerce. A portion of those travelers to the United States were required to obtain an NIV. After travelers experienced extensive waits to obtain required NIV interviews in 2011, the President issued E.O. 13597 in 2012 to improve visa and foreign visitor processing, while continuing to protect U.S. national security. The E.O. set goals for State to increase NIV processing capacity in Brazil and China and reduce NIV interview wait times for applicants worldwide. This report examines (1) efforts State has undertaken to expand capacity and reduce NIV applicants' interview wait times and the reported results to date and (2) challenges that impact State's ability to efficiently process NIVs. GAO analyzed State's historical and forecast NIV data and interviewed State officials in Washington, D.C., and consular officers and management in Brazil, China, India, and Mexico. These are the four countries with the highest demand for U.S. NIVs.

Since 2012, the Department of State (State) has undertaken several efforts to increase nonimmigrant visa (NIV) processing capacity and decrease applicant interview wait times. Specifically, it has increased consular staffing levels and implemented policy and management changes, such as contracting out administrative support services. According to State officials, these efforts have allowed State to meet the goals of Executive Order (E.O.) 13597 of increasing its NIV processing capacity by 40 percent in Brazil and China within 1 year and ensuring that 80 percent of worldwide NIV applicants are able to schedule an interview within 3 weeks of State receiving their application. Specifically, State increased the number of consular officers in Brazil and China by 122 and 46 percent, respectively, within a year of the issuance of E.O. 13597. Additionally, according to State data, since July 2012, at least 80 percent of worldwide applicants seeking a tourist visa have been able to schedule an interview within 3 weeks. Two key challenges—rising NIV demand and problems with NIV information technology (IT) systems—could affect State's ability to sustain the lower NIV interview wait times. First, State projects the number of NIV applicants to rise worldwide from 12.4 million in fiscal year 2014 to 18.0 million in fiscal year 2019, an increase of 45 percent. Given this projected NIV demand and budgetary limits on State's ability to hire more consular officers at posts, State must find ways to achieve additional NIV processing efficiencies or risk being unable to meet the goals of E.O. 13597 in the future. Though State's evaluation policy stresses that it is important for bureaus to evaluate management processes to improve their effectiveness and inform planning, State has not evaluated the relative effectiveness of its various efforts to improve NIV processing. Without conducting a systematic evaluation, State cannot determine which of its efforts have had the greatest impact on NIV processing efficiency. Second, consular officers in focus groups expressed concern about their ability to efficiently conduct adjudications given State's current IT systems. While State is currently enhancing its IT systems, it does not systematically collect information on end user (i.e., consular officer) satisfaction to help plan and guide its improvements, as leading practices would recommend.
Without this information, it is unclear if these enhancements will address consular officers' concerns, such as having to enter the same data multiple times, and enable them to achieve increased NIV processing efficiency in the future. To improve State's ability to process NIVs, while maintaining a high level of security to protect our borders, GAO is recommending that State (1) evaluate the relative impact of efforts undertaken to improve NIV processing and (2) document a plan for obtaining input from end users (consular officers) to help improve their satisfaction and prioritize enhancements to IT systems. State concurred with both recommendations.
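The demand projection and the E.O. 13597 interview goal discussed above both reduce to simple arithmetic. A minimal sketch in Python (the applicant totals come from the report; the per-post wait times are invented for illustration):

```python
# Arithmetic behind two figures cited above. Applicant totals are from the
# report; the wait times below are hypothetical, for illustration only.
applicants_fy2014 = 12.4e6   # NIV applicants, fiscal year 2014
applicants_fy2019 = 18.0e6   # State's projection for fiscal year 2019

pct_increase = (applicants_fy2019 - applicants_fy2014) / applicants_fy2014 * 100
print(f"Projected growth in NIV demand: {pct_increase:.0f}%")  # ~45%

# E.O. 13597 goal: 80 percent of applicants able to schedule an interview
# within 3 weeks (21 days). Hypothetical wait times, one value per post:
wait_days_by_post = [5, 12, 30, 2, 18, 21, 40, 7]
share_within_goal = sum(d <= 21 for d in wait_days_by_post) / len(wait_days_by_post)
print(f"Posts at or under the 3-week threshold: {share_within_goal:.0%}")  # 75%
```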
PEBES legislation required SSA to begin sending benefit estimate statements to workers aged 60 and older in fiscal year 1995 and to those turning 60 during each fiscal year from 1996 through 1999; starting in fiscal year 2000, SSA must send the PEBES annually to almost every worker aged 25 and older. However, to better manage the expected workload, SSA officials are sending the PEBES to many workers ahead of schedule. As a result, most workers aged 40 and older—about 65 million—will have received their first statement by the end of 1998. The PEBES was conceived as a means to inform the public about the benefits available under the Old Age and Survivors Insurance (OASI) and the Disability Insurance (DI) programs, which together are commonly known as “Social Security.” These programs provide monthly cash benefits to retired and disabled workers and their dependents and survivors. The benefit amounts are based primarily on a worker’s earnings. By providing individual workers with a listing of their yearly earnings on record at SSA and estimates of the benefits they may receive, SSA hopes to better ensure that its earnings records are complete and accurate and to assist workers in planning for their financial future. As a result of profound demographic changes—such as the aging of the baby boom generation and increasing life expectancy—Social Security’s yearly expenditures are expected to exceed its yearly tax revenue beginning in 2013. Without corrective legislation, the trust funds are expected to be depleted by 2032, leaving insufficient funds to pay the current level of benefits. As a result of the financial problems facing the program, a national debate on how to ensure Social Security’s solvency has begun and will likely intensify. Ensuring long-term solvency within the current program structure will require either increasing revenues or reducing expenditures, or some combination of both. This could be achieved through a variety of methods, such as raising the retirement age, reducing inflation adjustments, increasing payroll taxes, and investing trust fund reserves in securities with potentially higher yields than the U.S. Treasury securities currently held by the trust funds. Some options for change, however, would fundamentally alter the program structure by setting up individual retirement savings accounts managed by the government or personal security accounts managed through the private sector. Both of these options would permit investments in potentially higher yielding securities. Proponents of adding rates of return to the PEBES believe these rates would provide individuals with information on the current program and enable them to compare their rate of return for Social Security with rates for other investments. Analysts disagree about whether it is appropriate to use rates of return to evaluate the Social Security program and the options for reform. Furthermore, using rates of return for Social Security presents a number of difficulties. Estimates would be based on a variety of assumptions, such as how long the worker is expected to live after retirement, and other decisions, such as whether to include disability benefits. These uncertainties and how they affect individual rates of return would need to be explained. 
Also, comparing rates of return for Social Security with rates for private market investments presents a variety of difficulties, such as how to account for transaction costs and differences in the level of risk. Social Security contributions are mostly used to pay benefits to current beneficiaries and are not deposited in interest-bearing accounts for individual workers. In fact, benefit payments to any given individual are derived from a formula that does not use interest rates or the amount of contributions. Still, the benefits workers will eventually receive reflect a rate of return they implicitly receive on their contributions. This rate of return is equal to the average interest rate workers would have to receive on their contributions in order to pay for all the benefits they will receive from Social Security. As part of the Social Security reform debate, some analysts contend that comparing rates of return for Social Security with rates for the private market will help individuals understand that they could have potentially higher retirement incomes with a new system of individual retirement savings accounts. Moreover, they believe that the new system would produce real additions to national saving. In turn, new saving would generate economic and productivity growth that yields real returns to society and to consumers. They assert that Social Security, in contrast, only transfers income from taxpayers to beneficiaries, detracts from saving and long-term economic growth, and produces no real economic returns. Other analysts, however, contend that the rate of return concept should not be applied to Social Security for various reasons. First, they observe that Social Security is a social insurance program that helps protect workers and retirees against a variety of risks over which they often have little control, such as the performance of the economy and inflation. For example, the Social Security program is designed to help ensure that low-wage earners have adequate income in their retirement. Second, some analysts observe that Social Security simply transfers money from taxpayers to beneficiaries and is not designed to provide returns on contributions. Third, some analysts believe that the full value of the program cannot be determined solely by comparing monetary benefits and contributions. For example, individuals benefit from Social Security in other, more general ways through reductions in poverty and being relieved of providing for their parents and other beneficiaries through some other means. Rate of return estimates will vary according to what contributions and benefits they include. Moreover, actual rates of return for individuals will differ substantially from estimates due to the uncertainty of several factors, such as how long they will live, how much they will earn, and what size families they will have. To be clearly understood, rate of return estimates for Social Security need an explanation of how they are calculated and how uncertain the estimates are. Estimates of rates of return on contributions need to be clear about which benefits are included. For example, Social Security provides benefit payments to many individuals other than retired workers. In 1996, retired workers accounted for 61 percent of all Social Security beneficiaries, and they received 68 percent of the benefits.
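The implicit rate of return described above is, in effect, an internal rate of return: the single discount rate at which the present value of a worker's contributions equals the present value of the benefits received. Which contribution and benefit streams to feed in is exactly the inclusion question raised in this discussion. A minimal sketch with entirely hypothetical cash flows (this illustrates the concept only, not SSA's benefit formula):

```python
# Implicit rate of return as an internal rate of return (IRR): the rate at
# which discounted contributions exactly offset discounted benefits.
# All cash flows below are hypothetical, not actual SSA figures.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows; index 0 is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=1.0, tol=1e-9):
    """Bisection root-finder for NPV = 0; assumes a single sign change."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# 40 working years of combined employee and employer contributions (negative),
# followed by 20 retirement years of benefits (positive).
flows = [-6000.0] * 40 + [24000.0] * 20
print(f"Implicit rate of return: {irr(flows):.2%}")  # about 2.3% for these flows
```

Bisection is used only for simplicity; because the flows change sign once (contributions first, benefits after), the IRR is unique and any standard root finder would do.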
Other beneficiaries include disabled workers, survivors of deceased workers, and spouses and children of retired and disabled workers. If the calculations include the full range of benefits provided by the Social Security program, rather than retirement benefits alone, then the calculations would also need to include the full range of contributions made for those benefits. Conversely, if the calculations include only the retirement portion of the benefits, then the contributions would need to be reduced accordingly. Social Security contributions are made by employers as well as employees. Currently, both the individual and the employer pay a 6.2 percent tax on covered earnings for OASI and DI combined. Most rate of return estimates prepared by analysts include both the employer and employee shares; however, some analysts believe the employer contributions should not be included. Analysts using both employer and employee contributions argue that employees ultimately pay the employer share because employers pay lower wages than they would if the employer contribution did not exist. Furthermore, estimates that leave out the employer contributions reflect the full benefits but not the full costs of providing those benefits. A number of other issues affect benefits, contributions, or both and would need to be disclosed with the rate of return estimate. For example, Social Security benefits are automatically adjusted for inflation. In addition, even if the disability benefits and corresponding contributions are not included in the return estimates, OASI benefits provided for families of workers who die before retirement should be included. Finally, how much individuals contribute and how much they receive in benefits depend on when they retire and whether they continue to work while receiving benefits; this could be addressed by assuming a standard retirement age. Many factors that would be included in rate of return estimates for Social Security are subject to considerable uncertainty, and these uncertainties mean that the actual rates of return that individuals receive could vary substantially from their estimates. Such factors include how long individuals will live, how much they will earn in the future, whether their contributions will also entitle their spouses or children to benefits, and what changes the Congress may make to contribution and benefit levels. These uncertainties suggest that individual estimates would be very rough and might best be described as a range of rates. The literature examining rates of return almost always discusses them in the context of the reform debate and, therefore, examines average rates for large groups of people with similar characteristics, such as birth year, income level, and gender. Such average group rates can be estimated with a reasonable degree of accuracy and precision, but an individual’s actual experience may be dramatically different. Rate of return estimates depend fundamentally on individual earnings histories, which are used to project workers’ future earnings, calculate their benefits, and estimate the amount of their contributions. Because rate of return estimates for Social Security rely on projected earnings, they are inherently uncertain. In addition, younger workers’ rates of return would be even more uncertain since they have more years for which earnings need to be projected. Under the current program structure, rate of return estimates would also need to reflect additional benefits provided by workers’ contributions. 
Their contributions not only entitle workers to retirement benefits but also entitle their spouses and children to survivor and dependent benefits. However, SSA's records do not include information on whether a worker has a spouse or children unless and until such dependents apply for benefits based on the worker's record. Moreover, neither SSA nor the workers can be certain who will have spouses, children, or survivors who might collect benefits based on the workers' earnings records and how long their dependents will collect these benefits. In addition, in many families, both the husband and wife work and one may be "dually entitled" to benefits based on both workers' records. Individuals are entitled to receive either a benefit based on their own earnings or a benefit equal to 50 percent of the benefit calculated from their spouse's record, whichever is greater. As a result of this benefit option, a dually entitled couple's rate of return on their contributions is generally different from their individual rates. However, SSA has no way to connect a working couple's two individual earnings records until one applies for benefits based on the other's records. While some analysts have sought to compare rate of return estimates for Social Security with rates of return for private market investments—such as stocks, bonds, or savings accounts—these comparisons are not as straightforward as they first appear. Explanations would be needed to understand a number of important factors, including whether the rates of return incorporated the transaction and administrative costs for investments or annuities, the differences in risk associated with Social Security and private investments, and the question of how to treat the costs of the benefits promised under the current system when switching to any other retirement system. Under typical Social Security privatization proposals, individual retirement savings accounts would offer workers the potential to receive higher rates of return on private investments than their Social Security contributions implicitly receive. However, private investments would entail a variety of transaction and administrative costs of their own, and these would vary depending on the nature of the proposal. For example, stockbrokers charge commissions for making trades, and mutual fund managers are compensated for managing the funds. Reflected in such costs are marketing and advertising expenses incurred as money managers and brokers compete for investors' business. In contrast, SSA does not maintain actual accounts for each individual but rather keeps records of earnings. Administrative costs for Social Security's OASI program are less than 1 percent of annual program revenues. Accurate rate of return comparisons would need to look at the rates after adjusting for expenses. Accurate rate of return comparisons also need to take into account the differences in risk associated with those rates. Over long periods of time, riskier private market investments, such as stocks, on average earn higher rates of return than less risky ones, such as government bonds. The riskier the investment, the greater the variation in possible investment earnings. By the same token, the riskier the investment made with retirement savings, the greater the variation in possible retirement incomes.
Finally, if rates of return for Social Security are compared with rates for alternative reform proposals, the comparisons should indicate whether the rates for the alternatives take into account the costs of the benefits promised under the current Social Security program. Any rate of return comparisons should include these transition costs and not be limited to the return on private investments. The PEBES aims to provide information about the complex programs and benefits available through the Social Security program; however, the current statement is already lengthy and difficult to understand. Adding a rate of return, along with the corresponding narrative that would be needed to understand all of the underlying assumptions and uncertainties, would further complicate PEBES’ message. In addition, placing rate of return information on the statement would add significantly to SSA’s workload, according to SSA officials. Although the PEBES is intended to be a tool for communicating with the public, we raised concerns about the usefulness of the statement in a 1996 report. We reported that although the public feels the statement can be a valuable tool for retirement planning, the current PEBES provides too much information and fails to communicate clearly the information its readers need to understand SSA’s current programs and benefits. Comments from SSA’s public focus groups, SSA employees, and benefit experts indicate that the statement contains too much information. For example, SSA reported in a 1994 focus group summary that younger workers aged 25 to 35 wanted a more simplified, one-page statement with their estimated benefits and contributions. In addition, SSA telephone representatives said that they believe most people calling in with questions have read only the section of the statement that provides the benefit estimates. Since the PEBES addresses complex programs and issues, explaining these points in straightforward language can be challenging. Although SSA officials told us they attempt to use simple language geared for a seventh-grade reading level, feedback from the public and SSA staff indicates that readers are confused by several important explanations. For example, the public frequently asks about PEBES’ explanation of family benefits. Family benefits are difficult to calculate and explain because the amounts are based on information from both spouses’ records and SSA does not maintain information that links individuals’ records with those of their spouses. In addition, many people ask for clarification on certain terms used in the statement and on how their benefit estimates are calculated. Based on our recommendation, SSA is working on simplifying the PEBES. Agency officials are currently testing four alternative versions of the statement, and they plan to use the redesigned version of the PEBES for the fiscal year 2000 mailings. For rate of return information on the PEBES to be understood, SSA would need to (1) decide how much information to provide and (2) explain it in simple straightforward language—language that could be easily understood by the diverse population of workers slated to receive the statement. SSA would first need to define rate of return and explain that individuals’ rates could vary substantially from the estimates. In addition, readers would need to be cautioned that changes in the Social Security program due to long-term financing problems could affect their rates of return. 
Furthermore, SSA would need to explain the factors included in the calculation and all the underlying assumptions and uncertainties. As discussed previously, these would include the amounts that were used for the worker's future earnings, whether the estimate includes the disability contributions and potential disability benefits, whether the employer's contributions were included along with the worker's, the worker's expected retirement age, the worker's life expectancy after retirement, and how the estimate would vary if the worker's spouse or children qualify for benefits on the worker's record. The PEBES currently addresses how the benefit estimates treat some of these factors—future earnings, retirement ages, and family benefits. However, rate of return estimates are even more sensitive to these issues than benefit estimates; therefore, they would require further explanation. For example, the PEBES currently explains that the worker's future earnings are projected to remain the same as the latest earnings on record. A rate of return estimate based on a steady level of earnings would be different from one in which the earnings vary. In addition, since the PEBES provides benefit estimates at three retirement ages, the statement would need to explain which of the three ages was used for the individual's rate of return estimate. Finally, the statement's complicated discussion of family benefits, which explains that the amount of these benefits is dependent on the worker's benefit and the number of people in the family who would receive benefits, would need to be expanded. The explanation would need to indicate whether the individual's rate of return estimate incorporates any family benefits and what effect family benefits would have on the individual's rate of return. Along with the explanations needed for the rate of return itself, PEBES recipients would need to be cautioned regarding the limitations of comparing a rate of return on Social Security with rates for alternative investments. Before making comparisons, recipients would need to know that the rate of return presented on their PEBES may need to be adjusted for other factors. As discussed earlier, these factors would include the difference in administrative costs of the alternative investments, the difference in the level of risk associated with the alternative investments, and how the costs of the benefits promised under the current program are treated. Furthermore, according to SSA, placing rate of return information on the PEBES would add significantly to workloads across the agency. For example, officials stated that they would expect the volume of calls about the rate of return information to dramatically increase their workload. Staff would need training to be prepared to respond to inquiries regarding the individual rates of return as well as how the rates compare with those for other investments. In addition, SSA officials said significantly changing the PEBES would be difficult to do in a timely manner. If individualized rates of return were to be added, SSA would need time to prepare the calculation, develop the explanations that would be needed to accompany the rates, test the new statement, make programming changes, and renegotiate the PEBES printing and mailing contract.
Given the disagreement over whether it is appropriate to apply the rate of return concept to the Social Security program and the number of assumptions that must be factored into such an estimate, it would be especially important to fully explain how the rate was calculated and how uncertain the estimate could be. However, it has already been difficult to develop a PEBES that provides readily understandable information on the existing programs and benefits alone. Adding rate of return information could significantly increase the statement's length and undermine SSA's current efforts to shorten and simplify it. Given the detailed explanations that would be needed along with the estimates, adding rate of return information to the PEBES would most likely complicate an already complex statement. We obtained comments on a draft of this report from SSA. SSA agreed with our overall conclusions and said the report reflects the difficulties the agency would face in placing understandable rate of return information on the PEBES. In addition, SSA pointed out that it is working hard to make the information currently provided in the PEBES easy for readers to understand and use and agreed that adding rate of return information would increase the complexity of the statement. Finally, SSA provided technical comments, which we incorporated in this report where appropriate. SSA's general and technical comments are reprinted in appendix II. We are sending copies of this report to the Commissioner of Social Security. Copies will also be made available to others on request. If you or your staff have any questions concerning this report, please call me or Kay E. Brown, Assistant Director, at (202) 512-7125. Other major contributors to this report include R. Elizabeth Jones, Evaluator-in-Charge, and Kenneth C. Stockbridge, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed the recent proposal that would require the Social Security Administration (SSA) to place on the Personal Earnings and Benefit Estimate Statements (PEBES) an individualized estimate of the rates of return workers receive on their contributions to the Social Security program, focusing on: (1) the general implications of using a rate of return for social security; and (2) the challenges of including this information on the PEBES. GAO noted that: (1) there is substantial disagreement about whether the rate of return concept should be applied to the Social Security program; (2) supporters of such an application point out that a rate of return would provide individuals information about the return they receive on their contributions to the program; (3) however, others contend that it is inappropriate to use rate of return estimates for social security because the program is designed to pursue social insurance goals, such as ensuring that low-wage earners have adequate income in their old age or that dependent survivors are adequately provided for; (4) in addition, calculations for rates of return rely on a number of assumptions that affect the resulting estimates; (5) for individuals, the actual rates of return can vary substantially from the estimates due to various uncertainties, such as a worker's actual retirement age and future earnings; (6) to be clearly understood, the underlying assumptions and their effect on the estimates should be explained in any presentation of rate of return information; (7) furthermore, comparing rate of return estimates for social security with estimates for private investments could be difficult for various reasons; (8) for example, the comparisons would need to indicate whether the estimates for other investments include the transaction and administrative costs and the differences in risk associated with the social security trust funds and private investments; (9) providing rate of return information on the PEBES could further complicate and lengthen an already complex and difficult-to-understand statement; (10) in GAO's previous work, it concluded that the current PEBES is too long and its explanations of social security's complex programs are not easy for the public to understand; and (11) adding rate of return estimates to the PEBES would require detailed explanations about how the calculations were made and what assumptions were used, as well as cautions about comparing a rate of return for social security with rates for private investments.
Community policing is a philosophy under which local police departments develop strategies to address the causes of and reduce the fear of crime through problemsolving tactics and community-police partnerships. According to the COPS Office program regulations, there is no one approach to community policing implementation. However, community policing programs do stress three principles that make them different from traditional law enforcement programs: (1) prevention, (2) problemsolving, and (3) partnerships (see app. II). Community policing emphasizes the importance of police-citizen cooperation to control crime, maintain order, and improve the quality of life in communities. The police and community members are active partners in defining the problems that need to be addressed, the tactics to be used in addressing them, and the measurement of the success of the efforts. The practice of community policing, which emerged in the 1970s, was developed at the street level by rank-and-file police officers. Justice supported community policing and predecessor programs for more than 15 years before the current COPS grant program was authorized. Previous projects noted by Justice officials as forerunners to the funding of community policing included Weed and Seed, which was a community-based strategy to "weed out" violent crime, gang activities, and drugs and to "seed in" neighborhood revitalization. House and Senate conferees, in their joint statement explaining actions taken on the Community Policing Act, emphasized their support of grants for community policing. The conferees noted that the involvement of community members in public safety projects significantly assisted in preventing and controlling crime and violence. As shown in table 1, $5.2 billion was authorized for the COPS grant program from its inception in fiscal year 1995 to the end of fiscal year 1997, $4.1 billion of which was appropriated over this period. The Community Policing Act does not target grants to law enforcement agencies on the basis of which agency has the greatest need for assistance. Rather, agencies are required to demonstrate a public safety need and an inability to address this need without a grant. Grantees are also required to contribute 25 percent of the costs of the program, project, or activity funded by the grant, unless the Attorney General waives the matching requirement. According to Justice officials, the basis for waiver of the matching requirements is extraordinary local fiscal hardship. In one of our previous reports, we reviewed alternative strategies, including targeting, for increasing the fiscal impact of federal grants. We noted that federal grants have been established to achieve a variety of goals. If the desired goal is to target fiscal relief to areas experiencing greater fiscal stress, grant allocation formulas could be changed to include a combination of factors that allocate a larger share of federal aid to those states with relatively greater program needs and fewer resources. The Community Policing Act also requires that grants be used to supplement, not supplant, state and local funds. To prevent supplanting, grantees must devote resources to law enforcement beyond those resources that would have been available without a COPS grant. In general, grantees are expected to use the hiring grants to increase the number of funded sworn officers above the number on board in October 1994, when the program began.
Grantees are required to have plans to assume a progressively larger share of the cost over time, with the aim of sustaining the increased hiring levels with state and local funds after the federal grant program expires at the end of fiscal year 2000. Assessing whether supplanting has taken place in the community policing grant program was outside the scope of our review. However, in our previously mentioned report on grant design, our synthesis of literature on the fiscal impact of grants suggested that each additional federal grant dollar results in about 40 cents of added spending on the aided activity. This means that the fiscal impact of the remaining 60 cents is to free up state or local funds that otherwise would have been spent on that activity for other programs or tax relief. Monitoring is an important tool for Justice to use in ensuring that law enforcement jurisdictions funded by COPS grants comply with federal program requirements. The Community Policing Act requires that each COPS Office program, project, or activity contain a monitoring component developed pursuant to guidelines established by the Attorney General. In addition, the COPS program regulations specify that each grant is to contain a monitoring component, including periodic financial and programmatic reporting and, in appropriate circumstances, on-site reviews. The regulations state that the guidelines for monitoring are to be issued by the COPS Office. COPS Office grant-monitoring activities during the first 2-1/2 years of the program were limited. Final COPS Office monitoring guidance had not been issued as of June 1997. Information on activities and accomplishments for COPS-funded programs was not consistently collected or reviewed. Site visits and telephone monitoring by grant advisers did not systematically take place. COPS Office officials said that monitoring efforts were limited due to a lack of grant adviser staff and an early program focus on processing applications to get officers on the street. According to a COPS Office official, as of July 1997, the COPS Office had about 155 total staff positions, up from about 130 positions that it had when the office was established. Seventy of these positions were for grant administration, including processing grant applications, responding to questions from grantees, and monitoring grantee performance. The remaining positions were for staff who worked in various other areas, including training; technical assistance; administration; and public, intergovernmental, and congressional liaison. In January 1997, the COPS Office began taking steps to increase the level of its monitoring. It developed monitoring guidelines, revised reporting forms, piloted on-site monitoring visits, and initiated telephone monitoring of grantees' activities. As of July 1997, a COPS Office official said that the office had funding authorization to increase its staff to 186 positions, and it was in the process of hiring up to this level. In commenting on our draft report, COPS officials also noted that they were recruiting for more than 30 staff positions in a new monitoring component to be exclusively devoted to overseeing grant compliance activities. COPS Office officials also said that some efforts were under way to review compliance with requirements of the Community Policing Act that grants be used to supplement, not supplant, local funding.
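Two figures from the discussion above lend themselves to quick arithmetic: the 25 percent local match required of grantees and the literature-synthesis estimate that each federal grant dollar adds about 40 cents of spending on the aided activity. A sketch with a hypothetical grant amount:

```python
# Hypothetical $100,000 COPS grant; the 25 percent match and the 40-cent
# fiscal-impact figures come from the report.
grant = 100_000.0

# Grantees pay 25 percent of total project cost, so the grant covers 75 percent.
total_project_cost = grant / 0.75
local_match = total_project_cost - grant
print(f"Required local match: ${local_match:,.0f}")  # $33,333

# Literature synthesis: ~40 cents of each grant dollar is added spending on
# the aided activity; the remainder substitutes for funds the jurisdiction
# would have spent anyway.
added_spending = 0.40 * grant
freed_local_funds = grant - added_spending
print(f"Added spending: ${added_spending:,.0f}; freed local funds: ${freed_local_funds:,.0f}")
```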
In previous work, we reported that enforcing such provisions of grant programs was difficult for federal agencies due to problems in ascertaining state and local spending intentions. According to the COPS Office Assistant Director of Grant Administration, the COPS Office's approach to achieving compliance with the nonsupplantation provision was to receive accounts of potential violations from grantees or other sources and then to work with grantees to bring them into compliance, not to abruptly terminate grants or otherwise penalize grantees. COPS Office grant advisers attempted to work with grantees to develop mutually acceptable plans for corrective actions. Although the COPS Office did not do proactive investigations of potential supplanting, its three-person legal staff reviewed cases referred to it by grant advisers, grantees, and other sources. COPS Office officials said that they also expected that referrals to Justice's Legal Division would result from planned monitoring activities. Of the 506 inquiries that required follow-up by the Legal Division as of December 1996, about 70 percent involved potential supplanting. In addition, Justice's Inspector General began a review in fiscal year 1997 that was to assess, among other things, how COPS grant funds were used, including whether supplanting occurred. In the course of this review, the Inspector General planned to complete 50 audits of grantees by the end of fiscal year 1997. The Office of Justice Programs also conducted financial monitoring of COPS grants, which officials said was to include review of financial documents and visits to 160 sites by the end of fiscal year 1997. In April 1997, COPS Office officials said that they were discussing ways to encourage grantees to sustain hiring levels achieved under the grants, in light of the language of the Community Policing Act regarding the continuation of these increased hiring levels after the conclusion of federal support. The COPS Office officials also noted in commenting on our draft report that they had sent fact sheets to all grantees explaining the legal requirements for maintaining hiring levels. However, the COPS Office Director also noted that the statute needed to be further defined and that communities could not be expected to maintain hiring levels indefinitely. A reasonable period for retaining the officers funded by the COPS grants had not been determined. Law enforcement agencies in small communities were awarded most of the COPS grants. As shown in figure 1, 6,588 grants—49 percent of the total 13,396 grants awarded—were awarded to law enforcement agencies serving communities with populations of fewer than 10,000. Eighty-three percent—11,173 grants—of the total grants awarded went to agencies serving populations of fewer than 50,000. Large cities—with populations of over 1 million—were awarded only about 1 percent of the grants, but these grants made up over 23 percent—about $612 million—of the total grant dollars awarded. About 50 percent of the grant funds were awarded to law enforcement agencies serving populations of 150,000 or less, and about 50 percent of the grant funds were awarded to law enforcement agencies serving populations exceeding 150,000, as the Community Policing Act required. As shown in figure 2, agencies serving populations of fewer than 50,000 also received about 38 percent of the total grant dollars—over $1 billion.
In commenting on our draft report, the COPS Office noted that these distributions were not surprising given that the vast majority of police departments nationwide are also relatively small. The COPS Office also noted that the Community Policing Act requires that the level of assistance given to large and small agencies be equal. As of the end of fiscal year 1996, after 2 years of operation, the COPS Office had issued award letters to 8,803 communities for 13,396 grants totaling about $2.6 billion. Eighty-six percent of these grant dollars were to be used to hire additional law enforcement officers. MORE program grant funds were to be used to buy new technology and equipment, hire support personnel, and/or pay law enforcement officers overtime. Other grant funds were to be used to train officers in community policing and to develop innovative prevention programs, including domestic violence prevention, youth firearms reduction, and antigang initiatives. The Community Policing Act specifies that no more than 20 percent of the funds available for COPS grants in fiscal years 1995 and 1996 and no more than 10 percent of available funds in fiscal years 1997 through 2000 were to be used for MORE program grants. Table 2 shows the number and amount of the COPS grants (awarded in fiscal years 1995 and 1996) by the type of grant. Figure 3 shows the distribution of community policing grant dollars awarded by each state and Washington, D.C. Our survey results showed that in fiscal years 1995 and 1996, grantees were awarded an estimated $286 million (plus or minus 3 percent) in MORE program funds to use for purchases of technology and equipment, hiring of support personnel, and/or payment of law enforcement officers' overtime. We estimated that, as of the end of fiscal year 1996, 61 percent of these funds had been spent to hire civilian personnel. According to our survey, MORE grantees had spent an estimated $90.1 million in fiscal years 1995 and 1996, a little less than one-third of the $286 million in MORE funds they were awarded. Overall, we estimated that about 61 percent of the MORE program grant funds spent during the first 2 years of the program was to hire civilian personnel. About 31 percent of the funds went for the purchase of technology and/or equipment, primarily computers, and about 8 percent was spent on overtime for law enforcement officers. Figure 4 shows how these funds were spent; civilian personnel, at $55.8 million, made up the largest category. Time savings achieved through MORE program grant awards were to be applied to community policing. Allowable technology and equipment purchases were generally computer hardware or software. Some technology/equipment items, such as police cars, weapons, radios, radar guns, uniforms, and office equipment—such as fax machines and copiers—could not be purchased with the grant funds. Additional support resources for some positions, such as community service technicians, dispatchers, and clerks, were allowable. Law enforcement officers' overtime was to be applied to community policing activities. Overtime was not funded for the 1996 application year. Distributions of MORE program grant expenditures were heavily influenced by the expenditures of one large jurisdiction, the New York City Police Department. This police department was awarded about one-third of the total amount of MORE grant funds awarded and had spent about one-half of all MORE grant funds expended nationwide.
About 86 percent of the money that the department spent, or $38.7 million, was for the hiring of civilian personnel. Excluding the New York City Police Department's expenditures, the highest percentage of expenditures went for purchases of technology and/or equipment, which represented about 48 percent of the MORE program grant spending by all other grantees. Table 3 shows the percentages of MORE grant funds expended for all survey respondents, the New York City Police Department, and all other survey respondents after excluding the New York City Police Department. In commenting on our draft report, COPS officials noted that nearly two-thirds of the MORE program funds awarded nationwide were for purchases of technology and/or equipment. The officials believed that significant local procurement delays may explain our finding that most expenditures through fiscal year 1996 were for civilian personnel hiring. We asked survey respondents to calculate the number of officer full-time-equivalent positions that their agency had redeployed to community policing as a result of MORE program grant funds spent in fiscal years 1995 and 1996. The respondents were asked to do these calculations using instructions provided to them in the original MORE program grant application package. (The Hessville example presented below illustrates how these calculations were to be made.) We estimated that nearly 4,800 (plus or minus 9 percent) officer full-time-equivalent positions had been redeployed. Of these, about 40 percent of the positions were redeployed as a result of technology and/or equipment purchases, about 48 percent of the positions were attributable to hiring civilian personnel, and about 12 percent of the positions were a result of law enforcement officers' overtime. The total full-time-equivalent positions were associated with an estimated $82 million, or about 91 percent of the MORE program grant funds spent, because some survey respondents reported that they were not able to calculate positions redeployed to community policing. The most common reasons the respondents cited for not being able to do so were that equipment that had been purchased had not yet been installed, and/or that it was too early in the implementation process to make calculations of time savings. Based on our mail survey responses, we estimated that about 2,400 full-time civilian personnel were hired with MORE program funds spent in fiscal years 1995 and 1996. The most frequently reported technology or equipment purchases were mobile data computers or laptops, personal computers, other computer hardware, and crime analysis computer software. As of June 1997, a total of 30,155 law enforcement officer positions funded by COPS grants were estimated by the COPS Office to be on the street. COPS Office estimates of the numbers of new community policing officers on the street were based on three funding sources: (1) officers on board as a result of COPS hiring grants; (2) officers redeployed to community policing as a result of time savings achieved through technology and equipment purchases, hiring of civilian personnel, and/or law enforcement officers' overtime funded by the MORE grant program; and (3) officers funded under the Police Hiring Supplement Program, which was in place before the COPS grant program.
According to COPS Office officials, the office's first systematic attempt to estimate the progress toward the goal of 100,000 new community policing officers on the street was a telephone survey of grantees done between September and December 1996. COPS Office staff contacted 8,360 grantees to inquire about their progress in hiring officers and getting them on the street. According to a COPS Office official, a follow-up survey, which estimated 30,155 law enforcement officer positions to be on the street, was done between late March and June 1997. The official said that this survey was contracted out because the earlier in-house survey had been extremely time consuming. The official said that, as of May 1997, the office was in the process of selecting a contractor to do three additional surveys during fiscal year 1998. In addition to collecting data through telephone surveys on the numbers of new community policing officers hired with hiring grants, the COPS Office reviewed information provided by grantees on officers redeployed to community policing as a result of time savings achieved by MORE program grants. To receive MORE program grants, applicants are required to calculate the time savings that would result from the grants and apply the time to community policing activities. To assist applicants in doing these calculations, the COPS Office provided examples in the grant application package. “Hessville is a rural department with 20 sworn law enforcement officers. Officers in the Hessville Police Department spend an average of three hours each per shift typing reports by hand at the station. Based on information collected from similar agencies that have moved to an automated field-report-writing system, the department determines that if all of the patrol cars are equipped with laptop computers, the same tasks will take the officers only two hours each per shift to complete—a savings of one hour per officer, per shift. “On any given day, 10 officers in the Hessville Police Department will use the four laptop computers being requested (some laptops will be reused by officers on different shifts) to complete paperwork in their patrol cars. Since each officer is expected to save an hour of time each day as a result of using the computers, 10 hours of sworn officer time will be saved by the agency each day, which would equal approximately 1.3 FTEs (full time equivalents) of redeployment over the course of one year, using a standard of 1,824 hours (228 days) for an FTE.” (The sketch following this discussion works the Hessville arithmetic out in full.) The COPS Office also counted toward the 100,000-officers goal 2,000 positions funded under the Police Hiring Supplement Program, which was administered by another Justice component before the COPS grants program was established. An official said that a policy decision had been made early in the establishment of the COPS Office to include these positions in the count. Special law enforcement agencies, such as those serving Native American communities, universities and colleges, and mass transit passengers, were awarded 329 hiring grants in fiscal years 1995 and 1996. This number was less than 3 percent of the 11,434 hiring grants awarded during the 2-year period. We reviewed application files for 293 of these grants and found that almost 80 percent were awarded to Native American police departments and university or college law enforcement agencies. Other special agencies included mass transit, public housing, and school police. The COPS Office also considered new police departments as special agencies.
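Worked through in code, the Hessville example quoted above follows directly from the COPS Office's standard of 1,824 hours (228 days) per FTE; the application package's "approximately 1.3 FTEs" reflects rounding of the 1.25 computed here:

```python
# The Hessville redeployment arithmetic from the MORE grant application package.
HOURS_PER_FTE = 1824          # COPS Office standard: 228 days x 8 hours
DAYS_PER_FTE_YEAR = 228

officers_using_laptops_per_day = 10
hours_saved_per_officer_per_day = 1   # report writing drops from 3 hours to 2

daily_hours_saved = officers_using_laptops_per_day * hours_saved_per_officer_per_day
annual_hours_saved = daily_hours_saved * DAYS_PER_FTE_YEAR   # 2,280 hours
ftes_redeployed = annual_hours_saved / HOURS_PER_FTE
print(f"FTEs redeployed to community policing: {ftes_redeployed:.2f}")  # 1.25
```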
The awards to special agencies averaged about $291,000 per grant. The 293 special agency grantees applied most frequently to use officers hired with the COPS funds to (1) write strategic plans for community policing, (2) provide community policing training for citizens and/or law enforcement officers, (3) meet regularly with community groups, and (4) develop neighborhood watch programs and antiviolence programs. We provided a draft of this report for comment to the Attorney General and received comments from the Director of the COPS Office. The comments are reprinted in appendix III. The COPS Office also provided some additional information and oral technical comments. The COPS Office generally agreed with the information we presented and provided updates on the progress of the office on some of the issues addressed in the report. These comments are incorporated in the report where appropriate. We are sending copies of this report to the Ranking Minority Members of your Committee and Subcommittee and other interested parties. We will also make copies available to others on request. The major contributors to this report are listed in appendix IV. Please feel free to call me at (202) 512-3610 if you have questions or need additional information. To determine grant program design features in the Public Safety Partnership and Community Policing Act of 1994, we reviewed the act and its legislative history and discussed the results of our review with COPS Office officials. To determine how the COPS Office monitored the use of grants it awarded, we reviewed documentation on monitoring procedures and interviewed officials about actions taken and planned. To determine how COPS grants were distributed nationwide, we obtained COPS Office data files on all grants awarded in fiscal years 1995 and 1996, and we analyzed the distributions by grant type; by population size reported to the COPS Office; by recipient jurisdictions according to COPS data; and by state. The data reflect the number of grants for which applicants have been advised that they will receive funding and for which they have received estimated award amounts. They do not reflect dollar amounts of funds obligated by the COPS Office or actually spent by agencies that received the grants. To determine how law enforcement agencies used grants under the MORE program, we surveyed by mail a stratified, random sample of 415 out of a total of 1,524 agencies that had been awarded MORE grants as of September 30, 1996. Using COPS Office application data, we stratified the grant recipients into four population categories, according to the population of the jurisdiction served, and six total MORE grant award amount groups. The population categories were: fewer than 50,000; 50,000 to fewer than 100,000; 100,000 to fewer than 500,000; and 500,000 and over. The MORE grant award amount categories were: fewer than $10,000; $10,000 to fewer than $25,000; $25,000 to fewer than $50,000; $50,000 to fewer than $75,000; $75,000 to fewer than $150,000; and $150,000 or more. Regardless of population size, we selected all agencies that had accepted grants of $150,000 or more. We received usable responses from 366, or 88 percent, of our contacts with the sample of 415 agencies. All survey results were weighted to represent the total population of 1,524 MORE program grant recipients. 
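The weighting step described above follows the standard stratified-sample expansion: each respondent in a stratum stands for N_h/n_h grantees, where N_h is the number of grantees in the stratum and n_h the number of usable responses. A minimal sketch (the three strata below are hypothetical and merely happen to total the report's 1,524 grantees and 366 responses; the actual design crossed four population categories with six award-amount groups):

```python
# Stratified expansion weights: each respondent represents N_h / n_h grantees.
# Hypothetical strata (grantees N_h, usable responses n_h, mean spending
# reported per respondent); not the report's actual stratum counts.
strata = [
    (900, 180, 20_000.0),
    (500, 130, 60_000.0),
    (124, 56, 250_000.0),
]

estimated_total_spent = 0.0
for N_h, n_h, mean_spent in strata:
    weight = N_h / n_h                                    # expansion weight
    estimated_total_spent += weight * n_h * mean_spent    # equals N_h * mean_spent

print(f"Weighted estimate of total MORE funds spent: ${estimated_total_spent:,.0f}")
```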
Our questionnaire asked agencies to provide the following information as of September 30, 1996: (1) the total amount of MORE program grant funds accepted; (2) the categories under which grant funds were spent—technology and/or equipment, civilian personnel, or law enforcement officer overtime; (3) the types of technology and equipment purchases made or contracted for; (4) the types of civilian personnel hired; and (5) the number of officer positions redeployed to community policing, according to calculations of time savings achieved through MORE program grant spending. We pretested the questionnaire by telephone with officials from judgmentally selected MORE program grant recipients, and we revised the questionnaire on the basis of this input. To the extent practical, we attempted to verify the completeness and accuracy of the survey responses. We contacted respondents to obtain answers to questions that were not completed and to resolve apparent inconsistencies between answers to different questions.

To determine the process the COPS Office used to calculate the number of officers on the street, we interviewed officials and reviewed documentation on how the calculations were made.

To describe funding distributions and uses of COPS hiring grants in special law enforcement agencies, we used a data collection instrument to review the COPS Office's grant application files for hiring grants accepted by special law enforcement agencies. We reviewed 293 of the 329 (89 percent) hiring grants that were awarded to special agencies in fiscal years 1995 and 1996, according to COPS Office data. The 36 files that we did not review were in use by COPS Office staff at the time we did our work.

We looked at how community policing was implemented in six locations that had received COPS grants. The locations we visited were Los Angeles, Los Angeles County, and Oxnard, CA; Prince George's County, MD; St. Petersburg, FL; and Window Rock, AZ (Navajo Nation). These locations were judgmentally selected to include four city or county police departments and two special law enforcement agencies. The departments we visited were in varying stages of implementing community policing activities. They served communities with populations ranging from 155,000 to over 1 million. Table II.1 provides additional information about the locations we visited.

In each law enforcement jurisdiction, we conducted structured interviews with the police chief or community policing coordinator, a panel of community policing officers, and representatives of local government agencies and community groups involved in community policing projects. We discussed community policing projects and asked interviewees to characterize the level of support by their organization for community policing and to discuss what they viewed as major successes and limitations of community policing for their communities. Table II.2 lists the interviewees by job title.
Los Angeles County, CA: Chief, Metropolitan Transit Authority (MTA) Police Department; Panel of community policing officers, MTA Police Department; Senior Code Law Enforcement Officer, City of Lawndale; Probation Officer, County of Los Angeles; Project Director, Esteele Van Meter Multi-Purpose Center; Assistant Principal, Manchester Elementary School (MTA officers work with students on campus).

Oxnard, CA: Police Chief, Oxnard Police Department; Panel of community policing officers, Oxnard Police Department; Assistant City Manager, City of Oxnard; Chair, Inter-Neighborhood Community Committee (liaison between neighborhood councils and city departments); Marketing Director, AT&T; President, Channel Islands National Bank; President, Colonial Coalition Against Alcohol and Drugs; Executive Director, El Concilio (Latino multiservice nonprofit); Coordinator, Interface Children and Family Services; Director, Instructional Support Services at the Oxnard High School District; Member, Sea Air Neighborhood Watch.

Prince George's County, MD: Community Policing Director, Prince George's County Police Department; Panel of community policing officers, Prince George's County Police Department; Public Safety Director, Prince George's County; Prince George's County Multi-Agency Services Team (county agencies and the police address crime concerns in communities); Chair, Public Safety Issues, Interfaith Action Committee (consortium of churches involved in social service issues); Vice President, Government Affairs, Apartment and Building Owners Association; Resident Manager, Whitfield Towne Apartments.

St. Petersburg, FL: Chief and Director of Special Projects, St. Petersburg Police Department; Panel of community policing officers, St. Petersburg Police Department; Neighborhood Partnership Director, Office of the Mayor; Executive Director and staff, St. Petersburg Housing Authority; Administrator and staff, St. Petersburg Department of Leisure Services; Chief, St. Petersburg Fire Department; Executive Director and staff, Center Against Spouse Abuse; Coordinators, Black on Black Crime Prevention Program and Intervention Program, Pinellas County Urban League; Director, Criminal Justice Administration, Operations Parental Awareness and Responsibility (PAR), Inc.

Window Rock, AZ (Navajo Nation)

The six law enforcement agencies we visited—three city police departments, one county police department, a Native American police department, and a mass transit police department—had a variety of community policing projects under way. The projects illustrated three key principles of community policing identified by the COPS Office: prevention, problem solving, and partnerships. Representatives of community groups and other local government agencies working with the police on community policing activities were generally supportive of the community policing concept. Table II.3 provides examples of community policing projects in these locations. The projects ranged from starting 18 community advisory boards in neighborhoods throughout a major city to curbing drug activity by working with the resident manager and residents of an apartment complex.

Los Angeles, CA: The police department established 18 Community Police Advisory Boards. Each board consisted of 25 volunteers whose roles were to advise and inform area commanding officers of community concerns (e.g., enforcement of curfew laws and education on domestic violence). Each board used community and police support to address the problems that had been identified. Interviewees said the boards had been effective in helping the police to build trust, involve citizens, solve problems, and reduce citizens' fear of crime.

Los Angeles County, CA (MTA Police Department): The transit authority was part of a task force that addressed problems associated with loitering and drinking by day laborers on railroad property. Using community policing techniques such as problem identification, and specific actions such as clearing shrubs, painting over graffiti, and securing railroad ties that were being used to build tents for shelter, the task force resolved the problems.

Oxnard, CA, Police Department: "Street Beat" was an award-winning cable television series sponsored by local businesses and the cable company. Interviewees said the weekly series had been one of the department's most effective community policing tools. Over 500 programs had been aired since 1985. "Street Beat" offered crime prevention tips and encouraged citizens to participate in all of the department's community policing activities. Over 300 departments contacted the Oxnard Police Department for information on replicating the television series in their cities.

Prince George's County, MD: Citizens, the resident manager, and a community policing officer worked to remove drug dealers from an apartment complex. The community policing officer used several successful tactics, including citing suspected drug dealers, most of whom were not residents, for trespassing and taking photographs of them. Citizens formed a coalition that met with the community policing officer in her on-site office, thereby increasing the willingness of residents to come forward with information on illegal activities. Some disorderly tenants were evicted. The resident manager estimated that drug dealing at the complex was reduced by 90 percent.

St. Petersburg, FL: Community policing helped to improve relations between police officers and the residents of a shelter run by the Center Against Spouse Abuse. Interviewees said that the shelter had a policy, until about 1992, that police could not enter the property. Residents were distrustful of the police. Some had negative experiences when officers went to their homes to investigate complaints of abuse. For example, residents reported that officers failed to make arrests when injunctions were violated. Since the inception of community policing, interviewees said that officers were more sensitive to victims when they investigated spouse abuse cases. Officers visited the shelter to discuss victims' rights, and residents were favorably impressed by their openness. The community policing officer in the neighborhood was praised by the shelter director for his responsiveness. On two occasions, he responded quickly to service calls, arresting a trespasser and assisting a suicidal resident.

Window Rock, AZ (Navajo Nation): A police official noted that the department was in the early development phase of community policing, attempting to demonstrate a few successful projects that could be used in locations throughout the over 26,000-square-mile reservation. One interviewee said that gang activity was partially a result of teens having nothing to do on the reservation. A community policing project had officers working with youth groups to develop positive activities and encourage participation by organizing a blood drive, sponsoring youth athletic teams, and recruiting young people to help elderly citizens. Another community policing project was the development of a computer database on gang activities and membership.
We asked interviewees representing community groups and local government agencies participating in community policing activities to characterize the level of support their organization had for community policing in their neighborhoods. Thirty-two of the 39 interviewees said that they were supportive of their local community policing programs. The other seven interviewees offered no specific response to this question, except to say that they felt it was too early in their implementation of community policing to make assessments.

We also asked interviewees representing law enforcement agencies, community groups, and local government agencies what they felt were the major successes and limitations of community policing. Responses on community policing successes emphasized improved relationships between the police and residents and improvements in the quality of life for residents of some neighborhoods. Responses on limitations emphasized that there was not enough funding and that performance by some individual community policing officers was disappointing.

Summaries of several responses on the major successes of community policing were the following:

"I have seen a big turnaround in some apartment complexes. The entire atmosphere of these places has changed. People are outside. Children are playing. This is due to efforts of community policing officers to get drug buyers and sellers off of the properties." (A community group representative.)

"There have been big-time changes here as a result of community policing. The police have developed a much higher level of trust from public housing residents than existed before. Residents will work with the police now and provide them with information. In this public housing complex, the sense of safety and security has increased. Before the community policing officers were on patrol, residents did not want to walk past the basketball courts into the community center. That is not a problem any longer. The police worked with the Department of Parks and Recreation to improve lighting and redesign a center entrance. We are now offering a well-attended course on computers at the center. People are enjoying the parks. They are even on the tennis courts. Our community policing officer has been successful in working with problem families and the housing authority staff. We provide referrals, counseling, and other resources. We have either helped families address their problems or had them evicted from our units. There are many individual success stories of young people developing better self-esteem and hygiene as a result of interacting with the community policing officer." (A housing authority director.)

"Community policing has changed how we practice law enforcement in a substantial way. We applied community policing strategies to a distressed neighborhood plagued by crime. The area had prostitution and drug dealing, and service calls to the police were high. We worked with residents and landlords to improve the situation. Closer relationships developed, and we began working on crime prevention with community groups, schools, and parents. Property managers provided better lighting for their property, cut their weeds, and screened tenants more carefully." (A community policing officer.)

Summaries of several responses on the major limitations of community policing were the following:

"Community policing is working here, but we still have a long way to go.
The challenge for the department is to convince the force that community policing is not a fad and is not a select group of officers doing touchy/feely work, but that it is a philosophy for the whole department. I think we need to reengineer the entire police department structure to fully integrate community policing into the community. I don't believe we have decentralized the department enough. For example, I think detectives should be out in the community with community policing officers, instead of at police headquarters. They should know the people in the areas to which they are assigned." (A director of public safety.)

"We don't have 'Officer Friendly' yet, even though overall attitudes have improved. The concept is good. The limitations are in the individuals doing the work. Some are good. Some are not." (A community group member.)

"Some residents have an unrealistic expectation of what community policing can do and what it cannot do. The majority of calls for service involve social problems. Some residents expect the police to solve all their social problems, such as unemployment and mediating family and neighbor disputes." (A local government official.)

Janet Fong, Senior Evaluator
Lisa Shibata, Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Justice Office of Community Policing Services (COPS) grant program, focusing on: (1) Justice's implementation of the Community Policing Act with special attention to statutory requirements for implementing the COPS grants; (2) how COPS monitored the use of grants it awarded; (3) the distribution of COPS grants nationwide by population size of jurisdiction served, by type of grant, and by state; (4) how law enforcement agencies used grants under the COPS Making Officer Redeployment Effective (MORE) grant program; (5) the process the COPS office used to calculate the number of officers on the street; and (6) the funding distributions and uses of COPS hiring grants by special law enforcement agencies. GAO noted that: (1) under the Community Policing Act, grants are generally available to any law enforcement agency that can demonstrate a public safety need; demonstrate an inability to address the need without a grant; and, in most instances, contribute a 25-percent match of the federal share of the grant; (2) to achieve the goal of increasing the number of community policing officers, the law required that grants be used to supplement, not supplant, state and local funds; (3) the COPS Office provided limited monitoring of the grants during the period GAO reviewed; however, the office was taking steps to increase its level of monitoring; (4) about 50 percent of the grant funds were awarded to law enforcement agencies serving populations of 150,000 or less, and about 50 percent of the grant funds were awarded to law enforcement agencies serving populations exceeding 150,000, as the Community Policing Act required; (5) about $286 million, or 11 percent of the total grant dollars awarded in fiscal years (FY) 1995 and 1996, were awarded under the MORE grant program; (6) according to the results of a survey GAO did of a representative national sample of those receiving grants under the COPS MORE grant program in FY 1995 and 1996, grantees had spent an estimated $90.1 million, or a little less than one-third of the funds they were awarded; (7) they spent about 61 percent of these funds to hire civilian personnel, about 31 percent to purchase technology or equipment, and about 8 percent on overtime payments for law enforcement officers; (8) the distributions of MORE program grant expenditures were heavily influenced by the expenditures of the New York City Police Department, which spent about one-half of all the MORE program grant funds expended nationwide; (9) to calculate its progress toward achieving the goal of 100,000 new community policing officers on the street as a result of its grants, the COPS Office did telephone surveys of grantees; (10) as of June 1997, the COPS Office estimated that a total of 30,155 law enforcement officer positions funded by COPS grants were on the street; (11) according to the results of GAO's review of COPS Office files, special law enforcement agencies were awarded 329 community policing hiring grants in FY 1995 and 1996--less than 3 percent of the total hiring grants awarded; and (12) special agency grantees applied most frequently to use officers hired with the COPS funds to write strategic plans, work with community groups, and provide community policing training to officers and citizens.
The St. Clair and Detroit Rivers, and Lake St. Clair, provide multiple benefits to residents of Michigan and Ontario, Canada, who use the water bodies as their primary source of drinking water as well as for recreation such as boating and fishing. Sensitive ecological areas located along the corridor include Humbug Marsh, the last Great Lakes coastal marsh on the Michigan mainland of the Detroit River. It contains the greatest diversity of fish species found in the Detroit River and it is part of the migration route for 117 fish and 92 bird species. The Detroit River itself was designated an American Heritage River in 1998 for these ecological resources. Despite these and other benefits, the St. Clair and Detroit Rivers are considered “Areas of Concern” by the U.S. and Canadian governments under the Great Lakes Water Quality Agreement as a result of beneficial-use impairments, such as restrictions on fish consumption. Pollutant discharges to the waters of the corridor include CSOs—caused by heavy rains that force wastewater treatment plants to bypass their overburdened systems and discharge raw or partially treated waste directly into the water bodies. Michigan law requires that wastewater treatment facilities report their combined and sanitary sewer overflows to the Michigan DEQ within 24 hours. Discharges from industrial facilities with NPDES permits account for additional pollutants that enter the waters of the corridor. Industries with NPDES permits are required to report on the quality of all discharges and to detail any pollutants discharged that exceed their permit limits to EPA in monitoring reports at intervals specified in their permits, commonly monthly. As a result, NPDES-permitted industries regularly monitor their discharges. In addition to these requirements, federal law requires that parties that discharge oil or a hazardous substance beyond specified quantities into waters of the corridor report these incidents to the NRC. Spills and other pollutant discharges might also be reported to the NRC by members of the public that observe pollutant materials in waterways. When spills, industrial permit violations, and sewer overflows contain oil, they are visible—and more likely to be reported by observers. In contrast, releases of chemicals into the water are oftentimes not visible, unless they can be detected by their effects, such as fish kills. Figure 2 illustrates these sources of pollution. While EPA has federal regulatory responsibility for NPDES-related discharges and CSOs, EPA and the Coast Guard share responsibility for spill prevention and response on the U.S. side of the corridor. The National Contingency Plan and the Southeast Michigan Area Contingency Plan describe a geographic division of responsibility between these agencies, but due to EPA’s expertise, the Coast Guard may refer chemical spills to EPA even if the spills are in locations otherwise assigned to the Coast Guard. When spills originate on land but impact the navigable waters, both agencies might be involved in response. Within EPA, Chemical Preparedness officials enforce regulations that address chemical release reporting requirements while Oil Program officials coordinate spill response and oil spill prevention inspections. When spills involve industrial permit violations or sewage releases, EPA’s NPDES program officials are also involved—but because EPA approved Michigan’s NPDES program, Michigan officials are more directly involved in these cases. 
As agencies respond to spills, they work with responsible parties to ensure that those parties fund the cost of cleanup activities. If EPA's and the Coast Guard's spill responders do not identify the responsible party, however, they may obtain funds from the Oil Spill Liability Trust Fund (Oil Fund) or Superfund to finance their response efforts, including the cleanup. According to Coast Guard officials, notification of potentially affected parties is oftentimes a component of the agencies' spill response efforts. In addition to the federal agencies, the Michigan DEQ and State Police also provide spill notification. On the Canadian side of the corridor, the Ontario SAC consolidates spill reports routed to its center, as well as those routed to other agencies. For example, the Ontario Ministry of the Environment has an agreement with Environment Canada under which it receives all spill reports on the federal agency's behalf.

There are many potential pathways for spill notification in the corridor. The overall process can be divided into spill occurrence and reporting by a responsible party or observer to a designated reporting center; spill reporting from designated spill reporting centers to response agencies; and spill notification from response agencies to stakeholders, including drinking water facilities. Sometimes parts of the process are collapsed; for example, spill reporting centers may notify other stakeholders as well as response agencies. Alternatively, the process can be lengthened if multiple agencies are responsible for notifying other stakeholders in sequence.

Agency spill data are not sufficient, for multiple reasons, to accurately determine the actual number or volume of spills in the St. Clair–Detroit River corridor. Many spills go unreported because responsible parties may not understand or comply with reporting requirements. On the other hand, there are oftentimes multiple NRC reports for the same spill, since several observers may report it. EPA Region 5 does not remove all duplicate spill reports from its database or update its data after investigating spills. In contrast, Coast Guard officials in District 9 document their investigations and use the information to update their spill data, but they do not update spill volume estimates because of automated system limitations. Other events, including CSOs and industrial permit violations, are reported more frequently in the corridor.

NRC, EPA, Coast Guard, and Canadian officials believe that many spills are never reported and, therefore, that spill data do not represent the true number of spills. Though responsible parties are required by law to immediately report spills in amounts beyond certain minimum quantities, agency officials believe they may not do so for a variety of reasons. U.S. and Canadian officials suggested that responsible parties may not be aware of spills, may not understand the reporting requirements, or may not want to receive "bad press" or be forced to pay the costs of the cleanup. Reporting by responsible parties and others is critical because only one water quality monitoring station capable of detecting spills exists in the corridor. The Sarnia-Lambton Environmental Association (SLEA), a Canadian industry consortium, maintains a monitoring station south of the highly industrialized Sarnia area.
Though SLEA monitors for a suite of chemicals, it does not detect all types of discharges. And while it shares spill data with the Ontario Ministry of Environment, its purpose is not to collect spill-related information for regulatory agencies; rather, it collects the information as a service to SLEA members, as well as to agencies and communities.

When spills are reported, in many cases the responsible party is unknown. In many of these instances, a member of the public or a party other than the responsible party provides information to the NRC. EPA's and the Coast Guard's spill data indicate that 67 percent and 29 percent, respectively, of reported spills in the corridor were released from an unknown source in the time period we reviewed. Ontario SAC data indicate that 10 percent of Canadian spills were from an unknown source.

Another reason spill data do not accurately represent the number of actual spills is that NRC spill data record some spill events multiple times. The NRC received 991 reports of spills in the corridor from 1994 to 2004, but these may include multiple reports of the same spills. NRC officials are responsible for maintaining a call center for obtaining spill information and relaying the information to the appropriate agencies that are tasked with response. They are not required to assess whether multiple reports pertain to the same spill, as this would require investigation. NRC officials told us that, as a result, many duplicate spill reports exist. Coast Guard officials from District 9 told us that they could, after investigating spill incidents, identify duplicate spill reports provided by the NRC, link these duplicate reports to single spill incidents, and provide that information to the NRC so that it can update its records.

Duplicate reporting has been addressed by Ontario's SAC, which obtained 157 reports of spills on the Canadian side of the corridor during the same time period. Unlike the NRC, the Ontario SAC determines whether each spill report is unique when it records its information. The Ontario SAC is staffed by Ontario Ministry of Environment officials who are responsible both for obtaining preliminary spill information for the province and for determining which spill reports relate to the same incident. The Ontario SAC has a rolling summary of spill incidents on a display screen and on staff computers, which allows staff to identify multiple reports that relate to a common incident. (See fig. 3.) The Ontario SAC's Emergency Management Coordinator told us that when these safeguards fail to eliminate a duplicative spill report, subsequent corrections are made.

To develop a process similar to Ontario's for the U.S. side of the corridor, Michigan State Police officials told us that between 1986 and 1988 state officials explored the option of creating a spills center. At the time, they estimated that it would have cost $2 million to operate and would have required 10 staff, including a chemical specialist and three shifts of phone operators. This was viewed as prohibitively expensive by Michigan officials, and as an alternative, the State Police and the Michigan DEQ's Pollution Emergency Alerting System (PEAS) began operating as a spill notification system. The PEAS system is used for reporting spills to the Michigan DEQ during non-business hours, including holidays, weekends, and evenings.
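The screening that Ontario SAC staff perform can be thought of as a matching problem: deciding whether an incoming call describes an incident already on file. The sketch below shows one simple heuristic for this; the 2-hour window and the same-material, same-waterway matching rule are our assumptions for illustration, not the SAC's actual criteria, which rely on staff judgment.

```python
# Illustrative duplicate-report matching: reports of the same material on
# the same waterway within a 2-hour window are treated as one incident.
# The thresholds are assumptions, not the Ontario SAC's actual rules.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Report:
    received: datetime
    material: str
    waterway: str

def group_into_incidents(reports, window=timedelta(hours=2)):
    """Group reports into incidents using a simple proximity heuristic."""
    incidents = []  # each incident is a list of reports
    for report in sorted(reports, key=lambda r: r.received):
        for incident in incidents:
            last = incident[-1]
            if (report.material == last.material
                    and report.waterway == last.waterway
                    and report.received - last.received <= window):
                incident.append(report)
                break
        else:  # no existing incident matched; open a new one
            incidents.append([report])
    return incidents
```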
Spill data from PEAS, however, are similar to NRC data in that they include multiple entries for single spills, because each call is logged rather than each unique spill event recorded.

Unlike the NRC, response agencies such as EPA are required to assess each reported spill and therefore should have reliable spill information, but this is not the case. EPA Region 5 does not eliminate all duplicate spill reports because it does not respond on-site to the majority of spills for which it receives reports. EPA Region 5 officials told us that they rely on Michigan DEQ to respond to the majority of spills, since state responders are in closer proximity. Region 5 officials respond to spills upon receiving a request for assistance from Michigan DEQ officials, and when spills exceed 1,000 gallons, EPA officials respond to provide assistance even if they are not asked to do so. They told us that they investigate very few spills on-site—perhaps roughly 1 percent of spills—due to limited staff resources. Instead, EPA Region 5 officials follow up with state spill responders by phone to obtain more detail on spills. Though their operating protocols state that responders are to complete pollution reports and update spill data after investigation, EPA Region 5 officials told us that responders typically have not done so because they fail to make it a priority. For this reason, EPA officials were unable to tell us which spills in the corridor in our time frame were investigated by their agency. They told us that EPA imports spill data from the NRC and does not modify the data; as a result, EPA's spill data set is of limited use. EPA Region 5 officials providing spill response in the corridor began using a new Web-based spill data system, Web Emergency Operations Center (Web EOC), in the fall of 2004. EPA officials are hopeful that spill responders will update spill information in the system following their investigations; however, they said that it is too soon to tell.

Like EPA, Coast Guard officials from District 9 told us that they assess and investigate each spill, whether they go on-site or use phone calls and other means to obtain information; unlike EPA, however, Coast Guard officials update spill information following these investigations. While the Coast Guard's spill data sets included information on spill materials, the causes of spills, and how each spill was resolved, the formatting of the data sets makes it difficult to access accurate information on spill volumes. For example, a spill listed in the Coast Guard's data set as a 2,000-gallon spill is reported in the Coast Guard's annual report as being over 8 million gallons. Similarly, the Coast Guard's spill data set contains a reference to a 75-gallon oil spill, but summaries written by the Coast Guard's District 9 responders to the spill state that over 66,000 gallons of oil were recovered. When asked about these and other discrepancies, Coast Guard officials from District 9 told us that they are unable to update the field in their database that contains preliminary volume estimates. Instead, they update volume information in narrative fields. As a result, it is difficult to assess the severity of any given spill in the Coast Guard's data sets.

The number of reported spills is exceeded by other types of events, such as CSOs and industrial permit violations, that are reported more frequently in the corridor. EPA's data on U.S.
industrial permit violations indicate that approximately 2,200 were reported in the corridor during the 11-year period we reviewed; over 1,800 were greater than 50 gallons (or of an unknown volume). Michigan DEQ has tracked CSOs on the U.S. side of the corridor since 1999. Its data indicate that roughly 1,400 CSOs were reported in the corridor from 1999 to 2004. These data might be subject to the same limitations as the spill data because industrial permit violations and CSOs are self-reported and facilities may not report all of these events. However, spills may be particularly subject to underreporting because they are not part of a structured program, as CSOs and industrial permit violations are. Figure 4 illustrates the relative percentages of spills, industrial permit violations, and CSOs of greater than 50 gallons (or of an unknown volume) that were reported in the corridor in the 6-year period between 1999 and 2004, the period for which CSO data were available.

Typically, CSOs in the corridor contain biological waste, commercial and industrial waste, and storm water runoff from streets and other surfaces. In the Detroit area, however, CSOs are more likely to contain industrial waste in concentrations that have greater potential to degrade water quality. In addition to sewage from 3 million area customers and 78 municipalities that send their waste to the Detroit plant, the wastewater treatment facility treats industrial waste from over 250 major industries. The facility has approximately 80 outfalls and is one of the largest wastewater treatment plants in the world. While the facility has an industrial pretreatment program that requires that industries' waste meet certain limits before treatment, these limits may be relatively lenient, according to EPA officials, resulting in high volumes of waste flowing into the facility. For example, EPA officials told us that the facility has lenient oil and grease pretreatment limits. In the event of a CSO, the pretreated material that bypasses the Detroit wastewater treatment facility and is discharged into the Detroit and Rouge Rivers may contain industrial waste, including oil, grease, and other materials.

The Detroit facility has historically had difficulty complying with permit requirements. To address these deficiencies, EPA filed suit against the Detroit facility in the 1970s, and the resulting consent decree has, according to EPA officials, provided a basis for many required changes to improve the facility. However, a lawsuit filed by EPA in the 1980s, which related primarily to the facility's industrial pretreatment program, was dismissed in federal court.

Spill notification may involve the following: (1) spill occurrence and reporting by a responsible party or observer to a designated reporting center or a response agency; (2) spill notification from response agencies to one another; and (3) spill notification by response agencies to drinking water facilities and other stakeholders. Spill notification between the United States and Canada is outlined in two agreements. The coast guards of each country and officials from the Michigan State Police and Ontario SAC have agreed to notify one another of spills; however, these two agreements are not explicit about which spills warrant notification or how quickly notification should occur. We reviewed six selected spill incidents to gain insight into the spill notification process from initial reporting to drinking water facility notification.
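Because the notification steps just described occur in sequence, the elapsed time from a spill to drinking water facility notification is roughly the sum of the delays at each step. The sketch below makes that explicit; the stage ranges are illustrative values consistent with the fastest incidents profiled later in this report, not measured distributions from agency data.

```python
# A simple sequential model of the notification chain. Stage durations
# (in hours) are illustrative ranges, not agency data.

stages = {
    "occurrence_to_report":         (0.0, 2.5),  # responsible party or observer
    "report_to_other_agencies":     (1.0, 2.0),  # e.g., Ontario notifying Michigan
    "agencies_to_water_facilities": (1.0, 2.5),  # officials calling facilities
}

best = sum(low for low, high in stages.values())
worst = sum(high for low, high in stages.values())
print(f"End-to-end notification: {best} to {worst} hours")
# -> 2.0 to 7.0 hours; the fastest incidents we profiled took under 5.
```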
Drinking water facility operators on the U.S. side of the corridor had differing perspectives on current notification processes, but the majority expressed concern that their facilities could be contaminated by spills due to untimely notification. Finally, efforts have been made to develop informal notification processes between individual industries or trade associations and drinking water facilities.

There are several potential pathways through which spill notification may occur in the corridor. The overall process can be divided into spill occurrence and reporting by a responsible party or observer to a designated reporting center or response agency; spill reporting from designated spill reporting centers to response agencies; spill notification from response agencies to other response agencies; and notification to stakeholders, including drinking water facilities. Sometimes parts of the process are collapsed; for example, spill reporting centers may notify other stakeholders as well as response agencies. Alternatively, the process may be lengthened if multiple agencies are responsible for notifying other stakeholders in sequence (see fig. 5).

The Canada–United States Joint Marine Pollution Contingency Plan states that on-scene coordinators (OSCs) from the U.S. and Canadian Coast Guards may notify each other of spills when there is a substantial threat of the spreading of pollutants across shared boundaries, including the St. Clair–Detroit River corridor and other waters of the Great Lakes. The plan arises from the Great Lakes Water Quality Agreement between Canada and the United States, which calls for development of a joint contingency plan for use in the event or threat of a spill involving oil or a hazardous substance. The notification called for in the plan is conducted by phone between the two coast guards. They also provide warning messages to each other when they are uncertain as to whether a spill will impact the other's waters; when a joint response is needed to address a spill, they call or communicate via fax. Officials from the Coast Guard told us the plan has been used for joint response only twice since 1994. While spill-related warnings have not been systematically tracked between the U.S. and Canadian Coast Guards, officials from the U.S. Coast Guard told us they are starting to track the warning messages to and from Canada. Though U.S. Coast Guard officials may notify Canadian Coast Guard officials of spills, there is no guidance or directive for either party to notify local stakeholders, such as drinking water facilities; however, they told us that they sometimes do so as a courtesy.

Though the U.S. and Canadian Coast Guards have had a spill notification process in place since 1978, Michigan and Ontario officials believed that another notification process was necessary at the state and provincial level to expedite notification of stakeholders such as drinking water facilities. To address this need, the State of Michigan and the Province of Ontario agreed in 1988 to contact one another by phone if an unanticipated or accidental discharge of pollutants occurred and the discharge was likely to adversely affect the adjoining jurisdiction or drinking water supply. The Michigan State Police were designated by the state's governor as the authority responsible for this task because they have the capability to receive information 24 hours a day, 7 days a week.
According to Michigan State Police officials, this notification process was intended to provide immediate spill-related information to state authorities, who in turn could provide that information to stakeholders such as drinking water facilities. These officials told us that they believe duplication in notification efforts at the federal and state levels is beneficial: because any one system might fail, stakeholders at all levels are more likely to obtain information if multiple processes are involved.

The responsibility for communicating spill information to the public generally resides with state and local authorities, who are presumed to be the first agencies on the scene. This responsibility was established in the Emergency Planning and Community Right-to-Know Act (EPCRA) of 1986, which requires states to establish an emergency planning and notification system. This system includes local emergency planning committees, which are charged with creating procedures for receiving and responding to public requests for information. However, the act contains no proactive notification requirement for the local planning committees.

Neither the Joint Marine Pollution Contingency Plan nor the Ontario–Michigan Joint Notification Plan contains explicit requirements for what types of spills warrant notification or how quickly notification must be given. For example, Ontario Ministry of Environment officials told us that they classify some sewer overflows as spills. These include sewage bypasses caused by equipment failure, power outages, and maintenance shutdowns. Michigan officials, on the other hand, do not consider these events to be spills because they are regulated separately. U.S. Coast Guard officials said they do not regularly provide information about sewer overflows to Canadian officials: they are not required to do so, these events occur frequently, and it would not be feasible to relay information on each occurrence in the corridor.

Even when Ontario and Michigan officials agree on what type of event is considered a spill, they told us that they do not have a common understanding of what magnitude of spill requires notification. According to Michigan officials, the agreement does not specify spill volumes that trigger notification because the agreement's authors were more concerned with spill-material toxicity. Michigan and Ontario officials told us that they have tried to better define when notification is required, but they are frustrated because they have not yet reached consensus on the issue. For example, Michigan officials independently explored the idea of notifying Ontario officials only of spills that exceeded 1,000 gallons. Ontario officials, upon learning of this limit, did not agree. They thought this figure was too high and also indicated that volume alone is not an adequate measure of potential impact; in their opinion, other factors such as toxicity and concentration also need to be considered.

Since two large chemical spills occurred at Sarnia industrial facilities in 2003 and 2004, Ontario officials told us they have notified Michigan officials of spills of various sizes but have not always been informed of large U.S. spills by Michigan authorities. Ontario officials provided examples of occasions when the Province learned of spills originating in Michigan and impacting Ontario through calls to the SAC from fishermen and other local stakeholders.
Michigan State Police officials told us they are uncertain as to whether they are notified of all Canadian spills. These officials have not tracked spill notification to and from Ontario; however, they told us they intend to start doing so.

Though the Ontario–Michigan spill notification agreement specifies that notification is to be immediate for those spills likely to adversely affect the adjoining jurisdiction, officials on both sides told us that they are not always notified in a timely manner. Michigan DEQ officials told us that the greatest lag in the notification process is the time between when a spill occurs and when it is reported by a responsible party to agency officials. Ontario officials told us that they are not always able to notify immediately because some assessment is often required to determine if there is any likelihood of an impact on the U.S. side. Ontario officials also told us that the number of parties or steps involved in the Michigan notification process is greater than the number involved in their own process, and this could contribute to delays in Michigan's spill notification. A local official from a county bordering Lake St. Clair also told us that the process employed by Michigan State Police and Michigan DEQ officials to notify stakeholders has too many steps, and drinking water facilities are too far down the list for timely notification. Two local officials told us that Michigan's spill notification process should include electronic communication, rather than relying exclusively on a phone tree, since a phone tree provides too many opportunities for communication to be disrupted.

Spill notification varies from spill to spill, depending on the unique circumstances of the incident. We selected six spill cases to illustrate the various ways that spill notification can occur. These six cases were chosen to maximize variability among several factors, including country of origin, spill material, and whether the responsible party was known. (See table 1.) In three of the six cases we reviewed, the public, rather than the responsible party, was the first source of spill information to response agencies. In one of these cases, the responsible party later provided the approximate time that the incident occurred, so we could calculate the time between spill occurrence and reporting, which was about 24 hours.

Notification of agency officials and then drinking water facilities occurred most quickly when the responsible party reported the spill within 2.5 hours of its occurrence. In February and May of 2004, respectively, a methyl ethyl ketone spill and an oily water spill occurred in Ontario and entered the St. Clair River. For these spills, Ontario officials notified Michigan officials within 1 to 2 hours of the spill being reported. Michigan drinking water facilities were then informed of the spill by Michigan officials within the next 1 to 2.5 hours. For these incidents, notification took less than 5 hours from spill occurrence to notification of drinking water facilities.

When responsible parties did not promptly report the spill, the notification process took 2 days or more. For two chemical spills that we reviewed, an ethylene and propylene glycol spill in Michigan and a vinyl chloride monomer spill in Ontario, the responsible party failed to notify regulatory officials until several days after the spill occurred. The Canadian spill was not detected by the responsible party because its monitoring equipment was not running as a result of a power outage.
The U.S. spill was not detected until a member of the public observed fish dying and reported it to Michigan DEQ officials; the responsible party failed to notify state officials of the spill.

In addition, our review of the six selected cases illustrated that in five cases, agencies notified one another per the notification agreements. In the case in which they did not, Michigan officials determined that there was no potential impact to Canadian waters. Finally, for the six spills we reviewed, drinking water facilities were not notified in three instances. In these cases, agency officials determined that it was unnecessary to notify the facilities because, in their view, the facilities would not be affected or the information was deemed too late to be useful. Figure 6 shows the notification milestones for the six spills we profiled.

Opinions among drinking water facility operators about the timeliness of spill notification varied by location along the corridor. While nearly all operators with facilities along the St. Clair River and the northern half of Lake St. Clair told us that spill notification was not timely, almost all operators with facilities along the lower half of Lake St. Clair and the Detroit River told us that notification was timely. These operators indicated that proximity to spill locations makes a difference in their definition of timeliness because they might have more or less time to prepare for spill material to pass their intakes. Figure 7 illustrates the locations of U.S. drinking water facilities in the corridor. Despite the difference of opinion on notification timeliness, the majority of the 17 drinking water facility operators along the corridor told us they would like to be notified of a spill immediately, or within 1 hour or less of its occurrence. In the six spills we profiled, notification never occurred in this time frame.

Furthermore, many Michigan drinking water facility operators along the corridor expressed concern that their facilities could be contaminated by spills. Some cited factors that could increase the likelihood of facility contamination, such as vessel traffic along the corridor or the number of industries located along it. They told us that spill notification plays a key role in whether their facilities might be contaminated. Some told us that spill notification is the most important factor in their ability to protect the drinking water. Two facility operators also indicated that their customers have expressed concerns about the safety of their drinking water. Generally, facility operators located along the St. Clair River and the northern part of Lake St. Clair expressed greater concern than operators located along the southern part of Lake St. Clair and the Detroit River. For example, a facility operator in the northern part of the corridor told us that he believes drinking water facility contamination due to spills is "a matter of when, not if."

However, Michigan DEQ officials told us that several factors make it unlikely that spills in the St. Clair River will contaminate drinking water: (1) drinking water intakes are 20 to 30 feet below the water's surface; (2) the river has distinct channels, and it is difficult for a pollutant originating on one side of the river to cross these channels; and (3) at 180,000 to 200,000 cubic feet per second, the river flows so quickly that pollutants are flushed downstream before they affect drinking water.
In contrast, Michigan DEQ officials told us that Canadian drinking water facilities are more vulnerable to contamination from spills in the St. Clair River. These officials noted that Canadian drinking water facilities have shut down more often than Michigan facilities as a result of spills in the corridor. They noted that the most vulnerable Canadian drinking water facility is located on Walpole Island, directly downstream of Sarnia, and that it provides drinking water to members of a First Nation community.

Efforts are under way, or soon will be, to supplement the existing spill notification processes employed by the U.S. and Canadian Coast Guards and by Michigan and Ontario officials. Informal notification processes are already being employed along the corridor. For example, a local emergency management coordinator in the St. Clair River area of the corridor has developed an informal agreement with Canadian industry representatives under which they call and notify him directly of any spills into the St. Clair River. Upon receiving spill information, he provides it directly to drinking water facility operators along the portion of the corridor that borders the St. Clair River. Three drinking water facility operators listed him as their first source of spill information. In addition, Sarnia-Lambton Environmental Association (SLEA) officials told us that their member facilities contact Michigan drinking water facilities directly in the event of a spill. Several drinking water facility operators confirmed that they have received notification from members of this consortium of Canadian industries.

In addition, two monitoring systems are being developed by officials from counties bordering the corridor and by Michigan DEQ officials, who have obtained federal grants to install spill detection equipment in the St. Clair and Detroit Rivers. These systems are designed to provide spill information directly to drinking water facility operators with water monitoring equipment located near their intakes. One monitoring system, for the St. Clair River and Lake St. Clair, is funded by an EPA grant of $962,200 to Macomb and St. Clair Counties. The other monitoring system, for the Detroit River, is funded by a DHS grant of $760,000 to the Michigan DEQ. The officials involved in obtaining both grants told us they are coordinating their efforts so that the overall network of water quality monitors will be more seamless along the corridor. For example, they plan to purchase the same monitoring equipment so that maintenance can be shared. EPA and Michigan DEQ officials estimate that the monitoring systems will be in place in the St. Clair and Detroit Rivers no later than 2007 (see fig. 8). These systems are based on the Ohio River Valley Water Sanitation Commission's (ORSANCO) spill detection and notification system, established in 1978 to protect drinking water intakes from chemical contamination. For additional information on this system, see appendix VI.

EPA's spill prevention program addresses only oil spills, and EPA is uncertain as to which facilities are governed by its spill prevention requirements. EPA Region 5 conducted varying numbers of spill prevention-related inspections per year in the corridor during the time frame we reviewed, and these inspections uncovered significant spill prevention deficiencies. In contrast, the Coast Guard's spill prevention efforts include oil and hazardous substances.
The Coast Guard's District 9 inspections targeted a greater share of the facilities and vessels it regulates; however, the Coast Guard's inspections were multi-mission rather than focused exclusively on spill prevention. Its inspections revealed minor spill prevention-related issues. In response to spills and noncompliance issues, EPA and the Coast Guard issued a total of 16 penalties in the time period we reviewed.

While EPA has the authority to address spill prevention for both oil and hazardous substances, its program addresses only oil. In 1972, in amendments to the Clean Water Act, Congress called for regulations to prevent discharges of oil and hazardous substances; in 1974, EPA's SPCC program became effective. EPA's regulations require non-transportation-related facilities with specified oil storage capacities that, because of their location, could reasonably be expected to discharge oil into the navigable waters, to implement an SPCC plan that has been certified by a licensed engineer. These plans should identify the location and types of stored oil, discharge prevention measures, drainage controls, and methods of disposal. Facilities must also meet certain operational standards, which include having necessary containment structures or equipment; conducting periodic integrity tests of containers and leak tests of valves and piping; and training oil-handling personnel on equipment operation and maintenance, discharge procedure protocols, pollution control laws and rules, facility operations, and the contents of the facility's SPCC plan.

In the late 1970s, EPA proposed hazardous substance spill prevention regulations, but they were never finalized. EPA officials speculated that these regulations were not finalized because oil spills were more prevalent, because hazardous substance spills have shorter-term effects than oil spills, and because EPA focused on the NPDES program to control chronic pollutant discharges.

While EPA's spill prevention program targets oil spills, the Coast Guard's program addresses spill prevention for both oil and hazardous substances. The program applies to facilities or vessels that are capable of transferring oil or hazardous materials, in bulk, to or from vessels of certain minimum capacity. Facilities are required to develop an operations manual, employ qualified personnel, and meet equipment standards. The operations manual must contain a description of the facility layout, the location of important equipment and personnel, and a discussion of procedures for transfer operations and emergencies. The manual must also include a summary of applicable laws and information concerning personnel training and qualifications. In addition, each facility must have emergency shutdown capacity and specified discharge containment features. Vessels are required to have written transfer procedures for oil and hazardous substances, meet maintenance and equipment standards, and employ qualified personnel.

In addition to the Coast Guard's and EPA's prevention programs, the Michigan DEQ has a spill prevention program that is administered in conjunction with its NPDES program. This program requires that facilities that store or use oil or polluting substances, or those that may be deemed a hazard to waters of the state, create and implement spill prevention plans, inform the Michigan DEQ of each plan's completion, and make the plans available upon request.
Michigan DEQ's pollution prevention plans are to include a detailed facility plan, including floor drains and loading areas; a description of secondary storage containers; and a discussion of precipitation management. The plans are also to include spill control and cleanup procedures and are required to be reevaluated every 3 years (or whenever a material release occurs). If a facility is also subject to EPA's SPCC program, it may submit a combination spill prevention plan that meets both state and federal requirements. If the facility is subject only to the Michigan DEQ's spill prevention planning requirements, it is not required to have its plans certified by an engineer.

On the Canadian side of the St. Clair–Detroit River corridor, the Ontario Ministry of Environment did not have spill prevention regulations in place during the time frame we reviewed. Instead, the Ministry issued orders requiring individual companies to conduct spill prevention planning, or it made such planning a condition for companies seeking a Certificate of Approval, which is required before operating. Due in part to the large chemical spills in 2003 and 2004 originating from facilities in Sarnia, the Ontario Ministry of Environment introduced new legislation under its Environmental Protection Act that addresses spill prevention planning requirements.

EPA Region 5 does not know the universe of facilities that are subject to its spill prevention program requirements, and it conducts varying numbers of inspections of known facilities under its jurisdiction in the corridor. Facilities that must comply with SPCC regulations are not required to report to the agency, so EPA does not have an inventory of the facilities it regulates. The challenge this presents is not limited to the corridor, as EPA officials are uncertain as to how many facilities should comply with SPCC program requirements nationwide. In the corridor, EPA Region 5 has identified 59 facilities, out of an unknown larger number, that are required to meet SPCC requirements, either through special multi-media inspection initiatives or through referrals from Michigan DEQ.

While SPCC plans must be reevaluated and reviewed every 5 years, EPA's regulations do not specify an inspection frequency. EPA officials in Region 5, which encompasses the corridor, rely on roughly three SPCC inspectors to conduct all plan reviews and provide all compliance assistance for facilities in the six-state region. According to these officials, with current SPCC resource constraints, each facility could be inspected only about once every 500 years or more. From 1994 to 2004, EPA Region 5 inspected an average of 10 percent of the 59 known SPCC-regulated facilities in the corridor per year. (See fig. 9.) EPA Region 5 inspected a number of these SPCC-regulated facilities as part of several multi-media inspection efforts conducted by its Enforcement and Compliance Assistance Team, including the Detroit River and Flyway Enforcement and Compliance Assistance Initiative. This effort identified and inspected 28 facilities in the Detroit area for compliance with multiple EPA programs, including the SPCC program; some of the inspected facilities overlap with a portion of the facilities along the corridor.
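The coverage figures above imply long intervals between inspections of any given facility. A back-of-the-envelope calculation, using only numbers from the text (the variable names and the assumption of a constant inspection rate are ours), makes the point:

```python
# Coverage arithmetic for EPA Region 5's SPCC inspections in the corridor.
# Assumes a constant inspection rate; the figures are from the text.

known_facilities = 59    # SPCC-regulated facilities identified in the corridor
annual_coverage = 0.10   # about 10 percent inspected per year, 1994-2004

inspections_per_year = known_facilities * annual_coverage  # about 6
years_between_visits = 1 / annual_coverage                 # about 10

print(f"~{inspections_per_year:.0f} corridor inspections per year; each known "
      f"facility inspected roughly once every {years_between_visits:.0f} years")
# Region-wide, with roughly 3 inspectors for the six-state region, EPA
# officials put the implied revisit interval at 500 years or more.
```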
When SPCC program officials inspect a facility, they use a standardized approach that includes the following:
an in-depth review of the facility's SPCC plan;
an interview with the facility owner or operator;
a physical inspection of the facility;
a verification of equipment, containment structures, and buildings;
a review of facility inspections and training records;
a check of security and integrity; and
verification that the facility's SPCC plan has been certified by a licensed engineer.
While EPA has a separate program for spill prevention, the Coast Guard addresses spill prevention during its routine safety and security inspections of facilities and vessels. The Coast Guard's District 9 regulates over 100 facilities and 23 vessels stationed in the corridor, as well as vessels that travel through the corridor. It also regulates the transfer of oil and hazardous substances. The Coast Guard inspects facilities and vessels to a much greater extent per year than EPA; however, its inspections are multi-purpose rather than focused exclusively on spill prevention. The Coast Guard's annual facility inspections incorporate spill prevention components that are similar to EPA's SPCC inspection components, but its material transfer inspections and spot checks are not comparable to EPA's focused spill prevention inspections. From 1994 to 2004, the Coast Guard's District 9 inspected an average of 44 facilities, 135 vessels, and 30 material transfer events per year for safety, security, and pollution prevention requirements. When material transfer events are excluded, the Coast Guard inspected, on average, about 44 percent of the facilities in its jurisdiction per year, compared to EPA's inspections of roughly 10 percent of the known SPCC-regulated facilities. However, we are uncertain how many of the Coast Guard's yearly inspections were on-site inspections comparable to EPA's SPCC inspections, as opposed to spot checks or other multi-purpose inspections that the Coast Guard conducts. The Coast Guard conducts regular on-site inspections that consist of a check of maintenance and operation procedures, as well as a facility or vessel's spill prevention planning. For its annual facility inspections, Coast Guard officials review, among other items:
contents of operations manuals, including specifications for containment;
transfer equipment requirements, including an examination of transfer pipes for defects; and
facility operations, including whether the designated person in charge has certification of completion of required training.
While some inspections are conducted on-site, the Coast Guard also conducts remote examinations, such as viewing a transfer of materials from a distance using binoculars. The specific type and number of inspections conducted by the Coast Guard from 1994 through 2004 is shown in table 2. In addition to EPA and the Coast Guard, the Michigan DEQ inspects facilities for compliance with its spill prevention program during its regular NPDES program inspections. According to Michigan DEQ officials, their inspectors do not keep track of the number of spill prevention inspections conducted or deficiencies found, due to a lack of funding for the spill prevention program. Further, the universe of facilities regulated by its spill prevention program is unknown, although approximately 400 facilities in Michigan DEQ's Southeast District, a larger area that includes the U.S. side of the corridor, have submitted certified spill prevention plans.
On the Canadian side of the corridor, Ontario’s Ministry of Environment conducts inspections that include a spill prevention component. Ministry of Environment officials were able to provide inspection data from 2003 to 2005, which indicated that they inspected roughly 35 petrochemical and related facilities per year. The inspections conducted in 2004 and 2005 reflect the work that an “Environmental SWAT Team” conducted. The focus of this special initiative was on facilities with the potential for future spills that could pose risks to human health and the environment. The inspections included a comprehensive review of the facilities’ air emissions, water discharges, and spill prevention and contingency plans. EPA Region 5 officials told us that the spill prevention inspections they conducted from 1994 through 2004 disclosed significant and numerous deficiencies, such as failure to provide for secondary containment or failure to prepare spill prevention plans. For example, an SPCC inspector found that one company failed to prepare its SPCC plan within 6 months of beginning operations, and failed to implement its plan within 1 year. In addition, the facility never had its SPCC plan certified by a professional engineer. The SPCC inspector found that another facility had no additional containment around some bulk storage tanks, and it failed to amend its SPCC plan as required. In contrast, Coast Guard officials from District 9 told us that their inspections revealed that nearly all facilities and vessels were in compliance, and those that were not had only minor noncompliance issues related to spill prevention, such as incidental omissions in operations manuals. For example, Coast Guard officials found instances in which a facility operator initialed only one section of a required form, rather than at multiple sections. The Coast Guard officials also found other minor violations that related to aged hoses outside their service life and inadequate lighting. Michigan DEQ officials told us that their inspections revealed that some facilities do not have spill prevention plans, or did not certify compliance with them. In some cases, facilities that already had some secondary containment or protection in place needed further upgrades in order to come into compliance with the state’s spill prevention requirements. For example, during one inspection, Michigan DEQ found that a facility had developed a spill prevention plan, but it was not adequately managing its materials in order to prevent storm water from contacting the materials and discharging them into the waterways. On the Canadian side, Ontario’s Environmental SWAT Team found that 34 of the 35 facilities inspected in 2004 and 2005 were not in compliance with one or more legislative and regulatory requirements. Eight facilities did not have spill prevention and contingency plans. Other deficiencies found during inspections included: a Certificate of Approval was not obtained for operations; equipment, systems, processes, or structures were altered contrary to the existing Certificate of Approval; and chemicals were improperly handled, stored, and identified. EPA and the Coast Guard issued 16 penalties in response to spills, noncompliance with spill prevention programs, or for failure to report spills during the period we reviewed. EPA Region 5 issued four penalties, primarily for SPCC violations, that were assessed at an average of $39,000 each during the 11-year period. 
During the same time period, the Coast Guard’s District 9 issued 12 penalties for spills that were assessed at an average of $2,100 each. See figure 10 for total amounts of penalties assessed by EPA and the Coast Guard per year from 1994 through 2004. EPA Region 5 officials told us they rely primarily on assisting companies in coming into compliance with spill prevention program regulations, and they pursue enforcement actions and issue financial penalties when companies fail to respond to their assistance. They also explained that limited staff resources are available to pursue enforcement actions. EPA Region 5 has the equivalent of roughly one and a half full time staff persons devoted to spill-related enforcement duties conducted by the Office of Regional Counsel and the Oil Program. The Oil Program is responsible for determining noncompliance with the SPCC program, which entails establishing a history of spills or noncompliance; it is then responsible for determining penalty amounts. Determining noncompliance and identifying responsible parties for spills, however, can be problematic for EPA. While EPA Region 5 officials told us they use their spill data when pursuing an enforcement case, the agency, for most spills, does not confirm the validity of the spill data or gather additional information. Michigan DEQ responds to most reported spills in the corridor, but EPA Region 5 does not coordinate with Michigan DEQ to collect information for enforcement purposes. It is EPA’s policy to collect spill information directly because, according to EPA officials, it is preferable to have first-hand knowledge in the event that staff have to testify or provide a deposition for an enforcement case, among other reasons. To gather additional information, Oil Program officials stated that they send requests to facilities for spill-related information that they can then use in enforcement cases. If spill information is obtained, EPA Region 5 officials told us that their informal policy is not to pursue an enforcement case when the proposed penalty is less than $11,000 or the spill involves less than 100 gallons or two barrels of oil. They also stated that in most cases for which they issued a penalty, the amounts were ultimately reduced substantially through negotiations with the responsible party. For example, one facility was issued a financial penalty of approximately $320,000 for a spill prevention violation and the negotiated final payment was $25,000. Like EPA, the Coast Guard relies primarily on assisting companies in coming into compliance with spill prevention program regulations. The average financial penalties that were assessed by the Coast Guard’s District 9 were relatively low compared to the maximum financial penalty of $32,500 that it has the authority to issue for a spill violation. Coast Guard officials from District 9 told us that, in determining penalties, they take into account how much a facility or vessel owner has already paid for the cost of cleaning up a spill. In regard to why no penalties were assessed for spill prevention program violations, officials from the Coast Guard stated that they have the authority to order a facility or vessel to cease operations if it does not comply with their spill prevention program, and this serves as a strong deterrent to noncompliance. They added that large financial penalties have not been needed due to the cooperation of companies coming into compliance with their prevention regulations. 
Similar to the Coast Guard, Michigan DEQ did not issue penalties for noncompliance with its spill prevention program in the time frame we reviewed. Michigan DEQ did, however, issue four penalties, averaging $35,000, to responsible parties for spills in the corridor. Michigan DEQ also issued three penalties for multiple violations, including spills and industrial permit violations; these penalties totaled approximately $300,000. Lastly, in Ontario, the Ministry of Environment did not have the authority to issue administrative penalties for spills until 2005. Prior to that time, the Ministry of Environment pursued spill-related penalties through the provincial court system; from 2002 to 2004, four facilities were assessed penalties averaging approximately $171,000 in U.S. dollars after successful prosecutions. In addition, though EPA and Coast Guard officials acknowledge that spills are not always reported, the agencies did not penalize responsible parties for failure to report chemical releases in the time period we reviewed. We are, however, uncertain how many chemical releases occurred in the corridor due to data limitations. The authority provided under EPCRA allows EPA officials to penalize regulated industries for failure to report, in a timely fashion, spills of reportable quantities of hazardous or extremely hazardous substances. However, EPA Region 5's Chemical Preparedness officials who administer EPCRA told us that, to gather the information on reporting needed to pursue enforcement, they rely on the 30 to 40 information requests they send to companies per year for the entire six-state region. They stated that, with three staff for the region to enforce EPCRA and other regulations related to chemical releases, they lack the resources to inspect more than roughly 15 facilities per year to determine compliance with timely notification and other reporting requirements. They did not issue any penalties for failure to report chemical releases in the corridor in the time frame we reviewed. Superfund authorizes EPA and the Coast Guard to issue penalties for failure to report hazardous substance spills, but neither agency did so in the time frame we reviewed. The Clean Water Act, on the other hand, does not authorize civil penalties for a responsible party's failure to notify the NRC of an oil spill; only criminal sanctions are available. Michigan DEQ officials may make a criminal complaint or request that the state's Attorney General pursue civil action for failure to report spills; however, they did not do so in the period we reviewed. On the Canadian side of the corridor, Ontario's Environmental Protection Act provides Ministry of Environment officials with the authority to penalize responsible parties for failure to report spills, as this is a violation of the act. The Ministry did so recently: the company responsible for a large chemical spill into the St. Clair River in 2003 was charged with and convicted of failing to report the spill immediately. Spill-related penalties collected by EPA and the Coast Guard help supplement the funds that pay for response efforts when no responsible party is identified to pay costs. In these cases, EPA and the Coast Guard may obtain financing from the Oil Fund or Superfund to pay for their response efforts, including the cleanup.
But these funds are being depleted by cleanup efforts in the corridor and are not being replenished through the cost recovery process because, in many cases, the responsible parties have not been identified. Fund data maintained by the National Pollution Funds Center show that, from 1994 to 2004, approximately $8.4 million from the Oil Fund financed oil spill cleanups in the corridor, of which only $80,067 (less than 1 percent) was recovered to offset those expenditures, and approximately $17,000 from Superfund financed hazardous material spill cleanups, with no funds recovered to offset those expenditures. Spills of oil and hazardous substances into waters of the corridor continue to be a concern, and the agencies responsible for addressing this problem face challenges on several fronts, including obtaining accurate spill information, conducting spill notification, and mounting comprehensive prevention efforts. Officials from EPA Region 5 and the Coast Guard's District 9 concur that accurate spill information is not available and acknowledge that such information could be improved by better incorporating data, including final spill volume estimates, obtained through their spill response efforts. Coast Guard officials from District 9 also acknowledge that they could help update the NRC's spill information by documenting which duplicative NRC spill reports are linked to common incidents. Better documentation of response efforts and the results of spill investigations could assist EPA and the Coast Guard in targeting inspection and enforcement efforts to the highest-priority needs. Spill notification under the agreements between the United States and Canada, and between Michigan and Ontario, while limited, appears to be meeting its intended purpose. Effective spill prevention helps reduce contaminants flowing into waters of the corridor. EPA's spill prevention efforts are hampered by the facts that it does not know the universe of facilities regulated by its program, that the scope of its program is focused only on oil, and that limited resources are available for implementing the program, particularly for inspections. Given the focus and resource limitations that impair EPA's ability to pursue spill prevention and enforcement activities, EPA could collect information about the facilities it regulates in order to better define goals for the frequency and extensiveness of its inspections. To better ensure that spill data are available to target their inspection and enforcement efforts, and to improve the overall effectiveness of spill notification, we are recommending that the EPA Administrator direct EPA Region 5, and that the Secretary of Homeland Security direct the Commandant of the Coast Guard and the Commander of District 9, to take the following two actions:
maintain and update spill information to include the results of investigations, and explore the feasibility of updating spill information maintained by the NRC; and
determine whether existing spill notification processes can be improved or modified to provide reduced and consistent notification time frames.
In addition, to better utilize spill prevention resources, we recommend that the EPA Administrator consider gathering information on which facilities are regulated under its spill prevention program. We also recommend that the EPA Administrator direct Region 5 to develop goals for the frequency and extensiveness of its inspections. GAO provided a draft of this report to EPA and DHS for review and comment.
DHS provided comments on the draft report and generally agreed with our findings and conclusions, but it did not address our recommendations. EPA provided only technical comments regarding the report, including comments on the feasibility of our recommendations regarding gathering information on SPCC-regulated facilities and updating spill information maintained by the NRC. While DHS generally agreed with our findings and conclusions, the agency commented on our observation that the Coast Guard does not update spill volume estimates in its automated spill data system. Specifically, the agency cited an example used in our report and noted that it was the result of unusual circumstances that arose during the transition from one data system to another. In addition, DHS noted that Coast Guard investigators do have the ability to update spill volume estimates in investigative report narratives. We acknowledge that the Coast Guard transitioned from one data system to another in the time frame we reviewed and that investigators can update volumes in the report narratives. However, Coast Guard officials told us that initial volume estimates cannot be changed in the volume data field of the current system, and it is difficult to readily assess the magnitude of spills based on the initial volume estimates contained in the automated spill data. Furthermore, Coast Guard officials told us that they would benefit from an additional field in their spill data system that incorporated final spill volume estimates. In addition, DHS commented that our report should consider additional factors regarding agency efforts to penalize responsible parties for failure to report spills. DHS acknowledged that the responsible parties for many spills are not identified and, while penalties were not assessed for failure to report spills in the corridor in the time frame we reviewed, DHS stated that the Coast Guard, with the Department of Justice, has successfully prosecuted responsible parties for spills outside the corridor. The full text of DHS's comments is included in appendix VII. EPA provided the following three comments on our report. First, EPA stated that it does respond to every spill, whether directly or indirectly, in the same way that the Coast Guard responds. However, EPA could not provide documentation of its response efforts, whereas the Coast Guard provided documentation that indicated what actions were taken in response to each spill. In addition, EPA stated that its spill responders rely heavily on Michigan DEQ to respond to spills and coordinate response actions with the Coast Guard. EPA also said that it responded directly to more than 1 percent of spills, as opposed to less than 1 percent, as previously stated by EPA officials. Nevertheless, the level of EPA's response is unclear due to the lack of documentation. Second, EPA commented on our recommendation that it update spill information maintained by the NRC. We acknowledge that EPA does not modify spill data maintained by the NRC; however, our recommendation was that it explore the feasibility of updating spill information maintained by the NRC by informing the NRC of duplicate spill reports.
While EPA maintains that information on spills can now be updated using a new system called Web EOC, and our report acknowledges that Web EOC is a method for updating spill data electronically, the extent of its use is uncertain, according to EPA officials. Finally, EPA commented on the feasibility of our recommendation that it gather information on facilities that are covered under its spill prevention program. EPA stated that there is no authority in the Clean Water Act or the prevention regulations to require facilities to provide this information to EPA. It further stated that under the Paperwork Reduction Act the agency would need to seek approval from the Office of Management and Budget. However, EPA has previously identified SPCC facilities in the corridor. If the agency determines that formal rulemaking is necessary for it to gather information on which facilities are covered under its spill prevention program, then we believe it should consider undertaking such a rulemaking. EPA officials also provided specific technical comments and clarifications on the draft report that we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate Congressional Committees, the EPA Administrator, the Secretary of Homeland Security, and various other federal and state agencies. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. We were asked to examine (1) how many oil and hazardous substance spills of more than 50 gallons (or of an unknown volume) were reported in the St. Clair–Detroit River corridor from 1994 to 2004, and how accurately reported spills reflect the extent of actual spills; (2) what processes are used to notify parties of spills, and whether they contain explicit requirements for reporting times and spill magnitude; and (3) the extent of the Environmental Protection Agency's (EPA) and the Coast Guard's spill prevention efforts and enforcement activities in the St. Clair–Detroit River corridor from 1994 through 2004. To determine the number of oil and hazardous substance spills of more than 50 gallons (or of an unknown volume) reported in the St. Clair–Detroit River corridor from 1994 to 2004, and to what extent they represent actual spills, we obtained information on spills with those characteristics reported in the St. Clair River, Lake St. Clair, the Detroit River, and a highly industrialized tributary, the Rouge River. We obtained data sets with these attributes from the National Response Center (NRC), EPA Region 5, Coast Guard Headquarters, Michigan's Department of Environmental Quality (Michigan DEQ), and the Spills Action Centre (SAC) operated by the Ontario Ministry of Environment. To assess the reliability of each data set, we interviewed knowledgeable officials about the data and the systems that produced them and manually reviewed the data. Limitations of the data are discussed in the report and in appendix II.
When appropriate, we analyzed the data sets individually to determine spill frequency over time and spill characteristics, such as volume and the type of material spilled. We were not able to combine the spill data sets for analysis because each entity tracks spills differently, and we were limited in what we could conclude from the individual data sets because the degree to which they are updated varies widely. We also obtained EPA and Michigan DEQ data sets related to other pollutant discharges, such as combined sewer overflows (CSO) and industrial discharge permit violations, to provide context for the spill data and obtain more complete information on pollutants discharged into the water bodies of the St. Clair–Detroit River corridor. These data are likely subject to the same limitations as the spill data, in that industrial permit violations and CSOs are self-reported and facilities may be reluctant to report these events; however, spills may be particularly subject to underreporting because they are not part of a structured program as CSOs and industrial permit violations are. To assess what processes are used to notify parties of spills and whether they contain explicit requirements for reporting times and spill magnitude, we reviewed applicable laws and spill notification agreements and obtained information on the implementation of these agreements from EPA, the Coast Guard, Michigan DEQ, the Michigan State Police, and Canadian officials. In addition, we obtained and analyzed documentation on six spills to better understand how the notification process was conducted in specific incidents. We selected spills from the data sets to illustrate the implementation of notification practices under various scenarios, including spills with source locations in both the United States and Canada and spills of differing materials and volumes. We contacted the 17 drinking water facility operators on the U.S. side of the corridor to obtain their perspectives on the timeliness of spill notification. We further obtained information on the automated monitoring system maintained by the Sarnia-Lambton Environmental Association, planned automated monitoring on the U.S. side of the corridor, and the monitoring conducted by the Ohio River Valley Water Sanitation Commission. To determine the extent of EPA's and the Coast Guard's spill prevention efforts and enforcement activities in the St. Clair–Detroit River corridor from 1994 through 2004, we first obtained and analyzed laws, regulations, and agency policies regarding spill prevention and enforcement, including information on potential enforcement penalty dollar amounts. We also obtained data from EPA, the Coast Guard, Michigan DEQ, and the Ontario Ministry of Environment on spill-related enforcement actions taken in the corridor since 1994. We analyzed the information to determine the number of inspections conducted, the types of violations found, and the penalties assessed for each documented violation. Finally, we obtained information from the various agencies on the resources devoted to inspections and enforcement, how those resources were used, and the priorities governing their use. We performed our work from September 2005 to June 2006 in accordance with generally accepted government auditing standards. Spill data sets were available from four sources: EPA, the Coast Guard, Michigan DEQ, and the Ontario SAC.
Each data set is unique; however, some spill incidents appear in multiple data sets, and therefore the data sets cannot be combined or consolidated. The relative quality of each data set depends in part on whether it is updated after additional information is obtained from spill investigations or whether, as with the EPA spill data set, minimal updates are made. Generally, all of the spill data sets share a common reliability limitation stemming from uncertainty about whether all incidents are reported. Of note, the data sets for EPA and the Ontario SAC contained a large number of incidents with unknown volumes. EPA's spill data set is not routinely updated after EPA responders conduct investigations. Therefore, the data reflect preliminary information about spills received from the NRC, and the data likely do not represent the actual number and nature of spills. We are presenting these data for informational purposes only. The data set contained a total of 916 spill incidents that occurred in the St. Clair River, Lake St. Clair, Detroit River, and Rouge River from 1994 through 2004 and that had volumes of greater than 50 gallons (or of an unknown volume). About 45 percent of the spills were oil-related. The number of spills has varied over time, showing neither an increasing nor a decreasing trend. The EPA data showed that the greatest number of spills occurred in 1994. Coast Guard officials update spill data after investigations are conducted, thereby strengthening the reliability of their spill data. However, they are unable to update preliminary volume estimates, and therefore these volume data are likely unreliable. There are 51 spill incidents in the Coast Guard data set, and the majority of spills, roughly 70 percent, were oil-related. The Coast Guard's spill data set indicates that 11 spills were traced back to storm or sanitary sewer outfalls. In four of these instances, narratives completed by spill responders indicate that sewage was mixed with other spill materials. The Coast Guard's data show that the greatest number of spills (26 of the 51) occurred in the Detroit River. Most of the oil spills investigated by the Coast Guard were in the Detroit and Rouge Rivers. Similarly, most of the chemical spills that the Coast Guard investigated were in the Detroit River, while most of the gasoline spills were in Lake St. Clair. Michigan DEQ officials update their spill data after investigations are conducted, but some data fields (e.g., quantity of material released) are not completed because the information is unknown. There are 21 spill incidents in the Michigan DEQ spill data set that occurred in the St. Clair River, Lake St. Clair, Detroit River, and Rouge River from 1996 through 2004 and that have volumes of greater than 50 gallons (or of an unknown volume). Michigan DEQ did not provide data on spills prior to 1996 because that is when it began collecting spill data electronically. Ontario Ministry of Environment officials update spill data to reflect additional information obtained. However, not all data fields are completed because information such as spill quantities and materials is not always known. There are a total of 157 spill incidents in the SAC data that occurred in the St. Clair River (105), Lake St. Clair (5), and the Detroit River (47) between 1994 and 2004 and that have volumes greater than 50 gallons (or of an unknown volume). About 9 percent of the 157 have unknown responsible parties, and 127 of the 157 have unknown volumes or masses.
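The selection criterion applied to each of these data sets can be expressed compactly in code. The sketch below is illustrative only: it assumes a simplified record layout with hypothetical field names, while each agency's actual data structure differs.

    # A minimal sketch of the report's selection criterion: keep spill
    # incidents with a reported volume greater than 50 gallons, or with
    # no volume reported at all (unknown volume).
    from typing import Optional

    def meets_criterion(volume_gallons: Optional[float]) -> bool:
        """True for spills over 50 gallons or of unknown volume."""
        return volume_gallons is None or volume_gallons > 50

    records = [  # hypothetical records; field names are illustrative
        {"source": "NRC", "material": "oil", "volume_gallons": 120.0},
        {"source": "NRC", "material": "vinyl chloride", "volume_gallons": None},
        {"source": "SAC", "material": "oily water", "volume_gallons": 30.0},
    ]

    selected = [r for r in records if meets_criterion(r["volume_gallons"])]
    unknown = sum(1 for r in selected if r["volume_gallons"] is None)
    print(f"{len(selected)} of {len(records)} records meet the criterion; "
          f"{unknown} have unknown volumes")

Treating unknown volumes as in scope is what drives the large unknown-volume counts noted above, since excluding them would silently drop potentially significant spills.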
Michigan DEQ’s CSO data were available as of 1999, when Michigan DEQ began tracking sewer overflows. The CSO data, like spill data, have a data- reliability limitation relating to uncertainty as to whether all CSO events are reported; however, spills may be particularly subject to underreporting because they are not part of a structured program as CSOs and industrial permit violations are. CSO data provide additional information in terms of the amount and location of pollutant discharges into the waters of the corridor. According to EPA, CSOs contain storm water, untreated human and industrial waste, toxic materials, and debris. The roughly 1,400 CSOs that were greater than 50 gallons (or of an unknown volume) greatly exceeded the number of spills that met these criteria during the 6-year period. The largest category of CSOs was of diluted raw sewage. The Rouge and Detroit Rivers received most of the CSOs, with 1,296 incidents. CSOs accounted for over 900,000 million gallons of partially treated sewage discharged into waters of the corridor. The National Pollutant Discharge Elimination System (NPDES) requires industrial and municipal facilities to obtain permits to discharge pollutants into U.S. waters. Such permits establish required effluent limitations or best management practices. The industrial effluent violation data we obtained from EPA rely upon self-reporting by industries, and therefore the data have the same data reliability limitation as spills and CSO data in terms of uncertainty about whether all events are reported. In addition, volumes are not commonly reported with effluent discharge violations as toxicity is a greater concern—and therefore volume data are limited. However, the data provide additional information on pollutant discharges in the corridor. From 1994 through 2004, there were a total of 2,257 NPDES industrial effluent violations in the St. Clair River, Lake St. Clair, Detroit River, and Rouge River. Of these violations, 1,871 (or about 83 percent) of the total had volumes of greater than 50 gallons (or of an unknown volume). The two largest NPDES discharge violations, in terms of volume, related to oil and grease—and these were discharged by the same facility in 1994 only a few months apart. The most frequently discharged materials were solid pollutants, pH-altering materials, oil and grease, and materials that had the potential to alter oxygen availability in the receiving waters. Solid pollutants include pollutants found in wastewater that were not removed during the treatment process and can cause toxic conditions or contaminate sediment. From 1994 through 2004 the volume of discharged materials was available for 204 of the 1,871 permit violations. For the remaining 1,667 (or 89 percent) of the violations, the volume was not available. Over 50 percent of the materials discharged by industries in violation of their permits were solid pollutants, oil and grease, zinc, or materials that alter the pH or oxygen available in the receiving waters into which they were discharged. Over 52 percent of the NPDES violations occurred at 12 facilities, and 1 facility had 176 violations during the 11-year time frame. From May 16 at 1:00 p.m. to May 17 at 1:00 p.m., two spills of approximately 14–15 million gallons of storm water mixed with ethylene glycol and propylene glycol (deicing agents) were discharged into a storm sewer leading to the Detroit River. 
The responsible party claimed that the release was due to a blockage in a 10-inch pipe running from a holding pond containing the material to the sanitary sewer system. Initially, EPA estimated that 10,000 fish were killed due to depletion of dissolved oxygen in the waterway. On May 18, the NRC received a spill report from an observer who saw fish dying. The NRC reported the spill to EPA, the Coast Guard, and Michigan DEQ shortly after 6:00 p.m. At 6:53 p.m., EPA contacted the Michigan Pollution Emergency Alerting System (PEAS) hotline. At 8:00 p.m., the PEAS operator contacted Michigan DEQ Water Bureau staff. At 9:00 p.m., the EPA on-scene coordinator notified the Michigan DEQ and the Coast Guard. When the Michigan DEQ spill responder did not arrive on scene, the PEAS operator called the Michigan DEQ District Supervisor at 11:30 p.m. On May 19 at 12:15 a.m., the Michigan DEQ District Supervisor contacted a spill responder, saying that EPA had been on the scene and was requesting Michigan DEQ representation. At 8:00 a.m. on May 19, DEQ staff arrived at the scene. At 10:30 a.m., two Coast Guard responders arrived at the scene. At 5:00 p.m. on May 20, tanker trucks began flushing out an isolated section of the affected sewer drain with clean water, and a pump was installed to pump water to the nearby wastewater treatment facility; this lasted until 10:00 a.m. on May 21. The responsible party notified Michigan DEQ of the May 16 and 17 discharges on May 22. The facility responsible for these discharges has an industrial NPDES permit. Michigan DEQ had agreed to accept best management practices instead of numeric pollutant limits for the summer discharges from this facility, so from May through September the facility's permit had no limitations on oxygen-depleting materials. Michigan DEQ's understanding was that all discharges containing high amounts of oxygen-depleting materials would be directed to the sanitary sewer for further treatment at the wastewater treatment facility. On August 14, at approximately 4:45 p.m., 34 gallons of vinyl chloride monomer were discharged into the St. Clair River. The spill lasted almost 12 hours. On the following day, another 5 gallons of this substance were discharged into the river. The cause of the spill was a cracked tube in a cooling water system heat exchanger. The responsible party did not report the spill to the Ontario SAC until August 19 because an electrical blackout had rendered monitoring equipment inoperable. Ontario Ministry of Environment staff implemented procedures to warn downstream intakes and take samples. All samples at Canadian reservoirs came back negative. In addition, the Ministry of Environment ran models to determine potential impacts. Ministry of Environment officials did not issue an advisory, but the Chatham Health Unit did issue a bottled-water advisory for Wallaceburg municipal supply consumers. Models run by the Ministry of Environment showed that vinyl chloride levels would be below the drinking water standard (2 parts per billion). There are 12 intakes serving Michigan public water systems in the St. Clair watershed between Port Huron and Detroit. Michigan DEQ scientists reviewed the incident and determined that the amount of vinyl chloride lost, based on a spill of 650 lbs., would not have resulted in concentrations at Michigan drinking water plant intakes exceeding the maximum contaminant level and that no human health risks resulted from the event.
No sampling of Michigan drinking water plant intakes was conducted upon notification of the incident because the data collected would not have been useful, given the rapid flow rate of the river at the time of the event. On February 1, from 3:00 to 4:20 a.m., an estimated 39,626 gallons of methyl ethyl ketone and methyl isobutyl ketone were discharged into the St. Clair River. At 5:31 a.m., the responsible party reported to the Ontario SAC that it had identified a leaking heat exchanger at the lube plant, which resulted in contamination of its cooling water. At 6:40 a.m., SAC staff briefed the Michigan State Police on the incident. At 7:22 a.m., the Michigan DEQ's Pollution Emergency Communications Coordinator contacted the relevant Michigan DEQ staff. Michigan DEQ staff contacted the SAC for more information at 7:45 a.m. Michigan DEQ then notified Michigan drinking water facilities between 8:00 and 9:00 a.m. After 11:00 a.m., the Michigan DEQ decided to recommend that drinking water facilities shut their intakes. Drinking water facilities in Port Huron, Marysville, St. Clair, East China Township, Marine City, Algonac, Ira Township, New Baltimore, Mt. Clemens, Grosse Pointe Farms, Highland Park, and Wyandotte were advised of the situation, and Michigan DEQ asked all plants except Port Huron, the Detroit plants, and Wyandotte to shut down. The spill caused more than a dozen water plants on both sides of the river to close their intakes. About 36,000 customers in the St. Clair and Macomb County communities of Marysville, St. Clair, East China Township, Marine City, Algonac, and Ira Township were adversely affected by the intake closures. On May 23, at 4:10 a.m. and 6:00 a.m., unknown quantities of oily water were discharged into the St. Clair River after heavy rains caused three oil separators to overflow. At 6:05 a.m., the responsible party reported the spills to the Ontario SAC. The responsible party began sampling and told Ontario officials that there were no visible signs of oil or contaminants. At 7:40 a.m., the Ontario SAC notified Michigan officials through the PEAS hotline. From 8:30 to 9:30 a.m., a Michigan DEQ official notified Michigan drinking water facilities. The Ohio River Valley Water Sanitation Commission (ORSANCO) was established in 1948 to control and abate pollution in the Ohio River Basin. ORSANCO is an interstate commission representing eight states and the federal government. The member states are Illinois, Indiana, Kentucky, New York, Ohio, Pennsylvania, Virginia, and West Virginia. ORSANCO has programs to improve water quality in the Ohio River and its tributaries. Its tasks include setting wastewater discharge standards, performing biological assessments, monitoring the chemical and physical properties of the waterways, and conducting special surveys and studies. In addition, ORSANCO coordinates emergency response activities for spills or accidental discharges to the river and coordinates public participation in its programs. In 1977, an unreported discharge of hazardous chemicals contaminated drinking water facilities along the Ohio River. Due to the lack of a coordinated monitoring system, misinformation was distributed to the public, causing concern for the safety of the drinking water. This incident demonstrated the vulnerability of the Ohio River water intakes to spills and led to the development of the Organics Detection System.
ORSANCO, in conjunction with drinking water utilities, identified strategic locations along the river where monitoring for chemicals would be most beneficial and protective of drinking water intakes. ORSANCO suggested that water facilities located at strategic points along the river could perform routine monitoring for oil and hazardous chemical discharges. ORSANCO proposed that it serve as technical coordinator and information clearinghouse, providing statewide communications in the event of a spill. Currently, ORSANCO maintains an inventory of water intakes, wastewater discharges, and material transfers on the Ohio River. In addition, a time-of-travel model is used to estimate the arrival time of contaminant discharges during spill events; the results of the model have been used to select the locations of the Organics Detection System. The Organics Detection System was established in 1978, and participants include 11 water utilities, one chemical manufacturer, and one power generating facility. Data from each facility are to be downloaded for review and evaluation on a weekly basis. Each instrument can detect and quantify 22 organic compounds. The list of compounds represents the organic chemicals of greatest concern to water utilities and those most likely to be detected, based on an inventory of chemicals stored, transported, and manufactured along the Ohio River. Facility operators are required to notify ORSANCO when a compound is detected above a specified threshold or when an unidentified compound is detected. When this occurs, plant operating personnel are notified of the contaminant so that treatment techniques to remove the compound can be implemented. ORSANCO also notifies downstream water utilities and state and federal water quality and emergency response agencies, including the NRC. In addition to the individual named above, Kevin Bray, John Delicath, Michele Fejfar, Jill Roth Edelson, Katheryn Summers Hubbell, Jamie Meuwissen, and John Wanska made key contributions to this report.
Spills of oil and hazardous substances in the St. Clair-Detroit River corridor have degraded this border area between the United States and Canada and are a potential threat to local drinking water supplies. Within the United States, such spills are reported to the National Response Center (NRC); in Canada, they are reported to the Ontario Spills Action Centre. This report discusses (1) how many oil and hazardous substance spills greater than 50 gallons (or of an unknown volume) were reported in the corridor from 1994 to 2004, and how accurately reported spills reflect the extent of actual spills; (2) what processes are used to notify parties of spills, and whether they contain explicit requirements for reporting times and spill magnitude; and (3) the extent of the Environmental Protection Agency's (EPA) and the Coast Guard's spill prevention efforts and enforcement activities in the corridor from 1994 to 2004. The NRC received 991 spill reports and the Ontario Spills Action Centre received 157 reports of spills in the corridor from 1994 through 2004, but these reports do not accurately portray the actual number or volume of spills. Many spills go unreported by responsible parties because they do not understand, or fail to comply with, reporting requirements. Further, multiple reports for the same spill are often recorded by the NRC and provided to EPA and the Coast Guard for investigation. EPA does not remove all duplicate spill reports or update its data after investigating spills. Coast Guard officials update their spill data after investigations, but they are unable to update spill volume estimates due to automated system limitations. GAO also found that, according to agency data sets, other events, namely combined sewer overflows (CSOs) and industrial permit violations, occurred more frequently than spills in the corridor. While data on industrial permit violations and CSOs might be subject to the same limitations as the spill data, because the data are self-reported and facilities may not report all of these events, spills may be particularly subject to underreporting because they are not part of a structured program as CSOs and industrial permit violations are. Multiple parties are involved in spill notification in the corridor, and the agreements outlining U.S.-Canadian notification processes are not explicit about reporting times or the magnitude of spills that warrant notification. The coast guards of the two countries have agreed to notify one another of spills primarily when a joint response may be needed. Another agreement, between Michigan and Ontario officials, calls for notifying each other of spills that may have a joint impact. We reviewed six selected spill incidents that illustrate the various ways that notification can occur. The drinking water facility operators we contacted on the U.S. side of the corridor had differing perspectives on current notification processes, and the majority expressed concern that their facilities could be contaminated by spills if they are not notified in a timely manner. Finally, efforts have been made to develop informal notification processes between individual industries or trade associations and drinking water facilities. EPA's spill prevention program is limited, and the Coast Guard addresses spill prevention as part of other compliance efforts. EPA's prevention program addresses only oil spills. Further, EPA is uncertain which specific facilities are subject to regulation under its spill prevention program, and it conducts varying numbers of inspections per year.
EPA inspections uncovered significant spill prevention deficiencies, whereas the Coast Guard's inspections revealed minor issues. The agencies issued a total of 16 penalties for spills and program noncompliance during the period we reviewed.
SSI provides financial assistance to people who are age 65 or older, blind, or disabled and who have limited income and resources. The program provides individuals with monthly cash payments to meet basic needs for food, clothing, and shelter. Last year, about 6.8 million recipients were paid about $33 billion in SSI benefits. During the application process, SSA relies on state Disability Determination Services to make the initial medical determination of eligibility, while SSA field offices are responsible for determining whether applicants meet the program's nonmedical (age and financial) eligibility requirements. To receive SSI benefits in 2002, individuals may not have income greater than $545 per month ($817 for a couple) or resources worth more than $2,000 ($3,000 for a couple). When applying for SSI, individuals are required to report any information that may affect their eligibility for benefits. Similarly, once individuals receive SSI benefits, they are required to report events such as changes in income, resources, marital status, or living arrangements to SSA field office staff in a timely manner. A recipient's living arrangement can also affect monthly benefits. Generally, individuals who rent, own their home, or pay their share of household expenses if they live with other persons receive a higher monthly benefit than those who live in the household of another person and receive food and shelter assistance. To a significant extent, SSA depends on program applicants and recipients to accurately report important eligibility information. However, to verify this information, SSA uses computer matches to compare SSI records against recipient information contained in the records of third parties, such as other federal and state government agencies. To determine whether recipients remain financially eligible for SSI benefits after the initial assessment, SSA also periodically conducts redetermination reviews to verify eligibility factors such as income, resources, and living arrangements. Recipients are reviewed at least every 6 years, but reviews may be more frequent if SSA determines that changes in eligibility are likely. Since its inception, the SSI program has been difficult and costly to administer because even small changes in monthly income, available resources, or living arrangements can affect benefit amounts and eligibility. Complicated policies and procedures determine how to treat the various types of income, resources, and in-kind support and maintenance that a recipient receives. SSA must constantly monitor these situations to ensure benefit amounts are paid accurately. On the basis of our work, which spans more than a decade, we designated SSI a high-risk program in 1997 and initiated work to document the underlying causes of longstanding SSI program problems and the impact these problems have had on program performance and integrity. In 1998, we reported on a variety of management problems related to the deterrence, detection, and recovery of SSI overpayments. Over the last several years, we also testified about SSA's progress in addressing these issues (see app. I). Since 1998, SSA has demonstrated a stronger management commitment to SSI program integrity issues. SSA has also expanded the use of independent data to verify eligibility factors and enhanced its ability to detect payment errors. Today, SSA has far better capability to accurately verify program eligibility and detect payment errors than it did several years ago.
However, weaknesses remain in its debt prevention and deterrence processes. SSA has made limited progress toward simplifying the complex program rules that contribute to payment errors and is not fully utilizing several overpayment prevention tools, such as penalties and the suspension of benefits for recipients who fail to report eligibility information as required. Since our 1998 report, SSA has taken a variety of actions that demonstrate a fundamental change in its management approach and a much stronger commitment to improved program integrity. First, SSA issued a report in 1998 that outlined its strategy for strengthening its SSI stewardship role. This report highlighted specific planned initiatives to improve program integrity and included time frames for implementation. In addition to developing a written SSI program integrity strategy, SSA submitted proposals to Congress requesting new authorities and tools to implement its strategy. In December 1999, Congress provided SSA with several of the newly requested tools in the Foster Care Independence Act of 1999. The act gave SSA new authorities to deter fraudulent or abusive actions, better detect changes in recipient income and financial resources, and improve its ability to recover overpayments. Of particular note is a provision in the act that strengthened SSA's authority to obtain applicant resource information from banks and other financial institutions. SSA's data show that unreported financial resources, such as bank accounts, are the second largest source of SSI overpayments. SSA also sought and received separate legislative authority to penalize persons who misrepresent material facts essential to determining benefit eligibility and payment amounts. SSA can now impose a period of benefit ineligibility ranging from 6 to 24 months on individuals who knowingly misrepresent facts. SSA also made improved program integrity one of its five agency strategic goals and established specific objectives and performance indicators to track its progress toward meeting this goal. For example, the agency began requiring its field offices to complete 99 percent of their assigned redetermination reviews and other cases in which computer matching identified a potential overpayment situation due to unreported wages, changes in living arrangements, or other factors. During our review, most field staff and managers we interviewed told us that SSA's efforts to establish more aggressive goals and monitor performance toward completing these reviews were a clear indication of the enhanced priority it now places on ensuring timely investigation of potential SSI overpayments. To further increase staff attention to program integrity issues, SSA also revised its work measurement system, used for estimating resource needs, gauging productivity, and justifying staffing levels, to include staff time spent developing information for referrals to its Office of Inspector General (OIG). In prior work, we reported that SSA's own studies showed that its employees felt pressured to spend most of their time on "countable" workloads, such as quickly processing and paying claims, rather than on developing fraud referrals, for which they received no credit. Consistent with this new emphasis, the OIG also increased the level of resources and staff devoted to investigating SSI fraud and abuse; key among the OIG's efforts is the formation of Cooperative Disability Investigation (CDI) teams in 13 field locations.
These teams consist of OIG investigators, SSA staff, state or local law enforcement officers, and state DDS staff who investigate suspicious medical claims through surveillance and other techniques. A key focus of the CDI initiative is detecting fraud and abuse earlier in the disability determination process to prevent overpayments from occurring. The OIG reported that the teams saved almost $53 million in improper benefit payments in fiscal year 2001 by providing information that led to the denial of claims or the cessation of benefits. Finally, in a June 2002 corrective action plan, SSA reaffirmed its commitment to taking actions to facilitate the removal of the SSI program from our high-risk list. This document described SSA's progress in addressing many of the program integrity vulnerabilities we identified and detailed management's SSI program priorities through 2005. To ensure effective implementation of this plan, SSA has assigned senior managers responsibility for overseeing key initiatives, such as piloting new quality assurance systems. The report also highlighted several other program integrity initiatives under consideration by SSA, including plans to test whether touchtone telephone technology can improve the reporting of wages, whether credit bureau data can be used to detect underreported income, and whether public databases can help staff identify unreported resources such as automobiles and real property. To assist field staff in verifying the identity of recipients, SSA is also exploring the feasibility of requiring new SSI claimants to be photographed as a condition of receiving benefits. In prior work, we noted that SSA's processes and procedures for verifying recipients' income, resources, and living arrangements were often untimely and incomplete. In response to our recommendations, SSA has taken numerous actions to verify recipient-reported information and better detect and prevent SSI payment errors. SSA has made several automation improvements to help field managers and staff better control overpayments. For example, last year the agency distributed software nationwide that automatically scans multiple internal and external databases containing recipient financial and employment information and identifies potential changes in income and resources. The system then generates a consolidated report for staff to use when interviewing recipients. SSA also made systems enhancements to better identify newly entitled recipients with uncollected overpayments from a prior coverage period. Previously, each time an individual came on and off the rolls over a period of years, staff had to search prior SSA records and make system inputs to bring forward any outstanding overpayments to current records. The process of detecting overpayments from a prior eligibility period and updating recipient records now occurs automatically. SSA's data show that, since this tool was implemented in 1999, the monthly amount of outstanding overpayments transferred to current records has increased on average by nearly 200 percent, from $12.9 million to more than $36 million per month. Thus, a substantial amount of outstanding overpayments that SSA might not have detected under prior processes is now subject to collection action. Nearly all SSA staff and managers we interviewed told us that systems enhancements have improved SSA's ability to control overpayments. In commenting on this report, SSA said that it will soon implement another systems enhancement to improve its overpayment processes.
SSA will automatically net any overpayments against underpayments that exist on a recipient's record before taking any recovery or reimbursement actions. Presently, netting requires SSA employees to record a series of transactions, and many opportunities to recover overpayments by netting them against existing underpayments are lost. SSA estimates that automating the netting process will reduce overpayments by up to $60 million each year, with a corresponding reduction in underpayments paid to beneficiaries. (A simplified sketch of this netting computation appears at the end of this discussion.) In addition to systems and software upgrades, SSA now uses more timely and comprehensive data to identify information that can affect SSI eligibility and benefit amounts. For example, in accordance with our prior recommendation, SSA obtained access to the Office of Child Support Enforcement's National Directory of New Hires (NDNH), which is a comprehensive source of unemployment insurance, wage, and new hires data for the nation. In January 2001, SSA began providing field offices with direct access to NDNH and required its use to verify applicant eligibility during the initial claims process. With NDNH, SSA field staff now have access to more comprehensive and timely employment and wage information essential to verifying factors affecting SSI eligibility. More timely employment and wage information is particularly important, considering that SSA studies show that unreported compensation accounts for about 25 percent of annual SSI overpayments. SSA has estimated that use of NDNH will result in about $200 million in overpayment preventions and recoveries per year. Beyond obtaining more effective eligibility verification tools such as NDNH, SSA has also enhanced existing computer data matches to verify financial eligibility. For example, SSA increased the frequency (from annually to semiannually) with which it matches SSI recipient Social Security numbers (SSN) against its master earnings record, which contains information on the earnings of all Social Security-covered workers. In 2001, SSA flagged over 206,000 cases for investigation of unreported earnings, a threefold increase over 1997 levels. To better detect individuals receiving unemployment insurance benefits, quarterly matches against state unemployment insurance databases have replaced annual matches. Accordingly, the number of unemployment insurance detections has increased from 10,400 in 1997 to over 19,000 last year. SSA's ability to detect nursing home admissions, which can affect SSI eligibility, has also improved. In 1997, we reported that SSA's database for identifying SSI recipients residing in nursing homes was incomplete and its verification processes were untimely, resulting in substantial overpayments. At the time, this database included only 28 states, and data matches were conducted annually. SSA now conducts monthly matches with all states, and the number of overpayment detections related to nursing home admissions has increased substantially, from 2,700 in 1997 to 75,000 in 2001. SSA's ability to detect recipients residing in prisons has also improved. Over the past several years, SSA has established agreements with prisons that house 99 percent of the inmate population, and last year SSA reported suspending benefits to about 54,000 prisoners. Recipients are ineligible for benefits for any month throughout which they are in prison.
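To make the netting computation concrete, the following is a minimal illustrative sketch in Python of the enhancement described at the opening of this discussion. The record structure and function name are our own hypothetical constructs, not SSA's actual system logic.

def net_record(overpayments, underpayments):
    # Illustrative only: offset outstanding overpayments against
    # underpayments on the same recipient record before any recovery
    # or reimbursement action is taken. Amounts are in dollars.
    # Hypothetical sketch, not SSA's published system logic.
    total_over = sum(overpayments)
    total_under = sum(underpayments)
    offset = min(total_over, total_under)
    return {
        "overpayment_to_recover": total_over - offset,
        "underpayment_to_reimburse": total_under - offset,
        "amount_netted": offset,
    }

# Example: a record carrying a $300 overpayment and a $120 underpayment
# nets to a single $180 recovery action and no separate reimbursement.
print(net_record([300.00], [120.00]))

Under the manual process described above, each such offset instead requires staff to record a series of transactions, which is why automating the step is expected to capture recoveries that are currently lost.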
SSA has also increased the frequency with which it matches recipient SSNs against tax records and other data essential to identifying any unreported interest, dividends, pensions, and other income individuals may be receiving. These matching efforts have also resulted in thousands of additional overpayment detections over the last few years. To obtain more current information on the income and resources of SSI recipients, SSA has also increased its use of online access to various state data. Field staff can directly query various state records to quickly identify workers' compensation, unemployment insurance, or other state benefits individuals may be receiving. In 1998, SSA had online access to records in 43 agencies in 26 states. As of January 2002, SSA had expanded this access to 73 agencies in 42 states. As a tool for verifying SSI eligibility, direct online connections are potentially more effective than periodic computer matches, because the information is more timely. Thus, SSA staff can quickly identify potentially disqualifying income or resources at the time of application and before overpayments occur. In many instances, this allows the agency to avoid the often difficult and unsuccessful task of recovering overpaid SSI benefits. During our field visits, staff and managers who had online access to state databases believed this tool was essential to more timely verification of recipient-reported information. SSA's efforts to expand direct access to additional states' data are ongoing. Finally, to further strengthen program integrity, SSA took steps to improve its SSI financial redetermination review process to verify that individuals remain eligible for benefits. First, SSA increased the number of annual reviews from 1.8 million in fiscal year 1997 to 2.4 million in 2001. Second, SSA substantially increased the number of redeterminations conducted through personal contact with recipients, from 237,000 in 1997 to almost 700,000 this year. SSA personally contacts those recipients it believes are most likely to have payment errors. Third, because budget constraints limit the number of redeterminations SSA conducts, it refined its profiling methodology in 1998 to better target recipients who are most likely to have payment errors. Refinements in the selection methodology have allowed SSA to leverage its resources. SSA's data show that, in 1998, refining the case selection methodology increased estimated overpayment benefits—amounts detected and future amounts prevented—by $99 million over the prior year. SSA officials have estimated that conducting substantially more redeterminations would yield hundreds of millions of dollars in additional overpayment benefits annually. However, officials from its Office of Quality Assurance and Performance Assessment indicated that limited resources would affect SSA's ability to do more reviews and still meet other agency priorities. In June 2002, SSA informed us that the Commissioner recently decided to make an additional $21 million available to increase the number of redeterminations this year. Despite its increased emphasis on overpayment detection and deterrence, SSA is not meeting its payment accuracy goals, and it is too early to determine what impact its actions will ultimately have on its ability to make more accurate benefit payments. In 1998, SSA pledged to increase its SSI overpayment accuracy rate from 93.5 percent to 96 percent by fiscal year 2002.
Since that time, however, SSA has revised this goal downward twice, and the goal for fiscal year 2001 was 94.7 percent. Current agency plans do not anticipate achieving the 96-percent accuracy rate until 2005. Various factors may account for SSA's inability to achieve its SSI accuracy goals, including lag times between the occurrence of an event affecting eligibility and SSA's receipt of the information. In addition, key initiatives that might improve SSI overpayment accuracy have only recently begun or are in the early planning stages. For example, it was not until January 2001 that SSA began providing field offices with access to the NDNH database to verify applicants' employment status and wages. SSA also only recently required staff to use NDNH when conducting postentitlement reviews of individuals' continued eligibility for benefits. In fiscal year 2000, SSA estimated that overpayments attributable to wages—historically the number one source of SSI overpayments—were about $477 million, or 22 percent of its payment errors. Thus, with full implementation, the impact of NDNH on overpayment accuracy rates may ultimately be reflected in future years. Furthermore, the Foster Care Independence Act of 1999 strengthened SSA's authority to obtain applicant resource information from financial institutions. SSA's data show that unreported financial resources, such as bank accounts, are the second largest source of SSI overpayments. Last year, overpayments attributable to this category totaled about $394 million, or 18 percent of all detections. In May 2002, SSA issued proposed regulations on its new processes for accessing recipient financial data and plans to implement a pilot program later this year. When fully implemented, this tool may also help improve the SSI payment accuracy rate. SSA has made only limited progress toward addressing excessively complex rules for assessing recipients' living arrangements, which have been a significant and longstanding source of payment errors. SSA staff must apply a complex set of policies to document an individual's living arrangements and the value of in-kind support and maintenance (ISM) being received, both of which are essential to determining benefit amounts. Details such as whether a recipient has usable cooking and food storage facilities with separate temperature controls, whether bathing services are available, and whether a shelter is publicly operated can all affect benefits. These policies depend heavily on recipients to accurately report whether they live alone or with others; the relationships involved; the extent to which rent, food, utilities, and other household expenses are shared; and exactly what portion of those expenses an individual pays. Over the life of the program, these policies have become increasingly complex as a result of new legislation, court decisions, and SSA's own efforts to achieve benefit equity for all recipients. The complexity of SSI program rules pertaining to living arrangements, ISM, and other areas of benefit determination is reflected in the program's administrative costs. In fiscal year 2001, SSI benefit payments represented about 6 percent of benefits paid under all SSA-administered programs, but the SSI program accounted for 31 percent of the agency's administrative resources. Although SSA has examined various options for simplifying rules concerning living arrangements and ISM over the last several years, it has yet to take action to implement a cost-effective strategy for change.
In December 2000, SSA issued a report examining six potential simplification options for living arrangements and ISM relative to program costs and three program objectives: benefit adequacy (ensuring a minimum level of income to meet basic needs); benefit equity (ensuring that recipients with like income, resources, and living arrangements are treated the same); and program integrity (ensuring that benefits are paid accurately, efficiently, and with no tolerance for fraud). SSA's report noted that overpayments attributable to living arrangements and ISM in 1999 accounted for a projected $210 million, or 11 percent, of total overpayment dollars. The report also acknowledged that most overpayments were the result of beneficiaries not reporting changes in living arrangements and SSA staff's failure to comply with complicated instructions for verifying information. SSA concluded that none of the options analyzed supported all of its SSI program goals. As a result, SSA recommended further assessing the tradeoffs among program goals presented by these simplification options. SSA's study shows that at least two of the options would produce net program savings. For example, one option eliminated the need to determine whether an individual is living in another person's household by counting ISM at the lesser of its actual value or one-third of the federal benefit rate. In addition to ultimately reducing program costs, SSA noted that this option would eliminate several inequities in current ISM rules and increase benefits for almost 1 percent of recipients. Although SSA cited some disadvantages (such as additional development and calculations in some cases and decreased benefits for about 2 percent of recipients), its analysis did not indicate that the disadvantages outweighed the potential positive effects. Furthermore, for two other options in which SSA projected a large increase in program costs, it acknowledged that its estimates were based on limited data and were "very rough." Thus, actual program costs associated with these options could be significantly lower or higher. Finally, to the extent that SSA identified limitations in some options analyzed, such as reductions in benefits for some recipients, it did not propose any modifications or alternatives to address them. SSA's actions to date do not sufficiently address concerns about complex living arrangement and ISM policies. During our recent fieldwork, staff and managers continued to cite program complexity as a problem leading to payment errors, program abuse, and excessive administrative burdens. In addition, overpayments associated with living arrangements and ISM remain among the leading causes of overpayments, behind unreported wages and resources, respectively. Finally, SSA's fiscal year 2000 payment accuracy report noted that it would be difficult to achieve SSI accuracy goals without some policy simplification initiatives. In its recently issued "SSI Corrective Action Plan," SSA stated that within the next several years it plans to conduct analyses of alternative program simplification options beyond those already assessed. Our work shows that administrative penalties and sanctions may be underutilized in the SSI program. Under the law, SSA may impose administrative penalties on recipients who do not file timely reports about factors or events that can affect their benefits—changes in wages, resources, living arrangements, and other support being received. An administrative penalty causes a reduction in 1 month's benefits.
Penalty amounts are $25 for a first occurrence, $50 for a second occurrence, and $100 for the third and subsequent occurrences. The penalties are meant to encourage recipients to file accurate and timely reports of information so that SSA can adjust its records to correctly pay benefits. The Foster Care Independence Act also gave SSA authority to impose benefit sanctions on persons who misrepresent material facts that they know, or should have known, were false or misleading. In such circumstances, SSA may suspend benefits for 6 months for the initial violation, 12 months for the second violation, and 24 months for subsequent violations. SSA issued interim regulations to implement these sanction provisions in July 2000, and its November 2000 report cited their implementation as a priority effort to improve SSI program integrity. In our 1998 report, we noted that penalties were rarely used and recommended that SSA reassess its policies for imposing penalties on recipients who fail to report changes that can affect their eligibility. To date, SSA has not addressed our recommendation, and staff rarely use penalties to encourage recipient compliance with reporting policies. Over the last several years, SSA data indicate that about 1 million recipients are overpaid annually and that recipient nonreporting of key information accounted for 71 to 76 percent of payment errors. On the basis of SSA records, we estimate that at most about 3,500 recipients were penalized for reporting failures in fiscal year 2001. SSA staff we interviewed cited the same obstacles to imposing penalties as those noted in our 1998 report: (1) penalty amounts are too low to be effective, (2) imposition of penalties is too administratively burdensome, and (3) SSA management does not encourage the use of penalties. SSA has not acted to either evaluate or address these obstacles. Although SSA has issued program guidance to field office staff emphasizing the importance of assessing penalties, this action alone does not sufficiently address the obstacles cited by staff. SSA's administrative sanction authority also remains rarely used. SSA sanctions data indicate that between June 2000 and February 2002, SSA field office staff referred about 3,000 SSI cases to the OIG because of concerns about fraudulent activity. In most instances, the OIG returned the referred cases to the field office because they did not meet prosecutorial requirements, such as high amounts of benefits erroneously paid. At this point, the field office, in consultation with a regional office sanctions coordinator, can determine whether benefit sanctions are warranted. Cases referred because of concerns about fraudulent behavior would seem to be strong candidates for benefit sanctions. However, as of January 2002, field staff had actually imposed sanctions in only 21 SSI cases. Our interviews with field staff identified insufficient awareness of the new sanction authority and some confusion about when to impose sanctions. In one region, for example, staff and managers told us that they often referred cases to the OIG when fraud was suspected, but it had not occurred to them that these cases should be considered for benefit sanctions if the OIG did not pursue investigation and prosecution. Enhanced communication and education by SSA regarding the appropriate application of this overpayment deterrent tool may ultimately enhance SSA's program integrity efforts.
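For reference, the tiered penalty and sanction schedules described above can be restated compactly. The following Python sketch simply encodes the dollar amounts and suspension periods cited in this report; the function names are hypothetical.

def reporting_penalty(occurrence):
    # Administrative penalty (a reduction in 1 month's benefits) for
    # failure to file timely reports: $25 for a first occurrence, $50
    # for a second, and $100 for the third and subsequent occurrences.
    if occurrence < 1:
        raise ValueError("occurrence must be 1 or greater")
    return {1: 25, 2: 50}.get(occurrence, 100)

def sanction_months(violation):
    # Benefit suspension for knowing misrepresentation of material
    # facts: 6 months for the first violation, 12 for the second, and
    # 24 for subsequent violations.
    if violation < 1:
        raise ValueError("violation must be 1 or greater")
    return {1: 6, 2: 12}.get(violation, 24)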
Over the past several years, SSA has been working to implement new legislative provisions to improve its ability to recover more SSI overpayments. While a number of SSA's initiatives have yielded results in terms of increased collections, several actions are still in the early planning or implementation stages, and it is too soon to gauge what effect they will have on SSI overpayment collections. In addition, we are concerned that SSA's current overpayment waiver policies and practices may be preventing the collection of millions of dollars in outstanding debt. In our prior work, we reported that SSA has historically placed insufficient emphasis on recovering SSI overpayments, especially from those who have left the rolls. We were particularly concerned that SSA had not adequately pursued authority to use more aggressive debt collection tools already available to other means-tested benefit programs, such as the Food Stamp Program. Accordingly, SSA has taken action over the last several years to strengthen its overpayment recovery processes. SSA began using tax refund offsets in 1998 to recover outstanding SSI debt. By the end of calendar year 2001, this initiative had yielded $221 million in additional overpayment recoveries for the agency. In the same year, Congress authorized a cross-program recovery initiative, whereby SSA was provided authority to recover overpayments by reducing the Title II benefits of former SSI recipients without first obtaining their consent. SSA implemented this cross-program recovery tool in March 2002. Currently, about 36 percent of SSI recipients also receive Title II benefits, and SSA expects that this initiative will produce about $115 million in additional overpayment collections over the next several years. In 2002, the agency also implemented Foster Care Independence Act provisions allowing SSA to report former recipients with outstanding SSI debt to credit bureaus as well as to the Department of the Treasury. Credit bureau referrals are intended to encourage individuals to voluntarily begin repaying their outstanding debts. The referrals to Treasury will provide SSA with an opportunity to seize other federal benefit payments individuals may be receiving. While overpayment recovery practices have been strengthened, SSA has not yet implemented some key recovery initiatives that have been available to the agency for several years. Although regulations have been drafted, SSA has not yet implemented administrative wage garnishment, which was authorized in the Debt Collection Improvement Act of 1996. In addition, SSA has not implemented several provisions in the Foster Care Independence Act of 1999. These provisions allow SSA to offset the federal salaries of former recipients, use collection agencies to recover overpayments, and levy interest on outstanding overpayments. In its comments, SSA said that it made a conscious decision to implement first those tools that it judged most cost-effective. It prioritized working on debt collection tools that provide direct collections or that could be integrated into its debt management system. According to SSA, the remaining tools are being actively pursued as resources permit. Draft regulations for several of these initiatives are being reviewed internally. However, agency officials said that they could not estimate when these additional recovery tools will be fully operational.
Our work shows that SSI overpayment waivers have increased significantly over the last decade and that current waiver policies and practices may cause SSA to unnecessarily forgo millions of dollars in additional overpayment recoveries annually. Waivers are requests by current and former SSI recipients for relief from the obligation to repay SSI benefits to which they were not entitled. Under the law, SSA field staff may waive an SSI overpayment when the recipient is without fault and the collection of the overpayment either defeats the purpose of the program, is against equity and good conscience, or impedes effective and efficient administration of the program. To be deemed without fault, and thus eligible for a waiver, recipients are expected to exercise good faith in reporting information to prevent overpayments. Incorrect statements that recipients know or should have known to be false, or failure to furnish material information, can result in a waiver denial. If SSA determines a person is without fault in causing the overpayment, it must then determine whether one of the three other conditions also exists to grant a waiver. Specifically, SSA staff must determine whether denying a waiver request and recovering the overpayment would defeat the purpose of the program because the affected individual needs all of his or her current income to meet ordinary and necessary living expenses. To determine whether a waiver denial would be against equity and good conscience, SSA staff must decide whether an individual incurred additional expenses in relying on the benefit, such that requiring repayment would adversely affect his or her economic condition. This could apply to recipients who use their SSI benefits to pay for a child's medical expenses and are subsequently informed of an overpayment. Finally, SSA may grant a waiver when recovery of an overpayment may impede the effective or efficient administration of the program—for example, when the overpayment amount is equal to or less than the average administrative cost of recovering an overpayment, which SSA currently estimates to be $500. Thus, field staff we interviewed generally waived overpayments of $500 or less. (A simplified sketch of these waiver tests appears at the end of this discussion.) The current $500 threshold was established in December 1993. Prior to that time, the threshold was $100. Officials told us that this change was based on an internal study of administrative costs related to investigating and processing waiver requests for SSA's Title II disability and retirement programs. However, the officials acknowledged that the study did not directly examine the costs of granting SSI waivers. Furthermore, they were unable to locate the study for our review and evaluation. During our field visits, staff and managers had varied opinions regarding the time and administrative costs associated with denying waiver requests. However, staff often acknowledged that numerous automation upgrades over the past several years may be cause for reexamining the current costs and benefits associated with the $500 waiver threshold. Our analysis of several years of SSI waiver data shows that, since the waiver threshold was adjusted, waived SSI overpayments have increased by 400 percent, from $32 million in fiscal year 1993 to $161 million in fiscal year 2001. This increase has significantly outpaced the growth in both the number of SSI recipients served and total annual benefits paid, which increased by 12 percent and 35 percent, respectively, during the same period (see fig. 1).
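The waiver tests described above reduce to a conjunction of a without-fault finding and at least one of three further conditions. The following Python sketch is a simplified restatement for illustration only; actual adjudication involves judgment that no simple rule captures, and the parameter names are hypothetical.

ADMIN_COST_THRESHOLD = 500  # SSA's estimated average cost of recovering an overpayment

def waiver_may_be_granted(without_fault, defeats_program_purpose,
                          against_equity_and_good_conscience,
                          overpayment_amount):
    # The recipient must be without fault, AND recovery must either
    # defeat the purpose of the program, be against equity and good
    # conscience, or impede efficient administration (approximated
    # here as an amount at or below the $500 threshold).
    if not without_fault:
        return False
    impedes_administration = overpayment_amount <= ADMIN_COST_THRESHOLD
    return (defeats_program_purpose or
            against_equity_and_good_conscience or
            impedes_administration)

As a check on the 400-percent figure cited above: growth from $32 million to $161 million is an increase of $129 million, or roughly four times the fiscal year 1993 base.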
Furthermore, the ratio of waived overpayments to total SSI collections has also increased (see fig. 2). In fiscal year 1993, SSA waived about $32 million in SSI overpayments, or about 13 percent of its total collections. By 1995, waiver amounts more than doubled to $66 million, or about 20 percent of collections for that year. By fiscal year 2001, SSI waivers totaled $161 million and represented nearly 23 percent of all SSI collections. Thus, through its waiver process, SSA is forgoing collection action on a significantly larger portion of overpaid benefits. While not conclusive, the data indicate that liberalization of the SSI waiver policy may be a factor in the dramatic increase in the amount of overpayments waived. SSA has not studied the impact of the increased threshold. However, officials believe that the trend in waived SSI overpayments is more likely due to increases in the number of annual reviews of recipients' medical eligibility. These reviews have resulted in an increase in benefit terminations and subsequent recipient appeals. During the appeals process, recipients have the right to request that their benefits be continued. Those who lose their appeal can then request a waiver of any overpayments that accrued during the appeal period. SSA will usually grant these requests under its current waiver policies. Another factor affecting trends in waivers may be staff application of waiver policies and procedures. Although SSA has developed guidance to assist field staff in deciding whether to deny or grant waivers, we found that field staff have considerable leeway to grant waivers based on an individual's claim that he or she reported information to SSA that would have prevented an overpayment. In addition, waivers granted for amounts less than $2,000 are not subject to second-party review, while another employee in the office—not necessarily a supervisor—must review those above $2,000. During our field visits, we identified variation among staff in their understanding of how waiver decisions should be processed, including the extent to which they receive supervisory review and approval. In some offices, review was often minimal or nonexistent regardless of the waiver amount, while other offices required stricter peer or supervisory review. In 1999, SSA's OIG reported that the complex and subjective nature of SSA's Title II waiver process, as well as clerical errors and misapplication of policies by staff, resulted in SSA incorrectly waiving overpayments in about 9 percent of the 26,000 cases it reviewed. The report also noted that 50 percent of the waivers reviewed were unsupported and that the OIG could not make a judgment as to the appropriateness of the decisions. The OIG estimated that the incorrect and unsupported waivers amounted to nearly $42 million in benefits. While the OIG only examined waivers under the Title II programs and for amounts over $500, the criteria for granting SSI waivers are generally the same. Thus, we are concerned that similar problems with the application of waiver policies could be occurring in the SSI program. SSA has taken a number of steps to address long-standing vulnerabilities in SSI program integrity. SSA's numerous planned and ongoing initiatives demonstrate management's commitment to strike a better balance between meeting the needs of SSI recipients and ensuring fiscal accountability for the program.
However, it is too early to tell how effective SSA will ultimately be in detecting and preventing overpayments earlier in the eligibility determination process, improving future payment accuracy rates, and recovering a greater proportion of outstanding debt owed to it. Reaching these goals is feasible, provided that SSA sustains and expands the range of SSI program integrity activities currently planned or underway, such as increasing the number of SSI financial redeterminations conducted each year and developing and implementing additional overpayment detection and recovery tools provided in recent legislation. A fundamental cause of SSI overpayments is the complexity of the rules governing SSI eligibility. However, SSA has done little to make the program less complex and error prone, especially in regard to living arrangement policies. We recognize that inherent tensions exist between simplifying program rules, keeping program costs down, and ensuring benefit equity for all recipients. However, longstanding SSI payment errors and high administrative costs suggest the need for SSA to move forward in addressing program design issues and devising cost-effective simplification options. Furthermore, without increased management emphasis and direction on the use of administrative penalties and benefit sanctions, SSA risks continued underutilization of these valuable overpayment deterrence tools. Finally, rapid growth in the amount of overpayments waived over the last several years suggests that SSA may be unnecessarily forgoing recovery of significant amounts of overpaid benefits. Thus, it is essential that SSA's policies and procedures for waiving overpayments, and staff application of those policies, be managed in a way that ensures taxpayer dollars are sufficiently protected. In order to further strengthen SSA's ability to deter, detect, and recover SSI overpayments, we recommend that the Commissioner of Social Security take the following actions: Sustain and expand the range of SSI program integrity activities underway and continue to develop additional tools to improve program operations and management. This would include increasing the number of SSI redeterminations conducted each year and fully implementing the overpayment detection and recovery tools provided in recent legislation. Identify and move forward in implementing cost-effective options for simplifying complex living arrangement and in-kind support and maintenance policies, with particular attention to those policies most vulnerable to fraud, waste, and abuse. An effective implementation strategy may include pilot testing of various options to more accurately assess their ultimate effects. Evaluate current policies for imposing monetary penalties and administrative sanctions and take action to remove any barriers to their usage or effectiveness. Such actions may include informing field staff on when and how these tools should be applied and studying the extent to which more frequent use deters recipient nonreporting. Reexamine policies and procedures for SSI overpayment waivers and make revisions as appropriate. This should include an assessment of the current costs and benefits associated with the $500 waiver threshold and the extent to which staff correctly apply waiver policies. SSA agreed with our recommendations and said that our report would be very helpful in its efforts to better manage the SSI program. It will incorporate the recommendations into its SSI corrective action plan, as appropriate.
SSA also assured us that the SSI program is receiving sustained management attention. In this regard, SSA noted that under the current plan it has assigned specific responsibilities to key staff, monitors agency progress, and reviews policy proposals at regularly scheduled monthly meetings chaired by the Deputy Commissioner. While agreeing with each of our recommendations, SSA supplied additional information to emphasize its actions and commitment to improving SSI program integrity. Regarding simplification of complex program rules, SSA said it will continue to assess various program simplification proposals, but it remains concerned about the distributional effects of potential policy changes. SSA also noted that even minor reductions in SSI benefits could significantly affect recipients. Thus, SSA plans to use sophisticated computer simulations to evaluate the potential impacts of various proposals on recipients. We recognize that simplifying the program will not be easy, but it is still a task that SSA needs to accomplish to reduce its vulnerability to payment errors. With regard to its overpayment waiver policies and procedures, SSA agreed to reexamine its current $500 threshold and analyze the extent to which its staff correctly apply waiver policies. SSA also produced data indicating that increases in SSI waivers over the last several years were attributable to the completion of more continuing disability reviews that result in benefit cessation decisions. Consequently, more recipients appeal these decisions and request that their SSI benefits be continued. Recipients can then request waivers of any overpayments that accrued during the appeal period when a cessation decision is upheld. Our report recognizes SSA's views on the potential cause for increased waivers. However, we also note that SSI overpayment waiver increases may be attributable to inconsistent application of agency waiver policies. SSA also provided additional technical comments that we have incorporated in the report, as appropriate. The entire text of SSA's comments appears in appendix II. We are sending copies of this report to the House and Senate committees with oversight responsibilities for the Social Security Administration. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please call me or Daniel Bertoni, Assistant Director, at (202) 512-7215. Other major contributors to this report are Barbara Alsip, Gerard Grant, William Staab, Vanessa Taylor, and Mark Trapani. Social Security Administration: Agency Must Position Itself Now to Meet Challenges. GAO-02-289T. Washington, D.C.: May 2, 2002. Social Security Administration: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-778. Washington, D.C.: June 15, 2001. High Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001. Major Management Challenges and Program Risks: Social Security Administration. GAO-01-261. Washington, D.C.: January 2001. Supplemental Security Income: Additional Actions Needed to Reduce Program Vulnerability to Fraud and Abuse. GAO/HEHS-99-151. Washington, D.C.: September 15, 1999. Supplemental Security Income: Long-Standing Issues Require More Active Management and Program Oversight. GAO/T-HEHS-99-51. Washington, D.C.: February 3, 1999. Major Management Challenges and Program Risks: Social Security Administration. GAO/OCG-99-20.
Washington, D.C.: January 1, 1999. Supplemental Security Income: Action Needed on Long-Standing Problems Affecting Program Integrity. GAO/HEHS-98-158. Washington, D.C.: September 14, 1998. High Risk Program: Information on Selected High-Risk Areas. GAO/HR-97-30. Washington, D.C.: May 16, 1997. High Risk Series: An Overview. GAO/HR-97-1. Washington, D.C.: February 1997.
The Supplemental Security Income (SSI) program is the nation's largest cash assistance program for the poor. The program paid $33 billion in benefits to 6.8 million aged, blind, and disabled persons in fiscal year 2001. Benefit eligibility and payment amounts for the SSI population are determined by complex and often difficult-to-verify financial factors, such as an individual's income, resource levels, and living arrangements. Thus, the SSI program tends to be difficult, labor-intensive, and time-consuming to administer. These factors make the SSI program vulnerable to overpayments. The Social Security Administration (SSA) has demonstrated a stronger commitment to SSI program integrity and taken many actions to better deter and detect overpayments. Specifically, SSA has (1) obtained legislative authority in 1999 to use additional tools to verify recipients' financial eligibility for benefits, including strengthening its ability to access individuals' bank account information; (2) developed additional measures to hold staff accountable for completing assigned SSI workloads and resolving overpayment issues; (3) provided field staff with direct access to state databases to facilitate more timely verification of recipients' wages and unemployment information; and (4) significantly increased, since 1998, the number of eligibility reviews conducted each year to verify recipients' income, resources, and continuing eligibility for benefits. In addition to better detection and deterrence of SSI overpayments, SSA has made recovery of overpaid benefits a high priority. Despite these efforts, further improvements in overpayment recovery are possible.
In 1979, the Office of Management and Budget's (OMB) Office of Federal Procurement Policy (OFPP) issued Policy Letter No. 79-1 to guide federal agencies in implementing Public Law 95-507. The letter provides uniform policy guidance to federal agencies regarding the organization and functions of OSDBUs. In September 1994, the President signed Executive Order No. 12928, entitled "Promoting Procurement With Small Businesses Owned and Controlled by Socially and Economically Disadvantaged Individuals, Historically Black Colleges and Universities, and Minority Institutions." The order mandates that, unless prohibited by law, the OSDBU director should be responsible to and report directly to the agency head or the agency's deputy as required by the Small Business Act. The order also mandates that federal agencies comply with the requirements of OFPP's policy letter, unless prohibited by law. Of the eight federal agencies we reviewed, only DLA's and DOE's OSDBU heads do not report to the appropriate agency official. The conference report accompanying Public Law 95-507 states that each federal agency having procurement powers must establish an Office of Small Business Utilization to be directed by an employee of that agency, who would report directly to the head of the agency or the agency's second-ranking official. Also, OFPP's policy letter defines the agency's deputy as the second-ranking official within the agency. Furthermore, in a June 1994 memorandum to federal agencies, the OMB Director defines the agency's deputy as the second-in-command. The OSDBU directors in the Departments of the Army, the Navy, and the Air Force; NASA; and GSA report to either the agency head or the agency's deputy. The Army OSDBU director reports to the Secretary of the Army (the agency head), while the Navy and Air Force OSDBU directors report to the Under Secretary of the Navy and the Under Secretary of the Air Force, respectively (the agencies' second-ranking officials). The NASA OSDBU director reports to the NASA Administrator (the agency head). At GSA, the OSDBU director reports directly to the GSA Deputy Administrator (the agency's second-in-command). In 1988, Public Law 100-656 amended the Small Business Act, allowing the Secretary of Defense to designate the official to whom the OSDBU director should report. Currently, DOD's OSDBU director reports to the Under Secretary of Defense for Acquisition and Technology, who is the Secretary's designee. The OSDBU directors at DLA and DOE report to officials other than the agency head or the agency's deputy. While each agency explained its rationale, we do not believe that in either agency the OSDBU director reports to the appropriate official, as defined by Public Law 95-507. DLA's OSDBU director reports to the agency's Deputy Director for Acquisition. As shown in figure 1, the Deputy Director for Acquisition is neither the agency head nor the agency's deputy. According to DLA's Deputy General Counsel (Acquisition), the agency's rationale for this reporting arrangement is that the Deputy Director for Acquisition is considered to be the agency's deputy for all matters relating to acquisition. We do not agree with DLA's rationale. In our view, and as shown by the agency's organizational chart, the Principal Deputy Director is the agency's second-in-command. In addition, the existing reporting arrangement at DLA could potentially impair the achievement of the act's objectives.
As the House Committee on Small Business observed in a 1987 report, having the OSDBU director report to an individual who has responsibility for the functions that the director is intended to monitor (procurement) could lessen the director's effectiveness. DLA officials neither agreed nor disagreed with our position that the OSDBU's reporting level was not in compliance with Public Law 95-507. However, in March 1995, on the basis of questions raised during our review, DLA's Deputy General Counsel (Acquisition) said that DLA will take steps to reorganize so that the OSDBU director reports to either the agency head or the agency's deputy. As shown in figure 2, the head of Energy's OSDBU, whose title is Deputy Director, reports to the Director of the Office of Economic Impact and Diversity. The Director reports directly to the Secretary of Energy but is neither the agency head nor the agency's second-in-command. Figure 2 reflects DOE's January 1995 reorganization. Prior to the reorganization, the title of the head of the OSDBU was Director, and that official reported to the Director of Economic Impact and Diversity. In response to our inquiry concerning the rationale for that arrangement, DOE said that the Department of Energy Organization Act (42 U.S.C. 7253) gives the Secretary broad authority to organize the Department and that Public Law 95-507 was not intended to supersede or amend the Organization Act. In response to a congressional request, in 1993 OMB surveyed federal agencies to determine the organizational reporting levels of their OSDBU directors. The OMB survey included four of the agencies we reviewed: DOD, DOE, GSA, and NASA. According to the OFPP Deputy Administrator for Procurement Law and Legislation, DOE was not in compliance with the statute because the OSDBU director did not report to the agency head or the agency's deputy. In a June 9, 1994, memorandum, OMB's Director emphasized that federal agencies must comply with the law and policy regarding the OSDBU's organizational reporting level. Furthermore, in a 1987 report, we rejected DOE's rationale, stating that the Organization Act does not give the Secretary the authority to alter or abridge the requirements of the Small Business Act. We recommended that the Secretary of Energy require the head of the OSDBU to be responsible only to, and report directly to, the Secretary or Deputy Secretary of Energy. DOE officials neither agreed nor disagreed with our position that the OSDBU's reporting level was not in compliance with Public Law 95-507. However, in March 1995, DOE officials—including the Director, Office of Economic Impact and Diversity, and the Assistant General Counsel for General Law—told us that the agency recognizes that it must comply with Executive Order 12928 (which mandates that, unless prohibited by law, the OSDBU director should be responsible to and report directly to the agency head or the agency's deputy). DOE officials told us that they are currently developing a reorganization plan. DOE's Assistant General Counsel for General Law said that it is uncertain when or how the reorganization will be accomplished because of a need to reconcile the responsibilities of the OSDBU with DOE's statutorily mandated Office of Minority Economic Impact. All eight OSDBUs we examined conduct activities consistent with the requirements of Public Law 95-507 and OFPP's Policy Letter 79-1 for assisting small and disadvantaged businesses in obtaining federal contracts.
These activities include (1) developing the agency's small business contracting and subcontracting goals, (2) sponsoring and/or participating in small business outreach efforts, (3) serving as an interagency liaison for procurement activities relating to small businesses and small disadvantaged businesses, and (4) supervising and training employees involved with the agency's small business activities. Officials at several OSDBUs also cited examples of special initiatives undertaken to help meet their agency's contracting goals. As noted above, the Energy OSDBU head reports to the Director of the Office of Economic Impact and Diversity. Because the Diversity Office has broad responsibility for formulating and monitoring the implementation of policies for the agency's small business, disadvantaged business, and women-owned business programs, many activities are conducted jointly with the OSDBU. For simplicity, in the following sections, we characterize these as the OSDBU's activities. The Small Business Act and OFPP's Policy Letter require OSDBU directors to consult with the Small Business Administration (SBA) on establishing contracting goals for small and small disadvantaged businesses. At GSA and NASA, the OSDBU directors and SBA establish goals setting out the percentage of prime contracts and subcontracts that will be awarded to small businesses, small disadvantaged businesses, and women-owned businesses. For DOD, the OSDBU director negotiates DOD-wide prime contracting and subcontracting goals, which incorporate the goals for component agencies such as DLA and the Departments of the Army, Navy, and Air Force. For DOE, the Office of Economic Impact and Diversity has assumed the responsibility for negotiating the agency's contracting and subcontracting goals. The process of setting goals begins with OSDBU representatives providing SBA officials with estimates of the total dollar amounts of (1) all prime contracts the agencies anticipate awarding during the forthcoming fiscal year and (2) subcontracts to be awarded by the agencies' prime contractors. The agencies express the goals in terms of the percentages of the total contract and subcontract dollars to be awarded to small and small disadvantaged businesses (a simple illustration of this computation appears at the end of this discussion). In formulating goals and tracking the agencies' progress toward the goals, the OSDBUs also look at the number of contracts awarded and their dollar values. OFPP's policy letter requires OSDBUs to conduct outreach efforts to provide information to small and disadvantaged businesses. For example, an OSDBU's outreach may consist of sponsoring or participating in seminars or conferences on contracting opportunities and providing materials describing how to do business with the agencies. OSDBU officials at each of the eight agencies told us that they had sponsored or cosponsored conferences or seminars for small businesses during fiscal years 1993 and 1994. In addition, all eight agencies told us that their staffs had attended numerous conferences or seminars sponsored by other government agencies or private organizations. OFPP's policy letter also requires the OSDBU directors to serve as interagency liaisons for all small business matters. Officials of each of the OSDBUs we reviewed serve in this capacity. For example, in response to the Federal Acquisition Streamlining Act of 1994, OSDBU officials at five of the eight agencies are participating in an interagency group that is drafting revisions to the Federal Acquisition Regulations pertaining to small businesses.
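As a concrete illustration of the goal-setting arithmetic described above, the following Python sketch converts estimated award dollars into the percentage goals negotiated with SBA; the dollar figures in the example are hypothetical.

def percentage_goal(targeted_dollars, total_dollars):
    # Express a small or small disadvantaged business goal as a
    # percentage of total anticipated contract (or subcontract) dollars.
    if total_dollars <= 0:
        raise ValueError("total_dollars must be positive")
    return 100.0 * targeted_dollars / total_dollars

# Hypothetical example: an agency anticipating $2.5 billion in prime
# contract awards that targets $500 million for small businesses would
# negotiate a 20-percent small business goal.
print(percentage_goal(500e6, 2.5e9))  # prints 20.0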
Generally, the OSDBUs also serve as their agency’s point of contact for small businesses. All eight of the agencies provide information to individual small businesses in response to inquiries about doing business with them. For example, the information provided includes forecasts of agencies’ acquisitions, contracting procedures, and required forms. Under Public Law 95-507 and/or OFPP’s policy letter, OSDBUs are responsible for supervising and training agency employees in contracting and subcontracting with small businesses. The OSDBUs we reviewed had activities designed to accomplish this requirement. These activities include conducting annual or semiannual training sessions for small business specialists and issuing agency regulations concerning small business procurement matters. Officials of each of the eight OSDBUs said that they have initiated efforts to help meet their agency’s contracting goals. In particular, the GSA and Air Force OSDBUs cited examples that illustrate these efforts. Furthermore, officials of small and minority business associations cited the NASA OSDBU as a model for other federal OSDBUs because of its initiatives to help meet the agency’s goals. GSA’s OSDBU, in conjunction with the agency’s Office of Training and Compliance, has established the Procurement Preference Goaling Program. The program is designed to assist small disadvantaged businesses and women-owned businesses in four industries—travel, manufacturing, automobile sales, and construction—where these businesses have done less well in obtaining federal contracts. For example, the program includes the following: Developing a list of minority- and women-owned automobile dealerships in various geographic areas that can supply a portion of GSA’s automobile fleet purchases. For about 80 percent of the agency’s automobile purchases, the volume of cars required can only be obtained directly from one of the big three automakers. The remaining 20 percent—about $217 million in fiscal year 1993—is small enough that the agency can procure the automobiles from individual dealerships, according to GSA’s OSDBU director. Working with SBA on a pilot project to identify zones where contracts for travel services could be set aside for SBA’s 8(a) program participants. The OSDBU and SBA are currently planning to sponsor a large conference in New Orleans, Louisiana, to solicit applications from 8(a) firms in the travel services field. Compiling lists of small businesses, small disadvantaged businesses, and women-owned businesses that manufacture various goods that the Federal Emergency Management Agency may need during disasters. Attempting to increase construction subcontracting opportunities for small disadvantaged businesses and small women-owned businesses by implementing the Courthouse/Federal Buildings Pilot Program. Under this program, GSA identifies new federal construction projects with an estimated cost of over $50 million and makes special efforts to include small businesses, small disadvantaged businesses, and small women-owned businesses as subcontractors. (GSA has identified one such project in 10 of its 11 regions; no project qualified in one region.) As part of this pilot program, one of the Deputy Directors will be directly involved in the projects and will meet with potential contractors and agency field staff before contracts are issued to ensure that specific language concerning subcontracting is included in solicitations and bid offerings. 
Also, the prime contractor will be required to report to GSA—monthly during the procurement phase and quarterly thereafter—on the utilization of the targeted small businesses. According to the Deputy Director, as of February 1995, although the pilot had not yet been formally approved by GSA, two projects—the Tampa Courthouse and the Kansas City Courthouse—were in the initial stage of the process. The Air Force OSDBU initiated the Small Business and Historically Black Colleges and Universities/Minority Institutions Strategic Planning Workshop in fiscal year 1992. The purpose of the workshop is to increase participation in Air Force procurement by establishing contracting goals for small businesses, small disadvantaged businesses, and minority educational institutions. The workshop is unique for three reasons: (1) the process of goal setting begins 6 months earlier than in other agencies, (2) the OSDBU and field officials meet for a week to develop the goals, and (3) the Air Force develops a set of goals explicitly based on an increased level of effort by agency contracting officials to provide opportunities to small and disadvantaged businesses. The OSDBU also has a project called the East St. Louis Initiative, under which the Air Force OSDBU is working with the city of East St. Louis, Illinois, to help bring contracts to small disadvantaged businesses and jobs to the mostly minority residents. Under this initiative, the Air Force is in a partnership with a national organization called Access America and identifies Air Force contracts that can be obtained to bring manufacturing jobs to this economically depressed area. Access America has obtained a grant to train between 1,100 and 1,500 residents of East St. Louis in aircraft maintenance and aerospace technology. With support from the Air Force Secretary and Chief of Staff, the OSDBU director has assembled a Business Education Team from field and headquarters contracting activities. The team conducts seminars that provide small businesses and small disadvantaged businesses with information on doing business with the Air Force. NASA is required by law to award, to the fullest extent possible, at least 8 percent of the annual total value of its contracts and subcontracts to small businesses or other organizations owned or controlled by socially and economically disadvantaged individuals, including (1) women-owned businesses, (2) historically black colleges and universities, and (3) minority educational associations. NASA targeted fiscal year 1994 to meet the goal. The agency awarded 8.5 percent of its fiscal year 1993 contracting budget to small disadvantaged businesses, and in fiscal year 1994 it awarded 9.9 percent. NASA OSDBU officials attributed the agency's success to the office's six-point plan—a strategy for achieving and maintaining compliance with the law's requirements.
The six points include requiring NASA's top officials—Center Directors and Associate Administrators—to develop a plan for meeting their portion of the agency's 8-percent goal; requiring the concurrence of the NASA Chief of Staff when consolidating prime contracts that would reduce awards to small disadvantaged businesses; requiring Associate Administrators to take steps to increase subcontracting to small disadvantaged businesses by NASA's top 100 prime contractors and report these steps to the OSDBU; requiring each NASA center to identify two non-8(a) procurement requirements, of significant dollar value, that could be awarded to small disadvantaged businesses in fiscal year 1993; developing an awards program for technical small business and contracting personnel for their efforts in helping to achieve NASA's 8-percent goal; and challenging NASA's Jet Propulsion Laboratory to double its subcontracting in fiscal year 1993. Also, at the urging of its OSDBU, NASA requires that the OSDBU director review all procurement proposals with an estimated value over $25 million for large contracting activities and $10 million for smaller contracting activities, in order to establish a goal for the portion to be subcontracted to small businesses. NASA also established criteria for assessing top-level managers' assistance to small and disadvantaged businesses. Fiscal year 1993 was the first year the OSDBU provided input for top-level managers' performance assessments. NASA also has several efforts aimed specifically at high-tech small or minority-owned businesses. In cooperation with SBA and the UNISPHERE Institute, the OSDBU assists firms that have participated in SBA's 8(a) program to find international venture partners. The UNISPHERE program helps these firms expand their technical and financial capabilities, thus increasing their ability to compete for NASA contracts. In addition, the OSDBU's New England Outreach Office identifies high-tech minority businesses that are capable of working on NASA contracts and subcontracts. Furthermore, the OSDBU has initiated the High-Tech Small Disadvantaged Business Forum, which permits small disadvantaged businesses to make presentations on their technical capabilities to NASA headquarters and field officials. In fiscal year 1994, 70 percent of the NASA contracts awarded to small disadvantaged businesses went to high-tech firms. The organizational reporting levels of the OSDBU directors at the Defense Logistics Agency and the Department of Energy do not comply with the requirements of Public Law 95-507. By reporting to officials other than the agency head or the agency's deputy, the OSDBU directors at these agencies may not have access to officials at a high enough level to maximize their effectiveness in assisting small and disadvantaged businesses. Following our review, DLA's Deputy General Counsel (Acquisition) said that the agency will take steps to reorganize so that the OSDBU director reports to either the agency head or the agency's second-ranking official. DOE's Director, Office of Economic Impact and Diversity, and the Assistant General Counsel for General Law told us that their agency would comply with Executive Order 12928. However, the Assistant General Counsel said that it is uncertain when or how the reorganization will be accomplished because of a need to reconcile the responsibilities of the OSDBU with another statutorily mandated office.
We discussed a draft of this report with the OSDBU directors or their designees and staff at each of the eight agencies we reviewed. In addition, we discussed matters related to the OSDBUs' reporting levels with DLA's Deputy General Counsel for Acquisition and with DOE's Assistant General Counsel for General Law. All of the officials generally agreed with the facts presented. We have incorporated the officials' comments where appropriate. To attain our objectives, we reviewed the Small Business Act, Public Law 95-507, OFPP's Policy Letter 79-1, and Executive Order No. 12928. We interviewed the directors and other officials of the OSDBU at each of the eight agencies. To obtain the views of small businesses and small disadvantaged businesses concerning OSDBUs' activities, we also interviewed representatives from two small business associations: the National Minority Suppliers Development Council, Inc., and the National Association of Small Businesses. To determine the reporting levels of the OSDBU directors, we reviewed organizational charts and identified the officials providing performance ratings. In those cases in which the OSDBU varied from the statutory requirement, we obtained the rationale from the agency's OSDBU and Office of General Counsel officials. To determine what activities the OSDBUs conducted to assist small businesses and small disadvantaged businesses, we reviewed the OSDBUs' function statements and obtained documentation related to specific activities. We conducted our review from April 1994 through March 1995 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and to other interested parties. We will also make copies available to others on request. Should you or your staff have any questions, you may reach me at (202) 512-7631. Major contributors to this report are listed in appendix I.
Pursuant to a congressional request, GAO reviewed the Office of Small and Disadvantaged Business Utilization (OSDBU) at eight federal agencies, focusing on: (1) whether OSDBU directors report to the required agency official and, if not, the rationale for the deviation; and (2) OSDBU activities to assist small and disadvantaged businesses (SDB) in obtaining federal contracts. GAO found that: (1) except at the Defense Logistics Agency (DLA) and the Department of Energy (DOE), OSDBU directors report to the appropriate agency official as required; (2) the DLA OSDBU director reports to the Deputy Director for Acquisition, since that official is responsible for all contracting matters; (3) DLA plans to have its OSDBU director report either to the DLA head or the DLA principal deputy under its reorganization plan; (4) DOE maintains that its authorizing legislation enables the Secretary of Energy to use discretion in determining the OSDBU director's reporting level; (5) DOE plans to comply with federal OSDBU reporting requirements; (6) OSDBU activities are consistent with legal requirements for assisting SDB in obtaining federal contracts; (7) these activities include developing contracting goals, sponsoring or participating in outreach efforts, being an interagency liaison for small business procurement activities, and supervising and training agency staff who work with small businesses; and (8) several agency OSDBUs have undertaken additional initiatives to promote SDB participation in federal contracting.
Perchlorate is a primary ingredient in solid rocket propellant and has been used for decades by DOD, NASA, and the defense industry in the manufacturing, testing, and firing of rockets and missiles. On the basis of 1998 manufacturer data, EPA estimated that 90 percent of the perchlorate produced in the United States is manufactured for use by the military and NASA. Typical total production quantities average several million pounds per year. Private industry has used perchlorate to manufacture products such as fireworks, flares, automobile airbags, and commercial explosives. Perchlorate is a salt, both manufactured and naturally occurring, and is easily dissolved and transported in water. It has been found in drinking water, groundwater, surface water, and soil across the country. There is no national primary drinking water regulation for perchlorate. In 1992 and again in 1995, EPA established a provisional reference dose range for perchlorate of 0.0001 to 0.0005 milligrams per kilogram of body weight per day. This converts to a drinking water concentration of between 4 and 18 parts per billion. On the basis of the drinking water conversion, EPA identified a corresponding provisional cleanup level for perchlorate of between 4 and 18 parts per billion. Perchlorate was initially identified as a contaminant of concern by EPA in 1985, when it was found in wells at hazardous waste sites in California. Perchlorate became a chemical of regulatory concern in 1997 after California found perchlorate in the groundwater near Aerojet, a rocket manufacturer in Rancho Cordova. At the time, perchlorate could not reliably be detected below 400 parts per billion in water. In April 1997, a new analytical method capable of detecting perchlorate in drinking water at concentrations of 4 parts per billion became available. This development prompted several states to test drinking water, as well as groundwater and surface water, for perchlorate. Within 2 years, perchlorate had been detected in drinking water in 3 western states and in groundwater and surface water in 11 states across the United States. Perchlorate in drinking water is considered a more immediate concern than perchlorate in other media because it is ingested directly. In light of emerging concerns about perchlorate, EPA published in 1998 its first draft risk assessment on the environmental risks of perchlorate exposure. In February 1999, an external panel of independent scientists reviewed EPA's draft risk assessment and recommended additional studies and analyses to provide more data on perchlorate and its health effects. DOD and industry researchers conducted laboratory and field studies of the health effects of perchlorate and submitted them to EPA. On the basis of an analysis of these studies, EPA revised its draft perchlorate risk assessment and released it for peer review and public comment in January 2002. The revised draft risk assessment included a proposed reference dose equivalent to a concentration of 1 part per billion in drinking water, assuming all exposure comes from drinking water. After a second panel peer review, and some disagreement about the proposed reference dose, EPA, DOD, NASA, and the Department of Energy asked NAS, in 2003, to review EPA's perchlorate risk assessment and key studies of the health effects of perchlorate. These and other recent health studies have shown that the consumption of perchlorate affects the human thyroid by decreasing the amount of iodine absorbed.
Iodine deficiency can result in developmental delays if it occurs during pregnancy and early infancy and can result in hypothyroidism if it occurs during adulthood. The purpose of the NAS study was, in part, to assess the extent to which studies have shown negative health effects from perchlorate. In January 2005, NAS reported that existing studies did not support a clear link between perchlorate exposure and developmental effects, and NAS recommended additional research on perchlorate exposure and its effect on children and pregnant women. NAS also recommended a safe exposure level, or reference dose, for perchlorate of 0.0007 milligrams per kilogram of body weight per day. (For comparison, EPA's draft reference dose for perchlorate in its 2002 draft risk assessment, which equated to a drinking water concentration of 1 part per billion, was based on a daily dose of 0.00003 milligrams per kilogram of body weight per day.) According to NAS, the reference dose is conservative and includes safeguards to protect the most sensitive population, the fetus of the nearly iodine-deficient pregnant woman. In February 2005, EPA established a new reference dose for perchlorate on the basis of the NAS recommendation. The new reference dose is equivalent to 24.5 parts per billion in drinking water, assuming that an adult weighing 70 kilograms (or 154 pounds) consumes 2 liters of drinking water per day, and that all perchlorate ingested comes from drinking water. If EPA establishes a drinking water standard for perchlorate, however, it may be less than 24.5 parts per billion because humans may consume perchlorate from other sources, such as produce and milk. In addition to studies of perchlorate and health effects, other federal agencies, research groups, and universities have conducted or are conducting studies of perchlorate found in food and the environment. For example, the U.S. Geological Survey collected soil samples from California and New Mexico to test for the presence of perchlorate in natural minerals and materials. In 2003, an environmental research group reported that it sampled lettuce purchased in northern California and found perchlorate above 30 parts per billion in 4 of 22 samples. In September 2003, researchers from Texas Tech University sampled 8 bottles of milk and 1 can of evaporated milk and found perchlorate concentrations of up to 6 parts per billion in 7 of the milk samples and more than 1 part per billion in the evaporated milk sample. In 2004, the Food and Drug Administration sampled the following items for perchlorate: lettuce, bottled water, milk, tomatoes, carrots, cantaloupe, and spinach. Produce samples were taken from areas where officials said they believed irrigation water contained perchlorate. These data are currently being evaluated, but preliminary results show perchlorate was found in some samples. Method 314.0 is the EPA-approved method for analyzing perchlorate in drinking water under the Safe Drinking Water Act. Method 314.0 can detect perchlorate concentrations of 1 part per billion in finished (treated) drinking water but has a minimum reporting limit of 4 parts per billion. Both EPA and DOD officials have expressed concerns about using Method 314.0 to test for perchlorate in media other than drinking water, such as groundwater, surface water, and soil (where researchers mix soil with a liquid to extract the sample).
According to EPA, sediment and dissolved ions commonly found in groundwater and surface water can yield false positive results if the method is not used properly. Analysis methods other than Method 314.0 are available, and EPA has approved their use to analyze specific sites for perchlorate. Further, two new methods have been developed for the analysis of perchlorate in drinking water, and another is expected to be available in the spring of 2005. These three methods have minimum reporting limits ranging from 0.02 to 0.1 parts per billion. However, Method 314.0 has been the principal method used to test and report on the presence of perchlorate in all media, including soil, sediment, groundwater, and surface water. Various treatment technologies to remove perchlorate from groundwater and surface water are in use or under review. Biological treatment and ion exchange systems are among the technologies currently in use. Biological treatment uses microbes to destroy perchlorate by converting the perchlorate ion to nontoxic ions, oxygen, and chloride. Ion exchange systems replace the perchlorate ion with chloride, which is an ion found in table salt. Several federal environmental laws provide EPA, and states authorized by EPA, with broad authorities to respond to actual or threatened releases of substances that may endanger public health or the environment. For example, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, authorizes EPA to investigate the release of any hazardous substance, pollutant, or contaminant. The Resource Conservation and Recovery Act of 1976 (RCRA) gives EPA authority to order a cleanup of hazardous waste when there is an imminent and substantial endangerment to public health or the environment, and one federal court has ruled that perchlorate is a hazardous waste under RCRA. The Clean Water Act’s National Pollutant Discharge Elimination System (NPDES) provisions authorize EPA, which may, in turn, authorize states, to regulate the discharge of pollutants into waters of the United States. These pollutants may include contaminants such as perchlorate. The Safe Drinking Water Act authorizes EPA to respond to actual or threatened releases of contaminants into public water systems or underground sources of drinking water, regardless of whether the contaminant is regulated or unregulated, where there is an imminent and substantial endangerment to health and the appropriate state and local governments have not taken appropriate actions. Under certain environmental laws such as RCRA, EPA can authorize states to implement the requirements as long as the state programs are at least equivalent to the federal program and provide for adequate enforcement. A detailed summary of these and other laws and regulations is presented in appendix IV. In addition, some states have their own environmental and water quality laws that provide state and local agencies with the authority to monitor, sample, and require cleanup of various hazardous substances, both regulated and unregulated, that pose an imminent and substantial danger to public health. For example, the California Water Code authorizes Regional Water Control Boards to require sampling of waste discharges and to direct cleanup and abatement, if necessary, of any threat to water, which may include the release of a contaminant such as perchlorate. 
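As an illustration of the two treatment mechanisms described above, the simplified reactions below show what each does to the perchlorate ion. The stepwise microbial reduction pathway shown is the one commonly reported in the perchlorate-treatment literature; it is our sketch, not a detail taken from this report.

```latex
% Biological treatment: microbes use perchlorate as an electron acceptor,
% reducing it stepwise to chlorite, which is then split into chloride and oxygen.
\[
\mathrm{ClO_4^-} \longrightarrow \mathrm{ClO_3^-} \longrightarrow \mathrm{ClO_2^-}
\longrightarrow \mathrm{Cl^-} + \mathrm{O_2}
\]
% Ion exchange: a resin site (R+) loaded with chloride swaps it for the
% perchlorate ion, capturing perchlorate on the resin rather than destroying it.
\[
\mathrm{R^+Cl^-} + \mathrm{ClO_4^-} \longrightarrow \mathrm{R^+ClO_4^-} + \mathrm{Cl^-}
\]
```

The first mechanism destroys the ion; the second removes it from the water by exchanging it for chloride.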
DOD’s September 2003 interim policy on perchlorate sampling states that the military services shall sample for perchlorate where service officials suspect the presence of perchlorate on the basis of prior or current DOD activities, and where a complete human exposure pathway is likely to exist. The policy also states that the services shall sample for perchlorate (1) as required by the Safe Drinking Water Act’s Unregulated Contaminant Monitoring Regulation and the Clean Water Act’s NPDES program and (2) as part of cleanup conducted under DOD’s Environmental Restoration Program. While DOD’s policy requires it to sample where the two conditions of release and exposure are met, it does not specify whether the services may sample for perchlorate when requested by state agencies or EPA, apart from requirements under environmental laws and regulations. Further, except for at a few sites,DOD has not independently directed the services to clean up perchlorate. We previously reported that DOD has cleaned up perchlorate when directed to do so by EPA or a state environmental agency under various environmental laws, or when perchlorate is found on closed ranges. Various federal and state agencies have reported finding perchlorate at almost 400 sites in 35 states, the District of Columbia, and 2 commonwealths of the United States in drinking water, surface water, groundwater, and soil. Perchlorate was found at a variety of sites including public water systems, private wells, military installations, commercial manufacturers, and residential areas. The concentration levels reported ranged from 4 parts per billion to more than 3.7 million parts per billion in groundwater at 1 site, yet roughly two-thirds of sites had concentration levels at or below 18 parts per billion, the upper limit of EPA’s provisional cleanup guidance for perchlorate. Federal and state agencies are not required to routinely report perchlorate findings to EPA, and EPA does not currently have a formal process to centrally track or monitor perchlorate detections or the status of a cleanup. As a result, a greater number of sites may exist in the United States than is presented in this report. Through discussions with federal and state environmental agency officials and a review of perchlorate sampling reports, we identified 395 sites in the United States and its commonwealths where perchlorate was found in drinking water, groundwater, surface water, sediment, or soil. A table of reported perchlorate detections in the United States and its commonwealths as of January 2005 is presented in appendix II. Most of the sites and the highest levels of perchlorate were found in a small number of states. More than one-half of all sites, or 224, was found in Texas and California, where both states have conducted broad investigations to determine the extent of perchlorate. The highest perchlorate concentrations were found in 5 states—Arkansas, California, Nevada, Texas, and Utah—where 11 sites had concentrations exceeding 500,000 parts per billion. However, the majority of the 395 sites had lower levels of perchlorate. We found 249 sites where the highest concentration was equal to or less than 18 parts per billion, the upper limit of EPA’s provisional cleanup level, and 271 sites where the highest concentration was less than 24.5 parts per billion, the drinking water concentration equivalent calculated on the basis of EPA’s newly established reference dose (see fig. 1). 
According to EPA and state agency officials, perchlorate found at 110 of the sites was due to activities related to defense and aerospace, such as propellant manufacturing, rocket motor research and test firing, or explosives disposal. At 58 sites, officials said the source of the perchlorate found was manufacturing and handling, agriculture, and a variety of commercial activities such as fireworks and flare manufacturing (see fig. 2). At the remaining 227 sites, EPA and state agency officials said the source of the perchlorate found was either undetermined or naturally occurring. Further, all 105 sites with naturally occurring perchlorate are located in the Texas high plains region, where perchlorate concentrations range from 4 to 59 parts per billion. Between 2001 and 2003, under the Safe Drinking Water Act's Unregulated Contaminant Monitoring Regulation, 3,722 public drinking water systems sampled drinking water and reported the results to EPA. Of these public drinking water systems, 153, or about 4 percent, reported finding perchlorate. Located across 26 states and 2 commonwealths, these 153 systems accounted for more than one-third of the sites we identified; the perchlorate concentrations reported ranged from 4 parts per billion to 420 parts per billion and averaged less than 10 parts per billion. Only 14 of the 153 public drinking water systems had concentration levels above 24.5 parts per billion, the drinking water equivalent calculated on the basis of EPA's revised perchlorate reference dose. California had the most public water systems with perchlorate: 58 systems reported finding perchlorate in drinking water. The highest drinking water perchlorate concentration of 420 parts per billion was found in Puerto Rico in 2002. Subsequent sampling in Puerto Rico did not find any perchlorate, and officials said the source of the initial finding was undetermined. Because of the proximity of these 153 public water systems to populated areas, an EPA official estimated that about 10 million people may have been exposed to perchlorate through their drinking water. EPA officials told us that they do not know the source of most of the perchlorate found in public water systems, but that perchlorate found in 32 water systems in Arizona, California, and Nevada was likely due to previous perchlorate manufacturing in Nevada. Regional EPA and state officials told us they did not plan to clean up perchlorate found at public drinking water sites pending a decision to establish a drinking water standard for perchlorate. In some cases, officials did not plan to clean up because subsequent sampling was unable to confirm that perchlorate was present. EPA officials said the agency does not centrally track or monitor perchlorate detections, or the status of cleanup activities, other than under the Safe Drinking Water Act, where EPA collected data from public water systems for 1 year. As a result, it is difficult to determine the extent of perchlorate in the United States. EPA maintains a listing of sites known to EPA where cleanup or other response actions are under way, but the list does not include all sites because some sites have not been reported to EPA. Consequently, EPA officials said they did not always know whether other federal and state agencies found perchlorate because, as is generally the case with unregulated contaminants, there is no requirement for states or other federal agencies to routinely report perchlorate findings to EPA.
For example, except as required under specific environmental programs, DOD is not required to report to EPA when perchlorate is found on active installations and facilities. Consequently, EPA region officials in California said they did not know that the Department of the Navy found perchlorate at the Naval Air Weapons Station at China Lake. Further, even where EPA has authorized states to implement the RCRA program, states are not required to routinely notify EPA about perchlorate found under the program. For example, EPA region officials in California said the Nevada state agency did not tell them perchlorate was found at Rocketdyne, an aerospace facility in Reno, or that it was being cleaned up. EPA only learned about the perchlorate finding when the facility’s RCRA permit was renewed. We also found that communication and data sharing between EPA and state agency officials varied. Because states are not required to routinely notify EPA about perchlorate, some EPA region officials told us they contacted state agencies to ask whether new sites had been found. Some EPA region and state officials told us they participated in monthly or quarterly meetings to discuss perchlorate, and most EPA and state officials told us they had good working relationships and shared information about perchlorate. Yet a few EPA region officials told us they did not always know whether states found perchlorate, at what levels, or what actions were taken. For example, an EPA region official told us he did not know what actions were taken at three RCRA sites in Utah where perchlorate was found. Although there is no federal standard for perchlorate in drinking water or a federal cleanup standard, EPA and state environmental agencies authorized by EPA have investigated suspected sites; collected samples and analyzed for perchlorate; and, when perchlorate is found, cleaned up or limited perchlorate releases under broad authorities found in various federal environmental laws and regulations. Further, both EPA and authorized states have required responsible parties to sample and clean up perchlorate under other state laws. Most responsible parties sampled and cleaned up when required by regulation or directed by EPA or states. DOD sampled and cleaned up on the basis of its interpretation of federal and state legal requirements and its own policy. Of the 395 sites where perchlorate has been found, EPA or state environmental officials told us cleanup is under way or planned at 51 of them. We found EPA and state environmental agencies have investigated, sampled, and cleaned up perchlorate, or have required sampling and cleanup, pursuant to general authorities contained in various federal and state environmental laws and regulations. According to EPA and state agency officials, state agencies have also established levels for sampling and cleanup, and some state environmental laws provide that other authorities are to respond to contaminant releases, including perchlorate. Both EPA and state environmental agencies have used federal environmental laws, such as CERCLA, RCRA, and the NPDES provisions of the Clean Water Act, as authority to respond to releases of substances that may endanger public health or the environment, including perchlorate. EPA and the states have used such authority to sample and clean up as well as require the sampling and cleanup of perchlorate. 
For example: As part of a CERCLA review, EPA sampled groundwater near former government-owned grain storage facilities in Iowa and found perchlorate in residential and commercial drinking water wells at three sites. During subsequent sampling, EPA did not find perchlorate at two of the sites but confirmed perchlorate at the third site. EPA is providing bottled drinking water to certain persons until an uncontaminated drinking water supply becomes available. During sampling required as part of a RCRA permit, ATK Thiokol, a Utah explosives and rocket fuel manufacturer, found perchlorate. Under authority provided by RCRA, Utah required the manufacturer to install a monitoring well to determine the extent of perchlorate and take steps to prevent additional perchlorate releases. Under the NPDES program, Texas required the Navy to reduce perchlorate levels in wastewater discharges at the McGregor Naval Weapons Industrial Reserve Plant to 4 parts per billion, the lowest level at which perchlorate could be detected. According to EPA and state officials, EPA and state environmental agencies have investigated and sampled groundwater and surface water areas for perchlorate, or requested that responsible parties or others do so, pursuant to agency oversight responsibilities to protect water quality and human health. For example: EPA plans to sample five waste disposal sites in Niagara Falls, New York, to determine whether the groundwater contains perchlorate from manufacturing that took place in the area between 1908 and 1975. EPA asked Patrick Air Force Base and the Cape Canaveral Air Force Station, Florida, to sample groundwater for perchlorate near rocket launch sites. Previously, both installations inventoried areas where perchlorate was suspected and conducted limited sampling. DOD officials did not find perchlorate at Patrick Air Force Base, and, according to an EPA official, the Department of the Air Force said it would not conduct additional sampling at either installation until there is a federal standard for perchlorate. Between 1998 and 2002, Utah sampled public drinking water systems considered at risk for the presence of perchlorate because of nearby perchlorate use and found perchlorate concentrations at more than 42 parts per billion in three wells at two sites. Texas contracted with Texas Tech University to sample drinking water wells for perchlorate in 54 counties after perchlorate was found in five public water systems in the high plains region of the state. The university study found perchlorate in some drinking water wells and concluded that the most likely source was natural occurrence. When perchlorate was found, according to state and EPA officials, state agencies have taken steps to minimize human exposure or perform cleanup, or required responsible parties to do so, pursuant to the same general authorities contained in federal environmental laws and regulations. For example: Nevada is requiring Pepcon, a former perchlorate manufacturing site, to install a cleanup system to remove perchlorate from groundwater. Massachusetts closed a public well and provided bottled drinking water to students at a nearby school when perchlorate was found in a city public water system. At the request of California, United Technologies, a large rocket testing facility in Santa Clara County, stopped releasing perchlorate and cleaned up perchlorate found in the groundwater. 
Without a federal standard for perchlorate, according to EPA and state officials, at least nine states have established nonregulatory action levels or advisories for perchlorate ranging from under 1 part per billion to 18 parts per billion. States that have sampled, or required responsible parties to sample, report, and clean up, have used these advisories as the levels at which action must be taken. For example: Oregon initiates in-depth site studies to determine the cause and extent of perchlorate when concentrations of 18 parts per billion or greater are found. Nevada required the Kerr-McGee Chemical site in Henderson to treat groundwater and reduce perchlorate concentration releases to 18 parts per billion, which is Nevada’s action level for perchlorate. According to Utah officials, Utah does not have a written action level for perchlorate, but, if perchlorate concentrations exceed 18 parts per billion, the state may require the responsible party to clean up. Finally, in addition to state laws enacted to allow states to assume responsibility for enforcing federal environmental laws, other state environmental laws provide authority to respond to contaminant releases, including perchlorate. For example, EPA and state officials told us that both California and Nevada state agencies have required cleanup at some sites under state water quality laws. According to EPA and state officials, private industry and public water suppliers have generally complied with regulations requiring sampling, such as those under (1) the RCRA and NPDES permit programs, where responsible parties have been required to sample and report hazardous releases to state environmental agencies, or (2) the Safe Drinking Water Act’s Unregulated Contaminant Monitoring Regulation, which required sampling for unregulated contaminants, such as perchlorate, between 2001 and 2003. Further, according to EPA and state officials, private industry has generally responded by reducing perchlorate and cleaning up when required by regulation or directed by EPA or state agencies. DOD’s perchlorate sampling policy requires the military services to sample where the particular installation must do so, under laws or regulations such as the Clean Water Act’s NPDES permit program, or where a reasonable basis exists to suspect that a perchlorate release has occurred as a result of DOD activities and that a complete human exposure pathway is likely to exist. However, DOD’s policy on perchlorate sampling does not address cleanup. We found DOD has sampled for perchlorate on closed installations when requested by EPA or a state agency and cleaned up on active and closed installations when required by a specific environmental law, regulation, or program, such as the environmental restoration program at formerly used defense sites. For example, at EPA’s request, the U.S. Army Corps of Engineers (Corps) installed monitoring wells and is sampling for perchlorate at Camp Bonneville, a closed installation near Vancouver, Washington. Utah state officials told us DOD is removing soil containing perchlorate at the former Wendover Air Force Base in Utah, where the Corps found perchlorate in 2004. According to EPA and state officials, DOD has been reluctant to (1) sample on or near active installations because there is no specific federal regulatory standard for perchlorate or (2) sample where DOD determined the criteria to sample were not met as outlined in its policy. 
Except where there is a legal requirement to sample at a particular installation, DOD's perchlorate policy does not require sampling unless the two conditions of release and exposure are met. Utah state officials told us their agency asked the Department of the Army to sample for perchlorate at two active installations, Dugway Proving Grounds and Deseret Chemical Depot. Previously, in 1998, the Army reported that perchlorate had been used at Dugway for more than 20 years. According to state agency officials, the Army said there was not a clear potential for human exposure to perchlorate at these sites, and it would not sample unless a higher Army level approved the sampling. In February 2005, Utah officials told us Dugway Proving Grounds had not requested permission from Army headquarters to sample, and they did not know whether Deseret had requested permission to sample. In fiscal years 2004 and 2005, several provisions of federal law were enacted that encourage DOD to conduct health studies and evaluate perchlorate found at military sites. For example, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 states that the Secretary of Defense should develop a plan for cleaning up perchlorate resulting from DOD activities, when the perchlorate poses a health hazard, and continue evaluating identified sites. In October 2004, DOD and California agreed to a procedure for prioritizing perchlorate sampling at DOD facilities in California. The procedure includes steps to identify and prioritize the investigation of areas on active installations and military sites (1) where the presence of perchlorate is likely based on previous and current defense-related activities and (2) near drinking water sources where perchlorate was found. Although DOD has been urged by Congress to evaluate sites where the presence of perchlorate is suspected, DOD's September 2003 perchlorate policy continues to require sampling on active installations only where there is a suspected release due to DOD activities and a likely human exposure pathway, or where required under specific laws, such as the Clean Water Act. EPA, state agencies, and responsible parties are cleaning up, or planning to clean up, 51 of the 395 sites we identified. At 23 sites, EPA, states, and responsible parties are cleaning up or working to reduce perchlorate releases. For example, EPA required several defense, petroleum, and other companies to clean up perchlorate in Baldwin Park, California, a CERCLA site. The cleanup involves extracting and treating up to 26 million gallons of water per day, after which the water is distributed to several nearby communities. Texas required Longhorn Army Ammunition Plant, a closed DOD facility, to clean up by limiting perchlorate releases to a daily average concentration of 4 parts per billion (and a maximum of 13 parts per billion per day) under the NPDES program. Kerr-McGee Chemical, a former perchlorate manufacturer in Nevada, is cleaning up using an ion exchange system. According to officials, Nevada required the facility to clean up perchlorate under a state water law after perchlorate concentrations up to 3.7 million parts per billion were found in the groundwater. At 28 sites, EPA and state agency officials told us that federal and state governments and private parties are evaluating the extent of perchlorate and potential cleanup methodologies.
Unidynamics, an Arizona propellant manufacturer located at a CERCLA site, responded to EPA’s concern about perchlorate at the site and is investigating perchlorate treatment methods. According to officials, after Kansas asked Slurry Explosives to clean up perchlorate under a state environmental law, the manufacturer began investigating a biological method to clean up. The remaining 344 sites are not being cleaned up for a variety of reasons. The reason most often cited by EPA and state officials was that they were waiting for a federal requirement to do so. In some instances, officials said they would not clean up sites where perchlorate was naturally occurring or where subsequent sampling was unable to find perchlorate. Since 1998, EPA and DOD have sponsored a number of studies of the health risks of perchlorate using experimental, field study, and data analysis methods. We reviewed 90 of these studies and found that 44 offered conclusions or observations on whether perchlorate had a health effect. Of these, 26 studies found that perchlorate had an adverse effect. However, in some of these studies, it was unknown whether the observed adverse effects would be reversible over time. In January 2005, NAS issued its report on EPA’s draft health assessment and the potential health effects of perchlorate. The NAS report considered many of the same health risk studies that we reviewed and concluded that an exposure level higher than initially recommended by EPA may not adversely affect a healthy adult, but recommended more study of the effects of perchlorate on pregnant women and children. DOD, industry, and EPA sponsored the majority of the 90 health studies we reviewed; the remaining studies were conducted by academic researchers and other federal agencies. Of these 90 studies, 49 used an experimental design methodology to determine the effects of perchlorate on humans, mammals, fish, and/or amphibians by exposing these groups to differing dose amounts of perchlorate over varied periods of time and comparing the results with other groups that were not exposed. Twelve were field studies that compared humans, mammals, fish, and/or amphibians in areas known to be contaminated with the same groups in areas known to be uncontaminated. Both methodologies have limitations; that is, the experimental studies were generally short in duration, and the field studies were generally limited by the researchers’ inability to control whether, how much, or how long the population in the contaminated areas was exposed. Finally, 29 studies used a data analysis methodology where researchers reviewed several publicly available human and animal studies and used data derived from these studies to determine the process by which perchlorate affects the human thyroid and the highest exposure levels that did not adversely affect humans. The 3 remaining studies used another or unknown methodology. Appendix III provides data on these studies, including who sponsored them; what methodologies were used; and, where presented, the author’s conclusions or findings on the effects of perchlorate. Many of the studies we reviewed contained only research findings, not conclusions or observations, on the health effects of perchlorate. Only 44 studies had conclusions on whether perchlorate had an adverse effect. Of these, 29 studies evaluated the effect of perchlorate on development, and 18 found adverse effects resulting from maternal exposure to perchlorate. 
Adverse effects of perchlorate on the adult thyroid are difficult to evaluate because they may happen over longer time periods than can be observed in a research study. However, the adverse effects of perchlorate on development can be more easily studied and measured within study time frames. Moreover, we found different studies used the same perchlorate dose amount but observed different effects. The different effects were attributed to variables such as the study design type or age of the subjects, but the precise cause of the difference is unresolved. Such unresolved questions are one of the bases for the differing conclusions in EPA, DOD, and academic studies on perchlorate dose amounts and effects. According to EPA officials, the most sensitive population for perchlorate exposure is the fetus of a pregnant woman who is also nearly iodine-deficient. However, none of the 90 studies we reviewed considered this population. Some studies reviewed pregnant rat populations and the effect on the thyroid, but we did not find any studies that considered perchlorate’s effect on nearly iodine-deficient pregnant populations and the thyroid. In January 2005, NAS issued its report on EPA’s draft health assessment and the potential health effects of perchlorate. NAS reported that although perchlorate affects thyroid functioning, there was not enough evidence to show that perchlorate causes adverse effects at the levels found in most environmental samples. Most of the studies NAS reviewed were field studies, the report said, which are limited because they cannot control whether, how much, or how long a population in a contaminated area is exposed. NAS concluded that the studies did not support a clear link between perchlorate exposure and changes in the thyroid function in newborns and hypothyroidism or thyroid cancer in adults. In its report, NAS noted that only 1 study examined the relationship between perchlorate exposure and adverse effects on children, and that no studies investigated the relationship between perchlorate exposure and adverse effects on vulnerable groups, such as low-birth-weight infants. NAS concluded that an exposure level higher than initially recommended by EPA may not adversely affect a healthy adult. The report did not recommend a drinking water standard; however, it did recommend that additional research be conducted on perchlorate exposure and its effect on children and pregnant women. Perchlorate has been found in the groundwater, surface water, drinking water, or soil in 35 states, the District of Columbia, and 2 commonwealths of the United States where concentrations reported ranged from 4 parts per billion to millions of parts per billion. According to EPA and state environmental agency officials, a leading known cause of the perchlorate found was defense-related activities. In addition, EPA and state officials attributed the cause of the perchlorate found at more than one-half of sites to natural occurrence or undetermined sources. State and other federal agencies do not always report perchlorate detections to EPA, however, because EPA, other federal agencies, and the states do not have a standardized approach for reporting perchlorate data nationwide. As a result, a greater number of sites with perchlorate may already exist. Further, EPA does not track the status of cleanup at sites where perchlorate has been found. 
Without a formal system to track and monitor perchlorate findings and cleanup activities, EPA and the states do not have the most current and complete accounting of perchlorate as an emerging contaminant of concern, including the extent of perchlorate found and the extent or effectiveness of cleanup projects. To ensure that EPA has reliable information on perchlorate and the status of cleanup efforts, and to better coordinate lessons learned between federal agencies and states on investigating and cleaning up perchlorate, we recommend that, in coordination with states and other federal agencies, EPA use existing authorities or seek additional authority, if necessary, to establish a formal structure to centrally track and monitor perchlorate detections and the status of cleanup efforts across the federal government and state agencies. In its April 26, 2005, letter (see app. V), EPA agreed with our findings and conclusions on the extent of perchlorate in the United States and that defense-related activities have been found to be associated with perchlorate detections. However, EPA did not agree with our recommendation that it establish a formal structure to centrally track and monitor perchlorate detections and the status of cleanup efforts across the federal government and state agencies. In its letter, EPA stated that it already had significant information and data on perchlorate concentrations in various environmental media, much of it provided by other federal and state agencies as well as private parties. EPA also asserted that the development and maintenance of a new tracking system would require additional resources or the redirection of resources from other activities, and that to justify a tracking system, it would have to analyze the system's associated costs and benefits. As our report explains, however, state and other federal agencies do not always report perchlorate detections to EPA. Further, without a formal system to track and monitor perchlorate findings and cleanup activities, EPA does not have the most current and complete accounting of perchlorate as an emerging contaminant of concern. To underscore our point, in commenting on a draft of this report, DOD provided a listing of four sites where it found perchlorate between 2000 and 2004. These sites were not in EPA's database. (We added these sites to our listing in app. II.) With regard to the cost-benefit aspect of EPA's comments, we believe that EPA is misconstruing the extent of work necessary to implement a more formalized and structured system to track perchlorate. We are not proposing an elaborate new system but, instead, believe that EPA needs to work toward a more structured process than is currently in place to track and monitor perchlorate routinely. Currently, EPA's regions are spending time and effort contacting their counterparts in other federal agencies and states on an ad hoc basis to obtain more current information about perchlorate. However, this is being done without any structure or consistency related to how and when contacts are made, how frequently they are made, or what specific information is collected. As a result, we found that EPA does not have complete, current, or accurate information to track the occurrence of perchlorate—the type of information that would be needed when making a determination about the need for regulation. We continue to believe that such information is necessary and that it can be obtained without an elaborate or costly undertaking.
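As a purely illustrative sketch of how modest such a structured process could be, the record below captures the data elements this report repeatedly relies on (site, state, medium, concentration, suspected source, cleanup status, reporting agency). The field names and code are ours, drawn from the report's discussion; they do not describe any actual EPA system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerchlorateDetection:
    """One reported perchlorate finding; all field names are illustrative only."""
    site_name: str                    # installation, facility, or water system
    state: str                        # state or commonwealth where found
    medium: str                       # drinking water, groundwater, surface water, or soil
    concentration_ppb: float          # highest concentration reported, in parts per billion
    sample_date: str                  # date of the sample (ISO format)
    suspected_source: Optional[str]   # defense/aerospace, manufacturing, natural, or unknown
    cleanup_status: str               # e.g., "none planned", "under evaluation", "under way"
    reported_by: str                  # agency or party that reported the finding

# A hypothetical entry of the kind EPA regions now collect ad hoc:
example = PerchlorateDetection(
    site_name="Example Army Depot", state="TX", medium="groundwater",
    concentration_ppb=42.0, sample_date="2004-06-01",
    suspected_source="defense/aerospace", cleanup_status="under evaluation",
    reported_by="State environmental agency",
)
```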
In contrast to EPA’s view of our report’s accuracy, DOD said in its April 26, 2005, letter (see app. VI), that our report did not provide an accurate assessment of perchlorate issues and activities. DOD asserted that our report mischaracterized DOD’s response to perchlorate and cited examples of where DOD has sampled and invested in cleanup technologies, even though perchlorate is currently unregulated. We disagree with DOD’s position. Our report credits DOD with actions it has taken but also points out where DOD has not acted. For example, our report acknowledges that DOD is sampling for perchlorate as required under various environmental laws, or when certain criteria exist as specified in DOD’s sampling policy; that is, where the presence of perchlorate is suspected based on prior or current DOD activities and a complete exposure pathway to humans is likely to exist. While DOD states that it has a policy that establishes an affirmative obligation to sample and not a limitation, that view is not shared by some regulators. As we point out in our report, there have been a number of instances where EPA or state agencies asked the services to sample but service officials declined because they did not believe the conditions met with DOD’s sampling policy. As such, DOD has used its policy to limit testing for perchlorate that environmental regulators believed was necessary. With regard to DOD’s point that perchlorate is unregulated, we are well aware that many other contaminants, like perchlorate, are not specifically regulated, yet are being addressed and cleaned up as hazards under various environmental laws. DOD also stated that we did not accurately summarize the findings of the NAS study and other scientific and technical data. We believe our report accurately summarizes key information from both NAS as well as 90 other studies of the potential health risks of perchlorate, as specified by the requester of this report. Finally, DOD disagreed with our recommendation that EPA establish a more formal structure to centrally track and monitor perchlorate because it was not clear that such a system will provide added value. DOD stated that it will continue to share its information on perchlorate. As previously noted, in commenting on this report, DOD provided information on four locations where perchlorate has been found, in one case as long as 5 years ago, and which do not appear on EPA’s list of perchlorate detection sites. Whether this omission occurred as a result of a DOD or an EPA oversight is unknown, but it underscores the need for a more structured and formalized system. Both EPA and DOD provided technical comments as enclosures to their letters, which we incorporated in our report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Administrator, Environmental Protection Agency; the Secretary of Defense; and other interested parties. We will also provide copies to others upon request. In addition, the report will be available, at no charge, on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me or Edward Zadjura at (202) 512-3841. Contributors to this report are listed in appendix VII. 
We identified (1) the estimated extent of perchlorate nationwide; (2) what actions the federal government, state governments, and responsible parties have taken to clean up or eliminate the source of perchlorate found; and (3) what studies of the potential health risks from perchlorate have been conducted and, where presented, the author's conclusions or findings on the health effects of perchlorate. To provide an estimate of the extent of perchlorate in the United States, we compiled and analyzed data on perchlorate detections from the Environmental Protection Agency (EPA), the Department of Defense (DOD), the U.S. Geological Survey, and state agencies. For each site, our review shows the highest perchlorate concentration reported for all media sampled as of January 2005, although officials may have sampled the site more than once, in varying locations and media, and found differing levels of perchlorate. We also interviewed officials from EPA headquarters and regional offices, DOD, and selected state agencies to determine the accuracy and completeness of our compiled list of perchlorate detections. To identify what actions the government and private sector have taken to address perchlorate and the extent to which responsible parties have taken action to clean up and eliminate the source of perchlorate, we reviewed federal and state laws, regulations, and policies on water quality and environmental cleanup and interviewed EPA and state agency officials on their roles, responsibilities, and authorities to monitor and respond to instances in which perchlorate is found. We interviewed officials from EPA headquarters and each of its 10 regions. We also interviewed officials from state environmental agencies in California, Oregon, Texas, and Utah. We selected these states because they (1) had higher estimated numbers of sites where perchlorate was found and higher perchlorate concentration levels and/or (2) had taken steps to investigate and respond to perchlorate. During interviews with state agency officials, we discussed whether parties responsible for perchlorate had taken action to clean up and whether federal or local governments required that they stop activities causing the release of perchlorate. Finally, we reviewed and analyzed data from federal and state agencies to determine the status and extent of cleanup efforts. To identify studies of the potential health risks from perchlorate, we conducted a literature search for studies of perchlorate health risks published since 1998. We also interviewed DOD and EPA officials to obtain a list of the studies they considered important in assessing perchlorate health risks. We examined the references for each study so that we could include any other key studies that we had not obtained through the literature search and DOD and EPA interviews. We identified 125 perchlorate studies but did not review 35 of them because they were not directly related to the effects of perchlorate on the thyroid. Our review of the 90 remaining studies recorded the title; the author and publication information; the sponsor or recipient; a description of the study subjects; the type of research design and controls; and, where presented, the author's conclusions or findings about the adverse effects of perchlorate on health. We conducted our work from June 2004 to March 2005 in accordance with generally accepted government auditing standards, including an assessment of data reliability and internal controls.
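The per-site figures described above reflect a simple group-by-maximum over all samples: for each site, keep the highest concentration reported across every sampling event and medium. A minimal sketch of that aggregation follows; the data values are hypothetical.

```python
from collections import defaultdict

# (site, concentration_ppb) pairs; a site may be sampled many times,
# in different locations and media. All values here are hypothetical.
samples = [
    ("Site A", 4.0), ("Site A", 17.5), ("Site A", 6.2),
    ("Site B", 420.0), ("Site B", 0.0),
]

# Keep the highest concentration reported for each site.
highest = defaultdict(float)
for site, ppb in samples:
    highest[site] = max(highest[site], ppb)

print(dict(highest))  # {'Site A': 17.5, 'Site B': 420.0}
```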
[Appendix II: Table of reported perchlorate detections in the United States and its commonwealths as of January 2005, listing each site by name, county or city, and state, with the highest concentration reported in parts per billion (ppb). Listed sites range from public water systems and private wells to military installations and federal facilities, such as Anniston Army Depot, Alabama; Iowa Army Ammunition Plant; Los Alamos National Laboratory, New Mexico; McGregor Naval Weapons Industrial Reserve Plant, Texas; Camp Bonneville, Washington; and Allegheny Ballistics Lab, West Virginia.]

[Appendix III: Table of the perchlorate health studies reviewed, showing each study's title, authors, sponsor, experimental controls (for example, dose, duration, age, sex, and weight), and the authors' findings or conclusions about the adverse effects of perchlorate on health (for example, "adverse effects to development indicated," "nonadverse effects indicated," or "effects not studied").]
The Resource Conservation and Recovery Act (RCRA) was enacted as an amendment to the Solid Waste Disposal Act to create a framework for the management of hazardous and nonhazardous solid waste. It authorizes EPA to control hazardous waste from the point where waste is generated through its transportation, treatment, storage, and disposal. EPA regulations define hazardous waste to include waste specifically listed in the regulation as well as waste defined as "characteristic waste." Characteristic hazardous waste is waste that is ignitable, corrosive, reactive, or toxic. A federal district court in California ruled, in part, that perchlorate is a hazardous waste under RCRA because it is ignitable under certain conditions. RCRA requires owners and operators of facilities that treat, store, and dispose of hazardous waste, including federal agencies, to obtain permits specifying how they will safely manage waste. Under RCRA's corrective action provisions, facilities seeking or holding RCRA permits can be required to clean up their hazardous waste contamination. Under RCRA, EPA has the authority to order a cleanup of hazardous waste when there is an imminent and substantial endangerment to public health or the environment. EPA may authorize states to administer their own programs in lieu of the federal program, as long as these programs are equivalent to and consistent with the federal program and provide for adequate enforcement.
Under RCRA, state agencies have required RCRA permit holders to sample for and report on perchlorate detections and prevent additional releases. The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), commonly known as Superfund, governs the cleanup of releases or threatened releases of hazardous substances, pollutants, or contaminants. CERCLA's definition of a hazardous substance includes substances regulated under various other environmental laws, including RCRA, the Clean Air Act, the Clean Water Act, and the Toxic Substances Control Act. Under section 120 of CERCLA, the federal government is subject to and must comply with CERCLA's requirements to the same extent as any nongovernmental entity. CERCLA provides broad authority to EPA to respond to releases or threatened releases of hazardous substances, pollutants, or contaminants that may endanger public health or the environment. Under these provisions, DOD has responded to perchlorate found on military installations and facilities. CERCLA establishes prohibitions and requirements for contaminated sites; provides for liability for hazardous substances at these sites; and provides for the use of the Hazardous Substances Superfund, a trust fund that pays for cleanup when, for example, a responsible party cannot be identified. The law authorizes short-term removal actions, which address releases or threatened releases requiring prompt response, and long-term response actions, which permanently reduce the danger associated with a release. EPA identifies the most hazardous sites, those requiring long-term action, by listing them on the National Priorities List. The Clean Water Act authorizes EPA to regulate the discharge of pollutants into waters of the United States. EPA may authorize states to carry out a state program in lieu of the federal program if the state program meets the requirements of the Clean Water Act, including providing for adequate enforcement. The act defines a pollutant to include virtually all waste material. The act provides for the establishment of national discharge limitations, water quality standards, and a permit program and has provisions for addressing oil and toxic substance spills. Covered private parties as well as federal facilities must comply with the requirements of the act. According to EPA, because pollutants are defined broadly in the act to include most waste material, perchlorate would likely fall within this definition. Under the Clean Water Act's National Pollutant Discharge Elimination System (NPDES) program, facilities discharging pollutants into waters of the United States from point sources are required to obtain an NPDES permit from EPA or authorized states. NPDES permits include specific limits on the quantity of pollutants that may be discharged and require monitoring of those discharges to ensure compliance. Industrial, municipal, and other facilities must obtain permits to discharge specific pollutants if their discharges go directly to waters of the United States. Sites with NPDES permits are required to routinely sample and report to state regulatory agencies on the release of specified pollutants, which may include contaminants such as perchlorate. Under section 107 of the Federal Facility Compliance Act of 1992, EPA was required, in consultation with DOD and the states, to issue a rule identifying when military munitions become hazardous waste under RCRA and to provide for protective storage and transportation of that waste.
Under the rule issued by EPA, used or fired military munitions become waste subject to RCRA regulation if, among other things, (1) they are transported off-range for waste management purposes or (2) they or their constituents are recovered, collected, and then disposed of by burial on or off a range. Unexploded, used, and fired military munitions are known sources of perchlorate. Under RCRA, as amended by the Federal Facility Compliance Act, EPA maintains that DOD installations may be required to sample and monitor off-range for perchlorate, as well as for other contaminants associated with military munitions, where EPA has evidence that the contaminants are creating an imminent and substantial endangerment to health or the environment. The Safe Drinking Water Act authorizes EPA to issue national primary drinking water regulations setting maximum contaminant-level goals and maximum contaminant levels for drinking water that must be met by public water systems. EPA may authorize states to carry out primary enforcement authority for implementing the Safe Drinking Water Act if, among other things, the states adopt drinking water regulations that are no less stringent than the national primary drinking water regulations. EPA has set standards for approximately 90 contaminants in drinking water; however, most of the more than 200 chemical contaminants associated with munitions use, including perchlorate, are currently unregulated under the Safe Drinking Water Act. The 1996 amendments to the Safe Drinking Water Act required EPA to (1) establish criteria for a monitoring program for unregulated contaminants, that is, those for which a maximum contaminant level has not been established, and (2) publish a list of contaminants chosen from those not currently monitored by public water systems. EPA's regulation, referred to as the Unregulated Contaminant Monitoring Regulation, was issued in 1999 and supplemented in 2000 and 2001. The purpose of the regulation was to determine whether a contaminant occurs at a frequency and in concentrations that warrant further analysis and research on its potential effects, and possibly to establish future drinking water regulations. The first step in the most recent program required public water systems serving more than 10,000 customers—and a sample of 800 small public water systems serving 10,000 or fewer customers—to monitor drinking water for perchlorate and 11 other unregulated contaminants over a consecutive 12-month period between 2001 and 2003 and to report the results to EPA. According to EPA, large public water systems provide drinking water to about 80 percent of the U.S. population served by public water systems. In addition to the individuals named above, John Delicath, Christine Frye, Alan Kasdan, Karen Keegan, Roderick Moore, Edith Ngwa, James Rose, and Rebecca Shea made key contributions to this report.
Perchlorate, a primary ingredient in propellant, has been used for decades in the manufacture and firing of rockets and missiles. Other uses include fireworks, flares, and explosives. Perchlorate has been found in drinking water, groundwater, surface water, and soil in the United States. The National Academy of Sciences (NAS) reviewed studies of perchlorate's health effects and reported in January 2005 that certain levels of exposure may not adversely affect healthy adults but recommended more studies be conducted on the effects of perchlorate exposure in children and pregnant women. GAO determined (1) the estimated extent of perchlorate in the United States, (2) what actions have been taken to address perchlorate, and (3) what studies of perchlorate's health risks have reported. Perchlorate contamination has been found in water and soil at almost 400 sites in the United States where concentration levels ranged from a minimum reporting level of 4 parts per billion to millions of parts per billion. More than one-half of all sites were in California and Texas, and sites in Arkansas, California, Texas, Nevada, and Utah had some of the highest concentration levels. Yet, most sites had lower levels of contamination; roughly two-thirds of sites had concentration levels at or below the Environmental Protection Agency's (EPA) provisional cleanup standard of 18 parts per billion. Federal and state agencies are not required to routinely report perchlorate findings to EPA, and EPA does not centrally track or monitor perchlorate detections or the status of cleanup. As a result, a greater number of contaminated sites than we reported may already exist. Although there is no specific federal requirement to clean up perchlorate, EPA and state agencies have used broad authorities under various environmental laws and regulations, as well as state laws and action levels, to sample and clean up and/or require the sampling and cleanup of perchlorate by responsible parties. Further, under certain federal and state environmental laws, private industry may be required to sample for contaminants, such as perchlorate. According to EPA and state officials, private industry and public water suppliers have generally complied with regulations requiring sampling and agency requests to sample. The Department of Defense (DOD) has sampled and cleaned up perchlorate in some locations when required by laws and regulations, but the department has been reluctant to sample on or near active installations under other circumstances. Except where there is a specific legal requirement, DOD's perchlorate sampling policy requires the services to sample only under certain conditions. Cleanup is planned or under way at 51 of the almost 400 perchlorate-contaminated sites identified to date. Since 1998, EPA and DOD have sponsored a number of perchlorate health risk studies using varying study methodologies. We reviewed 90 of these studies that generally examined whether and how perchlorate affected the thyroid. About one-quarter concluded that perchlorate had an adverse effect. In January 2005, NAS reported on the potential health effects of perchlorate and concluded that a total exposure level from all sources, higher than that initially recommended by EPA (a dose equivalent to 1 part per billion in drinking water, assuming that all exposure came from drinking water) may not adversely affect a healthy adult. 
On the basis of NAS' report, EPA revised its reference dose to a level that is equivalent to 24.5 parts per billion in drinking water (if it is assumed that all exposure comes only from drinking water). The reference dose is not a drinking water standard; it is a scientific estimate of the total daily exposure level from all sources that is not expected to cause adverse effects in humans, including the most sensitive populations.
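The arithmetic linking a reference dose to a drinking water concentration is straightforward, and the short sketch below reproduces the 24.5 parts-per-billion figure. The reference dose value (0.0007 milligrams of perchlorate per kilogram of body weight per day) and the default exposure assumptions (a 70-kilogram adult drinking 2 liters of water per day) are supplied here for illustration; they are standard EPA defaults rather than figures stated in this report.

```python
# Convert a reference dose (mg per kg of body weight per day) into the
# equivalent drinking water concentration, assuming all exposure comes
# only from drinking water.
REFERENCE_DOSE_MG_PER_KG_DAY = 0.0007  # assumed EPA perchlorate reference dose
BODY_WEIGHT_KG = 70                    # assumed default adult body weight
WATER_INTAKE_L_PER_DAY = 2             # assumed default daily water intake

# In water, 1 mg/L corresponds to 1 part per million; multiply by 1,000
# to express the result in parts per billion (ppb).
concentration_mg_per_l = (REFERENCE_DOSE_MG_PER_KG_DAY * BODY_WEIGHT_KG
                          / WATER_INTAKE_L_PER_DAY)
concentration_ppb = concentration_mg_per_l * 1000

print(f"Drinking water equivalent level: {concentration_ppb:.1f} ppb")
# Prints: Drinking water equivalent level: 24.5 ppb
```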
Since the early 1900s, female life expectancy has exceeded male life expectancy, resulting in women outnumbering men in the older age groups. Although gender differences in life expectancy have been decreasing, women age 65 and over continue to outnumber men age 65 and over. This trend is projected to continue over the next 4 decades. Further, the population age 65 and over is expected to more than double from 2010 to 2050. The population of women among the "oldest-old"—those 85 and over—is also projected to grow. Today, of those age 65 and over, one-sixth of women and one-tenth of men are among the oldest-old, and these shares are projected to grow to almost one-quarter of women and one-fifth of men by 2050. Women's workforce participation surged over the last half of the 20th century. Among women ages 25 to 54, the rate of labor force participation jumped from 42 percent by the end of the 1950s to about 74 percent by the late 1980s. The rate continued to grow in the 1990s but at a slower pace. Over the last decade, the rate declined slightly from its peak of 76.8 percent in 1999, and was 74.7 percent in 2011. Labor force participation rates have varied by generation, with women born in the baby boom generation (individuals born from 1946 to 1964) much more likely to be in the workforce than women of preceding generations. In particular, participation rates have increased significantly for women ages 55 to 64 (see fig. 1). Although the composition of retirement income—the proportion of income coming from different sources—varies greatly for individual households, Social Security benefits, pension income, and earnings make up the bulk of income for the U.S. population age 65 and over. Social Security provides retirement benefits to eligible workers, based on their work and earnings history. Social Security also provides benefits to eligible workers who become disabled before reaching retirement age, as well as to spouses, widow(er)s, and children of eligible workers. Although all Social Security benefits are based upon a common formula, they are calculated in different ways for each beneficiary type. The level of the monthly benefit is adjusted for inflation and varies depending on the age at which the beneficiary chooses to begin receiving benefits. Generally, beneficiaries may begin receiving retirement benefits at age 62; however, the payments will be higher if they wait to claim benefits at their full retirement age, which varies from 65 to 67, depending on the beneficiary's birth year. The monthly retirement benefit continues to rise for workers who delay benefits beyond their full retirement age, up to age 70. Employees and employers pay payroll taxes that finance Social Security benefits. However, Social Security faces a long-term financing shortfall resulting largely from lower birth rates and longer life spans. According to the Social Security Trustees, the Social Security Trust Funds could be exhausted by 2033 and unable to pay full benefits. Pension income from employer-provided retirement plans falls into two broad categories: defined benefit (DB) and defined contribution (DC) plans. DB plans typically provide retirement benefits to each retiree in the form of an annuity that provides a monthly payment for life, the value of which is typically determined by a formula based on particular factors specified by the plan, such as salary or years of service. Under DC plans, workers and employers may make contributions into individual accounts. Workers can also save for retirement through an individual retirement account (IRA).
IRAs allow workers to receive favorable tax treatment for making contributions to an individual account. At retirement, participants' distribution options vary depending on the type of pension plan. Private sector DB plans must offer participants a benefit in the form of a lifetime annuity (either immediate or deferred). An annuity can help to protect a retiree against risks, including the risk of outliving one's assets (longevity risk) and, when an inflation-adjusted annuity is provided, the risk of inflation diminishing one's purchasing power. Some DB plans also give participants a choice to take a lump sum cash settlement (distribution) or roll over funds to an IRA, instead of taking a lifetime annuity. In contrast, DC plan sponsors are not required to offer a lifetime annuity and more often provide participants with a lump sum distribution as the only option. Other options for DC participants may include leaving money in the plan, taking a partial distribution, rolling their plan savings into an IRA, or purchasing an annuity; annuities are typically available only outside of the plan. In addition, whether a pension plan is a DB or a DC plan has implications for whether a spouse is entitled to the pension's benefits. The Employee Retirement Income Security Act of 1974 (ERISA) requires that DB plans include a survivor's benefit, called a qualified joint and survivor annuity. Thus, after a worker with a DB plan dies, the surviving spouse continues to receive an annuity, but typically at a reduced level. A qualified joint and survivor annuity may only be waived through a written spousal consent. Under most DC plans, the plan is written so that the employee may, during his or her lifetime, make withdrawals from the account or roll over the balance into an IRA without spousal consent, provided that the employee's vested account balance is payable in full on death to the surviving spouse. Over the past quarter-century, the percentage of private sector workers participating in employer-sponsored pension plans has held steady at about 50 percent. Although some workers choose not to participate in an employer-sponsored pension plan, the large majority of nonparticipating workers do not have access to one. In addition, over the last 3 decades, the U.S. retirement system has undergone a major transition from one based primarily on DB plans to one based on DC plans, increasing workers' exposure to economic volatility and usually shifting the burden of saving to individual workers, making them more reliant on their own decision making. As we have previously reported, from 1990 to 2008, the number of active participants in private sector DB plans fell by 28 percent, from about 26 million to about 19 million. Over the same period, the number of active participants in DC plans increased by 90 percent, from about 35 million to about 67 million. DC plans generally do not offer annuities, so retirees are left with increasingly important decisions about managing their retirement savings to ensure they have income throughout retirement. These decisions may be more difficult to make in times of economic volatility. For example, two recent recessions—one beginning in March 2001 and ending in November 2001 and the other beginning in December 2007 and ending in June 2009—resulted in major stock indices falling dramatically.
The long-term effects of financial market fluctuations on retirement income security are uncertain, but the effects may vary based on factors such as age, type of pension plan, and employment status. Employment status, in particular, can pose serious challenges for retirement security. As we recently reported, long-term unemployment can reduce an older worker's future monthly retirement income in numerous ways, such as by reducing the number of years the worker can accumulate DB plan retirement benefits or DC plan savings, by motivating workers to claim Social Security at an earlier age, and by leading workers to draw down retirement savings to pay for expenses during unemployment. From 1998 to 2009, working women surpassed men in their likelihood of having an employer that offered a pension plan, but were slightly less likely to be eligible for and to participate in those plans. However, this gap narrowed over time. In fact, by 2009, the same proportion of working women and men ultimately participated in some type of plan (either a DB or a DC plan), as shown in figure 2. Nonetheless, women's contribution rates to DC plans remained lower than those of men. While working men and women were just as likely to have employers that offered pension plans in 1998, by 2009, women were more likely than men to work for employers that offered pension plans (see fig. 3). This may be due to the sectors and industries in which women worked. For example, a greater proportion of women than men worked in the public and nonprofit sectors rather than the for-profit sector; these sectors have higher proportions of workers with access to plans offered by employers. Women were also more likely to work in the education and health industries—industries that have higher proportions of workers with access to plans offered by employers. In contrast, men had higher rates of self-employment over this period, and self-employed individuals were much less likely to have retirement plans. In addition, from 1998 to 2009, the proportion of working women and men with employers that offered pension plans declined after 2003, possibly reflecting the decline in the number of employers offering DB plans. Nevertheless, the proportion of women working for employers offering DC plans increased, rising from 41 to 49 percent (see fig. 3). With the exception of 1998, women were more likely to work for employers that offered DC plans than were men. The composition of women's and men's retirement income did not vary greatly over the last decade despite changes in the economy and pension system, largely because their main income sources—Social Security and DB plans—were shielded from fluctuations in the financial market. However, women, especially widows and those 80 years and over, depended on Social Security benefits for a larger percentage of their income than men. In contrast, women received a lower share of their income from earnings than men did. Women age 65 and over also had less retirement income on average and higher rates of poverty than men in that age group. Specifically, for the population age 65 and over, women's median income was approximately 25 percent lower than that of men in the same age group for all years. Moreover, women in this age group were nearly twice as likely as men to be living in poverty. The composition of household income for women and men age 65 and over fluctuated only slightly from 1998 to 2010, despite changes in the economy and the pension system (see fig. 8).
The composition of household income did not fluctuate drastically, largely because Social Security and DB benefits comprised nearly three-quarters of household income for women and slightly less (around 70 percent) for men, providing them with guaranteed monthly income for life. Women tended to receive a higher proportion of household income from Social Security. In fact, in 2010, 16 percent of women age 65 and over depended solely on Social Security for income, compared to 12 percent of men. At the same time, the share of income from earnings increased slightly for men and women, but was consistently lower for women than for men. Furthermore, the share of income from DC plans was very low (1 to 2 percent) across the entire period for both men and women. This is because the vast majority of people age 65 and over did not report receiving any income from regular distributions from DC plans. As shown in figures 9 to 11, in 2010, the composition of household income for individuals age 65 and over also varied by demographic group. Among marital-status categories, widowed women depended on Social Security benefits for a larger percentage of their income (58 percent) than other women (see fig. 9). In fact, about 21 percent of all widowed women depended on Social Security as their sole source of income. Separated women and men received higher shares of income from earnings, and married women and men received relatively higher shares of their income from DB plans. As shown in figure 10, among different age groups, women age 80 and over received the highest share of their income from Social Security (61 percent). In fact, about 20 percent of them depended on Social Security as their sole source of income. Men in the youngest age category (65 to 69) received a higher share of their income from earnings (31 percent) relative to other groups, while individuals in the oldest age categories received the smallest share of income from earnings, likely reflecting the declining ability to work at older ages. Finally, among racial and ethnic groups, White and Black women and men age 65 and over received the highest share of income from Social Security (see fig. 11). In contrast, Asians and Hispanics tended to receive a lower share of their incomes from Social Security. Asian men and women received a disproportionately higher share of income from earnings relative to other racial and ethnic categories. White and Black women and men received higher shares of income from DB plans, compared to Hispanics and Asians. Women age 65 and over had consistently lower median incomes than men across age and most race groups over time. Over the last decade, the median incomes of women age 65 and over were approximately 25 percent lower than those of their male counterparts. Median incomes did, however, vary by demographic category (see fig. 12). Demographic groups with the lowest median incomes included women who were unmarried (especially those who had been separated or never married), over the age of 80, or Black or Hispanic. In addition, a greater proportion of women age 65 and over lived in households with incomes below the poverty line than men in the same age group. Consistent with their relatively lower median incomes, the demographic groups with the highest poverty rates were women who were not married, over the age of 80, or non-White (see fig. 13). In contrast, married people and White men had the lowest poverty rates.
When women nearing or in retirement—women over age 50—became divorced, widowed, or unemployed, the effects on their households' total assets and income were detrimental, according to our analysis (see table 1). Further, divorce and widowhood had more pronounced effects for women than for men. These effects may be contributing to elderly women's higher poverty rates and lower levels of income compared to men's. We also found, not surprisingly, that a decline in health after age 50 had a negative effect on household assets and income. Lastly, we examined the effect of caring for elderly parents on income and assets, but we did not find statistically significant negative relationships. All of these effects may not be generalizable to younger cohorts, as women's labor force participation and, correspondingly, their assets and income have changed over the last several decades. As shown in figure 14, divorce or separation after age 50 had substantial negative effects on women's total household assets and income. For both women and men, assets fell by about 40 percent with a divorce or separation. The effects were less substantial for those living in households where at least one member was age 65 or over, but these women and men still lost about one-third of their total assets. The effects on income were more pronounced for women than for men. Women's income fell by 41 percent, nearly twice the decline for men (23 percent). The effects were largest for women living in households where all members were age 64 or younger; for these women, income fell by 44 percent. However, while divorce had very detrimental effects, we found that, for women ages 51 and over, divorce or separation was less prevalent than widowhood. Specifically, for those age 85 and over in our sample, 4 percent of women and 2 percent of men had been divorced or separated. Not only did women's total household assets and income decline substantially with widowhood, but the effects were more pronounced for women than for men (see fig. 15). For example, while men's income fell 22 percent after widowerhood, women's income fell by an even greater amount—37 percent. The effects were larger for women living in younger households than for women living in older households. Specifically, women in households where all members were age 64 or younger experienced a 31 percent decrease in assets and a 47 percent decrease in income. Adding to these effects, widowhood was a much more common experience for women than men in our sample. In fact, women were at least twice as likely as men to become widowed between any two survey periods. Consequently, 70 percent of women age 85 and over were widowed, compared to only 24 percent of men age 85 and over. Similar to becoming widowed, unemployment had negative effects on total household assets and income, although the effects were similar for women and men (see fig. 16). Women and men saw their assets and income decline by about 7 to 9 percent. The effects on income were most acute for households where at least one member of the household was age 65 or over. For these households, men's assets fell by 14 percent and their income fell by 12 percent. For women, there was not a significant decline in assets, but their income fell by 13 percent. In addition, older workers may have difficulty finding another job. However, unemployment was not very prevalent in the HRS sample, in part because many survey respondents were retired.
On average, only 1 percent of men and women reported being out of work and actively looking for a job. For men and women ages 51 to 64, this percentage rose slightly to 2 percent. As shown in figure 17, a decline in self-reported health status also had negative effects on total household income and assets, although to a lesser degree than widowhood, divorce, and unemployment. For all households in our sample, income fell by 4 percent for women and 3 percent for men when self-reported health status changed from excellent, very good or good to fair or poor. The effects of a decline in health on assets varied by household type. The differences between women and men were the largest for younger households, where all members were age 64 or younger. For example, the loss of assets was greater for men (13 percent) compared to women (5 percent). Although the effects of a decline in health were smaller than the effects of some of the other life events in our analysis, more individuals experienced this event than any other. Almost 30 percent of individuals ages 65 to 84 reported being in poor health (see table 2). For individuals ages 85 and over, 40 percent reported being in poor health. Interestingly, as shown in table 2, women and men suffered from poor health at similar rates across age categories. Further, we found that, between any two HRS surveys, about 2 percent of both women and men reported entering a period of poor health. Lastly, we found that providing elderly parents with financial assistance or helping parents with basic activities of daily living (i.e., bathing, dressing, and eating) had a slightly positive effect on household assets and income. However, often these effects were not significantly different from zero, possibly because of limitations in our data and methods. In addition, we found that only a small percentage of the sample provided these types of assistance to their parents. Also, women and men age 51 through 64 were much more likely to provide assistance than women and men age 65 and over. But, as the baby boomers age, more children may be called upon to help their parents financially or with basic activities. Through our interviews with experts and our literature review, we found that a range of existing policy options could help improve retirement income security for women. Our analysis focuses on how women would be affected by these policy options. While each of these options would be available for both women and men, they could help address some of the specific challenges women face in ensuring a secure retirement. For example, some options would expand the use of existing tax incentives, encouraging women to save more. Another set of options would expand access to and strengthen spousal protections for retirement savings. These options could increase women’s retirement savings and preserve their retirement income if they become divorced or widowed. Other sets of options could motivate women nearing retirement to work longer and save more, ensure lifetime retirement income, or enhance benefit adequacy. These options could help shield women from the effects of divorce, widowhood, and unemployment and decrease their risk of living in poverty. All of the options have cost implications that would need to be considered prior to implementation. Moreover, as with federal spending programs, any option that results in reduced or deferred federal tax revenue may require an offset, such as raising revenue elsewhere or cutting spending. 
While the federal government could bear some of these costs, workers and plan sponsors could be responsible for others. Also, although some of the options could have positive effects on women on their own, there could be an offsetting effect. If the plan sponsor, for example, is responsible for the increased cost of sponsorship and makes changes to the plan to offset those increased costs, women may not ultimately benefit from the policy option. Lastly, some of these options may require legislative changes. Some of the policy options we identified could expand the use of existing tax incentives for individuals to save for retirement during their working years (see table 3). These options could help lower- and moderate-income workers, as well as workers who take time out of the workforce to care for family members. Since women have lower earnings than men, on average, and are more likely to take time out of the workforce to care for family members, women may especially benefit from these options. However, pension experts are concerned that women may not be as financially literate as men, hindering them from taking full advantage of options for saving for retirement. Experts also identified a set of policy options that would offer new opportunities to accumulate earnings credits for Social Security (see table 4). These options could enhance the retirement security of workers who experience a period of unemployment or who take time out of the workforce to care for family members. For example, counting unemployment insurance payments as creditable earnings under Social Security may be particularly helpful for women who become unemployed later in life and experience a notable decrease in their assets and income. However, because they would extend eligibility or increase benefits, these options would increase costs for Social Security and decrease its solvency. Other policy options could either expand access to retirement savings in DC plans and IRAs or strengthen spousal protections for retirement savings (see table 5). These options could address a variety of challenges women face, including their lower levels of income in retirement. In addition, they could preserve retirement income after a divorce or after becoming widowed. For example, requiring that a wife provide consent whenever a husband takes a distribution from his DC savings would protect the wife's access to household income in retirement. However, these options could increase costs for plan sponsors. For example, requiring notarized spousal consent whenever a husband takes a distribution could increase the administrative costs that must be paid by plan sponsors. Experts identified three policy options that could motivate women nearing retirement to remain in the workforce and delay claiming Social Security benefits, thereby giving them more time to save for retirement and increasing their Social Security benefits (see table 6). Because women tend to have less income in retirement than men, and because elderly women face higher poverty rates than elderly men, these options for boosting retirement savings and benefits may improve women's overall retirement income security. For example, the full retirement age for Social Security could be increased, thus providing workers who are able to work with an incentive to keep doing so—potentially saving more for retirement in the process. However, each of these options has disadvantages.
In the case of increasing the full retirement age, this option may not prove to be effective because women may not be able to work longer or may choose to exit the workforce before the full retirement age. They would, in turn, suffer reductions in Social Security income. Experts also identified several policies that would ensure lifetime retirement income for women (see table 7). Women may especially benefit from these options, given that they (1) have lower levels of retirement income than men, (2) are more likely to live longer, and (3) are also more likely to become widowed. For example, Treasury recently proposed modifying the required minimum distribution rules so that individuals could use part of their retirement savings to purchase a longevity annuity. This option would provide older women with guaranteed additional income, which may be helpful if they live long lives or outlive a spouse. These options, however, often have cost implications for either federal tax revenue or plan sponsors. For example, if individuals purchased longevity annuities using tax-qualified retirement savings, the tax revenue generated from withdrawing these savings would be deferred until the annuity started paying out. There are also a number of policy options that could enhance Social Security benefits for vulnerable groups at risk of not having sufficient income or assets in retirement, including widows, divorced women, low-income women, and women age 85 and over (see table 8). For example, increasing the Social Security survivor's benefit to 75 percent of the deceased worker's benefit would provide widows with more monthly income, helping to keep some women out of poverty. However, all of these options would increase existing costs or introduce new costs and, in turn, would decrease the solvency of the system. To retirement security experts, our findings paint a familiar if disconcerting picture. Although increases in women's labor force and retirement plan participation have led to a marginal improvement in women's prospects for achieving a more secure retirement, our report also highlights the substantial risks women continue to face in accumulating adequate retirement income. Yet, despite the differential risks women face, retirement security in America continues to be a national dilemma that transcends gender differences. It is important to note that much of the relative improvement in women's retirement security has been a consequence of deterioration in men's retirement security. Recent economic volatility, coupled with the continued shift toward defined contribution plans, exposes all workers to more financial risk than previous generations faced. Further, older workers' financial security is increasingly dependent on individual choices regarding how much to save, how to invest those savings, at what age to retire, and how to make those savings last throughout retirement. Much of the total workforce continues to approach retirement age with no traditional pension. Unchecked, this problem will only grow in severity. Nevertheless, women face a unique set of circumstances, which warrant special attention. In particular, our findings show that the disruptions that occur as a result of later-in-life events, such as divorce and widowhood, can be financially devastating for women. In addition, women's greater likelihood of being single, higher life expectancy, and lower average earnings continue to make saving for retirement and avoiding late-life poverty a challenge.
The challenges facing women’s retirement income security do not lack for potential resolutions. In fact, our discussions with experts identified a number of policy options that would improve retirement income security for women. These options range from changes to Social Security to altering the private pension system. While these options involve tradeoffs and difficult choices, they have the potential to improve the retirement income security of men as well. Ultimately, such efforts provide opportunities to improve the retirement security of many Americans. We provided a draft of this report to the Department of Labor, the Department of the Treasury, and the Social Security Administration for review and comment. While none of the agencies provided official comments, each provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, the Commissioner of Social Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix II. To analyze factors that affect women’s retirement security, we examined (1) how women’s access to and participation in employer-sponsored retirement plans compare to men’s and how they have changed over time; (2) how women’s retirement income compares to men’s and how the composition of their income has changed with economic conditions and trends in pension design; (3) how events occurring later in life affect women’s retirement income; and (4) what policy options are available to help increase women’s retirement income security. This appendix provides a detailed account of the information and methods we used to answer these questions. Section 1 describes the key information sources we used. Sections 2 through 4 describe the empirical methods we used to answer questions 1 through 3 respectively and the results of supplementary analyses. To answer our questions, we obtained information from a variety of sources including two nationally representative surveys—the Survey of Income and Program Participation (SIPP) and the Health and Retirement Study (HRS)—the academic literature on retirement security, and a range of experts in the area of women’s retirement security. Table 9 summarizes the data sources used to answer each question. This section provides a description of our data sources and the steps we took to ensure their reliability. To answer Questions 1 and 2, we analyzed data collected through the SIPP, a nationally representative survey conducted by the U.S. Census Bureau that collects detailed information on income sources and pension plan coverage, among many other areas. The survey is conducted in a series of national panels, with sample sizes ranging from approximately 14,000 to 36,700 interviewed households. The duration of each panel ranges from 2 ½ years to 4 years. Within each panel, the data are collected in a series of “waves” which take place in 4-month cycles. 
Within each wave, Census administers a core survey consisting of questions that are asked at every interview, and several modules relating to a particular topic. We used data from the core survey and the topical module on retirement and pension coverage from the last four SIPP panels, which began in 1996, 2001, 2004, and 2008, respectively. For all but the 2008 panel, the topical module on retirement and pension coverage was administered in Wave 7. For objective 1, we matched core data from Wave 3 of the 2008 panel with the topical module data, which was also administered in Wave 3. This ensured that the demographic data used in the analysis for that objective would match the time frame of the topical module data. However, to obtain the most up-to-date income data for objective 2, we used core data from Wave 7, which was the most recently available data as of October 2011. Table 10 shows the waves and questionnaires we used to answer each objective. It also shows the years that the data were collected during each panel and wave listed. The bolded years correspond to the years of data that are presented in the figures in objectives 1 and 2. In comparison to other nationally representative surveys, the SIPP had several main advantages. First, the SIPP collects separate information on defined benefit (DB) and defined contribution (DC) plans. Other surveys, such as the Current Population Survey, do not distinguish between income from and participation in DB and DC plans. Second, the SIPP sample is larger than comparable surveys, such as the Survey of Consumer Finances (SCF). Consequently, it is possible to produce point estimates for demographic subcategories with a higher degree of reliability. Further, in comparison to the SCF, which oversamples wealthy households, the SIPP oversamples lower-income households—arguably an important component of an analysis of income security. Despite its advantages, the SIPP has two limitations for our analysis. First, as with most survey data, SIPP data are self-reported. This can be problematic for the reporting of data on income sources and pension plan participation. For example, respondents might incorrectly report that they participate in a pension plan when they do not participate in one. Second, despite the fact that SIPP differentiates between participation in a DB or DC plan, it does not contain full information on whether an individual's employer offers a DB plan. To answer question 3—on the effects of events occurring later in life on women's retirement income security—we analyzed data collected through the HRS, a nationally representative survey primarily sponsored by the National Institute on Aging and conducted by the Institute for Social Research at the University of Michigan. This longitudinal survey collects data on individuals over age 50 and contains detailed information on health, marital status, assets, income, and care for elders. Respondents were first surveyed in 1992, when they were ages 51 to 61, and have been resurveyed every 2 years since. Additional cohorts were added in later years to maintain the representation of the older population. Table 11 presents the cohorts that are included in the HRS sample. The data in our analysis span from the initial 1992 survey through the early release data for 2010, the most current data available. Our analysis follows over 30,000 individuals from the HRS sample.
One of the main advantages of the HRS is that the same households are interviewed at different points of time, allowing us to examine the correlation of changes in life events to changes in household assets and income. Further, RAND, a research organization, cleans and processes the HRS data to create a user-friendly longitudinal dataset that has consistent and intuitive naming conventions, model-based imputations for missing wealth and income data, and spousal counterparts of most individual-level variables. We used these data for our analysis. However, there are three limitations for our analysis. First, the women currently in the HRS survey may have very different retirement experiences from women in the workforce today due to changes in demographic trends and workforce participation. Second, as with the SIPP, data from the HRS are self-reported. Third, total household assets cannot be broken out at the individual level. For each of the datasets described above, we conducted a data reliability assessment of selected variables by conducting electronic data tests for completeness and accuracy, reviewing documentation on the dataset, or interviewing knowledgeable officials about how the data are collected and maintained and their appropriate uses. When we learned that particular fields were not sufficiently reliable, we did not use them in our analysis. For example, we chose not to use data from the SIPP Topical Module on Annual Income and Retirement Accounts because many of the fields in that survey are not edited by the Census Bureau. For the purposes of our analysis, we found the variables that we ultimately reported on to be sufficiently reliable. To gain an understanding of the challenges women face in attaining a secure retirement and policy options that could enhance women’s retirement security, we conducted an extensive literature review and interviewed a range of experts. To identify existing studies, we conducted searches of various databases, such as EconLit, Electronic Collections Online, ProQuest, Academic OneFile, WorldCat, and Policy File. From these sources, we identified 128 articles that appeared in journals since 2007 and were relevant to our research objective on policy options that could enhance women’s retirement security. From the articles identified in the preliminary search, we reviewed article abstracts, when available, to determine which articles contained information germane to our report and reviewed those articles. In addition, we reviewed articles that were collected during the previous GAO study on women’s retirement security that contained information relevant to our empirical analyses, described below, and reviewed articles that were suggested to us by the experts we interviewed. We performed these searches and identified articles from May 2011 to October 2011. To supplement the literature review, we conducted interviews with experts. To ensure that we obtained a balanced perspective, we interviewed experts with a range of perspectives and from different types of organizations including government, academia, advocacy groups, and the private sector. We also consulted several experts in government and academia on technical issues related to our analysis. 
Specifically, we interviewed agency officials at the departments of the Treasury and Labor, the Social Security Administration, and the Bureau of the Census; academic experts at the Employee Benefits Research Institute, Heritage Foundation, University of Pennsylvania, Stanford University, Urban Institute, and Wellesley College; and industry experts and advocates from the American Council on Life Insurers, Anna Rappaport Consulting, Financial Engines, the Institute for Women's Policy Research, the National Women's Law Center, AARP, the Pension Rights Center, the National Academy of Social Insurance, Social Security Works, and the Women's Institute for a Secure Retirement. To determine the proportion of men and women that (1) work for an employer that offers a plan, (2) are eligible for a plan, and (3) participate in a plan, we used data from the SIPP topical module on retirement and pension plan coverage. Specifically, we constructed five dummy variables using a combination of various questions in the SIPP. Table 12 shows the information we used to construct each variable. For each of these variables, we used SIPP individual-level weights to compute point estimates and, in conjunction with other factors, calculate the standard errors of those estimates so that we could accurately account for the complex survey design. We consulted statisticians from the U.S. Bureau of the Census on the appropriate use of these weights. To better understand the factors that might explain gender differences in each of these variables, we developed a series of empirical models. Following the literature, we controlled for the following factors in our models: (1) demographic characteristics, including gender, age, marital status, children present in the household, single parenthood, race and ethnicity, citizenship, immigrant status, and education level; and (2) occupational characteristics, including part-time employment status, self-employment status, years of tenure, work experience, occupation, industry, sector, union status, and size of employing firm. To estimate these models, we used logistic regression—an appropriate technique when the dependent variable is binary, that is, has two categories, such as participating in a plan or not participating in a plan. Logistic regression also allows the coefficients to be converted into odds ratios, which are described below. We conducted the modeling analyses in a series of steps in which the sample of men and women included in the analysis was conditional on the previous step. Specifically, the first analysis examined the probability of working for an employer that offered a pension plan for all workers in the sample. The second analysis examined the probability of being eligible for a plan for those men and women who worked for an employer offering a plan. The third analysis examined the probability of participating in a plan for those who were eligible for their employer-sponsored plan. In conjunction with understanding the factors associated with each dependent variable in our models, it is essential to also understand how women and men differ in those factors. Taken together, the information from the models and a comparison of men's and women's characteristics enable us to understand what factors make women more or less likely to be employed by an employer that offers a plan, be eligible for the plan, and participate in the plan.
For example, if we know that women are disproportionately more likely to work part-time and that part-time status is an important factor associated with plan participation, we can infer that women's higher rates of part-time status might contribute to their lower rates of plan participation. Table 13 compares the characteristics of men and women for each of the factors that we control for, across each year of the study period. Generally, the characteristics of men and women in the working population did not change dramatically over the study period. Correspondingly, when we compare men and women in each year, several relationships between them were consistent across all of the study years. In terms of demographic characteristics, women were more likely than men to be widowed and divorced. Women were also more likely to have children present in the household, be single parents, and work part time. A higher proportion of men than women were Hispanic, and this proportion increased over the study period. In terms of occupational characteristics, several gender differences persisted across the study years. Women consistently had higher levels of education and were more likely to work in the public or nonprofit sectors. Men were more likely to work in the private sector, be self-employed, have longer tenure at their current position, have more work experience, and be in a union. Although the occupational and industry categories in the SIPP data changed midway through the study period, the distributions of men and women across occupations and industries were generally consistent for the last 2 study years. Specifically, the top three occupations for women were office and administrative support; sales and related services; and education, training, and library services, with 20, 10, and 10 percent of women, respectively, working in these occupations in 2009. Men tended not to be as concentrated in just a few occupations. In 2009, the highest proportions of men were employed in management (9 percent), sales and related occupations (8 percent), construction and extraction (8 percent), and transportation and material moving (8 percent). Similarly, in 2009, the top three industries for women were health care and social assistance (21 percent), educational services (14 percent), and retail trade (10 percent). For men, the top three industries in that year were manufacturing (13 percent), construction (9 percent), and retail trade (9 percent). Table 14 shows the results of two models that analyze factors associated with the probability of working for an employer that offers (1) any type of pension plan (DB or DC) or (2) a DC plan. The first column presents the variables that were included in each model. The third and fifth columns present the odds ratios estimated for each variable in the model. The interpretation of the odds ratio for a particular variable depends on whether the variable has only two, or more than two, categories. For dichotomous (or dummy) variables, odds ratios that are statistically significant and greater than 1.00 indicate that individuals with that characteristic are more likely to work for an employer that offers a plan. For example, an odds ratio of 1.25 for women would mean that women are 1.25 times more likely to work for an employer that offers a plan. Odds ratios that are significantly lower than 1.00 indicate that individuals with that characteristic are less likely to work for an employer that offers a plan.
As shown in the body of the report, before controlling for differences between men and women in demographic and occupational characteristics, a greater proportion of women worked for employers that offered plans in 2009. Interestingly, table 14 shows that, after accounting for demographic and occupational characteristics, women have slightly lower odds than men of working for an employer that offers a DC plan. In fact, the positive gender effect for women is eliminated when we control for occupational characteristics using a statistical model (results not shown). In other words, women's higher likelihood of working for an employer that offers a plan is largely due to the types of occupations and industries in which women work. (The odds ratios for the specific occupations and industries, which are too numerous to discuss here, are listed in the table.)

We found that several other factors are associated with the likelihood of working for an employer that offers a plan. While the details are shown in the table, the factors that were positively associated with working for an employer that offers either a DB or DC plan (and that were statistically significant at the 95 percent confidence level) included age; being divorced (relative to married); education level; U.S. citizenship; working in the government or nonprofit sector (in comparison to the private sector); having 5 to 9 years of work experience (in comparison to having less than 5 years); union membership; job tenure; and firm size. Factors that were negatively associated with working for an employer that offers a plan included being never married (in comparison to being married); being a single parent; being Black, Hispanic, or Asian (in comparison to being White and non-Hispanic); being a naturalized immigrant; working part time; and being self-employed. While the results across both models were generally consistent, some results were significant in one model but not the other.

Table 15 shows the results of a model we estimated to analyze factors associated with whether an individual is eligible for his or her employer's plan. It is presented in the same format as table 14. As shown in the body of the report, women had lower rates of plan eligibility across all 4 study years. The results of the model show that, even after controlling for demographic and occupational differences between men and women, women had significantly lower rates of eligibility in 2009. Perhaps most interesting is the odds ratio for part-time status, which indicates that part-time workers are approximately one-third as likely to be eligible for their employer's plan as full-time workers. In addition, work experience and tenure are significantly and positively related to eligibility, and union status is also positively associated with plan eligibility.
Table 16 shows the results of two models we estimated to analyze factors associated with the probability of participating in (1) any type of pension plan (DB or DC) or (2) a DC plan. Again, it is presented in the same format as tables 14 and 15. As shown in the body of the report, before controlling for differences between men and women in demographic and occupational characteristics, a smaller proportion of women participated in an employer-sponsored pension plan. Our analysis shows that the gender differences in plan participation are largely accounted for by differences between men and women in demographic and occupational characteristics. Similar to our other models, we identified a number of factors that are related to plan participation. The factors that were positively related to participating in either a DB or a DC plan (and that were statistically significant at the 95 percent level) included age; education level; being Asian (relative to being White); U.S. citizenship; working in the nonprofit or government sector (relative to the private sector); work experience; union membership; and tenure. Factors that were negatively related to participating in a plan included being a single parent; working part time; and being Black or Hispanic. A number of industries and occupations, too numerous to list, were statistically significant, as shown in the table below.

To compute median incomes and income composition for men and women in different demographic groups, we used information from the core questionnaire of the SIPP data (as described above). We used the last month of the 4-month reporting period (within each "wave") on the assumption that individuals recollect income from the most recent month more accurately than income from 4 months ago. To obtain an annual income estimate, we multiplied the monthly reported income by 12. This method might result in overstated estimates of earnings if workers do not work all 12 months of the year.

The poverty rate was computed using a SIPP variable that indicates the poverty threshold for an individual's household. The Census Bureau uses a set of money-income thresholds that vary by family size and composition to determine who is in poverty. If a family's total income is less than the family's threshold, then that family and every individual in it is considered in poverty. The official poverty thresholds do not vary geographically, but they are updated for inflation using the Consumer Price Index (CPI-U). The official poverty definition uses money income before taxes and does not include capital gains or noncash benefits (such as public housing, Medicaid, and food stamps).

All of our income composition, median, and poverty level estimates were computed at the individual level, using household-level information. In other words, median incomes were computed by applying all household income to each individual in the household and taking the median across all individuals within a certain category (e.g., gender, or gender and race). For married individuals, this means that spousal income was included in these estimates. Correspondingly, we used SIPP individual-level weights to compute our point estimates and, in conjunction with other factors, to calculate the standard errors of those estimates so that we could accurately account for the complex survey design. These patterns held across all the years we analyzed.
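The following is a minimal sketch of how such person-level weighted medians and poverty rates might be computed from household-level income. The records, column names, and weights are fabricated for illustration; they are not SIPP data.

```python
import numpy as np
import pandas as pd

# Hypothetical person-level records carrying household income and weights.
df = pd.DataFrame({
    "monthly_hh_income": [2500, 2500, 4000, 1200],
    "person_weight":     [1500, 1500, 2100,  900],
    "poverty_threshold": [22000, 22000, 15000, 11000],  # annual, per household
})

# Annualize the last reported month, as described above.
df["annual_hh_income"] = df["monthly_hh_income"] * 12

def weighted_median(values, weights):
    """Median of values where each observation counts weight times.
    (Survey practice often interpolates; this simple version is illustrative.)"""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, cum[-1] / 2.0)]

median_income = weighted_median(df["annual_hh_income"], df["person_weight"])

# A person is in poverty if household income falls below the threshold.
df["in_poverty"] = df["annual_hh_income"] < df["poverty_threshold"]
poverty_rate = np.average(df["in_poverty"], weights=df["person_weight"])
print(median_income, poverty_rate)
```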
The gender differences in these estimates may also reflect demographic patterns of life expectancy and the ages of marital partners. Since women typically marry older men, and women typically have longer life expectancies than men, it is not surprising that a sample of older individuals will include fewer married women than married men, as the spouses of older women are more likely to have died than the spouses of older men. (It is also possible that the survey response rate was higher for married men than for married women.) For this reason, the sample of married older women could differ from the sample of married older men, so their household characteristics—including income—may not be the same. Further, the difference between the ages of the spouses of married men and married women could also result in different estimates of median income and income composition. For example, if women tended to be married to older men, the income composition of the household might be skewed away from earnings and toward Social Security. Conversely, if men tended to be married to younger women, a higher share of income might come from earnings.

We estimated the relationship between events that occur later in life and income and assets using fixed-effects panel regressions. The main advantage of fixed-effects models is that they are designed to isolate the effect of the event from all other permanent characteristics of the individual. We estimated our models using data from the HRS, which follows households over time. Our analysis focuses on life events that occur after age 50, as the HRS follows individuals age 51 and over.

We described our sample in two ways: by computing average characteristics at different ages and by computing the proportion of individuals whose status changed between periods of observation (e.g., the proportion that became divorced between period 1 and period 2). Table 17 uses the first method and presents some descriptive statistics on the women and men in our sample. Specifically, it shows the average values of characteristics at different ages for women and men.

Real assets and real income. At ages 51 to 64, women and men have similar levels of assets. However, after age 65, men's average level of household assets becomes larger than the average level for women. Men's average levels of household income are higher than women's at every age level.

Marital status. The rates of marriage and widowhood are relatively comparable between women and men before age 65. For example, 6 percent of women and 1 percent of men younger than age 65 were widowed. However, at older ages, more women were estimated to be widowed than men.

Poor health. Individuals were classified as being in poor health based on a survey question of self-reported health, which asked the individual to rate his or her health on a scale from 1 to 5, where 1 is excellent and 5 is poor. An answer of "fair" or "poor" was classified as being in poor health. As table 17 shows, rates of poor health were comparable between women and men in all age groups.

Unemployment. This variable captures the percentage of individuals that responded to a labor force question as being "unemployed." It is important to note that this is not equivalent to an unemployment rate, as individuals classified as not in the labor force were included in the denominator. Women and men were equally likely to report being unemployed.

Helping parents financially or with daily activities. These variables capture the percentage of households that provided financial help or assistance with basic daily activities to the parents of either the respondent or the respondent's spouse. Again, these rates were comparable for women and men.
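As a minimal sketch of the second method (the status-change proportions shown in table 18, below), wave-to-wave transitions might be computed from a person-wave panel as follows. The data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical person-wave panel with a widowhood indicator.
df = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 2],
    "wave":      [1, 2, 3, 1, 2, 3],
    "widowed":   [0, 0, 1, 0, 0, 0],
})

df = df.sort_values(["person_id", "wave"])
# Status change between consecutive waves: +1 = became widowed,
# -1 = widowed to married, 0 = no change (NaN for each person's first wave).
df["change"] = df.groupby("person_id")["widowed"].diff()

# Proportion of observed wave-to-wave transitions that were into widowhood.
share_became_widowed = (df["change"] == 1).sum() / df["change"].notna().sum()
print(share_became_widowed)  # 0.25: 1 of 4 observed transitions
```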
Table 18 uses the second method to show the proportion of women and men that had a life-event status change during the period of analysis. As table 18 shows:

Divorce/separation. During the period in which both members of the household were less than 65, less than 1 percent of men experienced divorce or separation between any two waves. For women, the proportion was negative, indicating that more women went from divorced or separated to married than from married to divorced or separated.

Widowhood. During the earlier period, about 1 percent of women became widowed between any two waves. This proportion increased to more than 2 percent as the household aged and was twice the rate for men.

Decline into poor health. The rate of health decline was similar for women and men. On average, approximately 2 percent of women and men reported a decline in health from one period to another.

Unemployment. Very few women and men reported a change to or from unemployment in our data.

Helping parents financially or with daily activities. The proportion of women's and men's households providing personal or financial assistance fell as the household aged. This may be because older households were less likely to have living parents requiring assistance.

Percent change in real assets. In the earlier period, assets for women and men increased at a rate of about 6 percent per 2-year period. In contrast, the rate of asset growth became negative as the household aged.

Percent change in real income. In both younger and older households, incomes fell at a rate of approximately 5 percent per 2-year period, on average.

In order to examine whether the effects of certain events occurring later in life differ by gender, we used fixed-effects regression models. For example, we estimated how changes in health lead to changes in household assets and income. Researchers use the fixed-effects method because many of the differences in income and wealth between households are consistent over time (poorer households tend to stay poor, and richer households tend to stay rich). The fixed-effects method sweeps away these "time-invariant" differences, thus better isolating the effect of health or other life events from other aspects of households that could explain differences.

In addition to the fixed-effects analysis, we also developed "cross-section" regression models. In these models, we attempted to control for a set of demographic and other variables, such as education and age, that could be correlated with life events, household assets, and household income. A challenge to this approach is that many factors that affect assets and income are unobserved, which can lead to mistaken conclusions. For example, a low wage may be connected both with poor health and with low asset accumulation. As a result, while the researcher is attempting to estimate the effect of health on income, what is actually measured may partly reflect the effect of income on health. In general, we found that effects in our cross-section models were larger in magnitude than in the fixed-effects models, but the cross-section models did not fit the data as well as the fixed-effects models.

Specifically, we estimated variations of the following equation, separately by gender:

(1) log(Household Assets or Income)_it = α_i + α_t + β·(poor health)_it + χ·(marital status)_it + δ·(other control variables)_it

where α_i and α_t indicate fixed effects for the individual and the wave, β is the effect of poor health, and χ and δ are the effects of marital status and the other control variables.
By including a dummy variable for each wave, we attempted to control for all national-level changes that could have affected assets and income and that could also have been associated with the life events. Therefore, β can be interpreted as the effect of poor health, measured as the percent difference in average assets between periods in which an individual reports poor versus not-poor health. Because of the additional controls, this average percent difference is measured relative to the changes over time and relative to the other time-variant measures captured, such as changes in marital status. However, while some of the life events are likely associated with the passage of time, the regression does not assume that relationship. For example, if an individual switches from poor health to good health, the fixed-effects regression will also use that transition to estimate the size of the effect. Similarly, the fixed-effects regression will use transitions from married to widowed, as well as from widowed to married, to estimate the effect of widowhood.

Other control variables that we included were age (measured as date of wave minus birth year), race and education (categorical), cohort of the HRS survey, Census region, and region of birth (12 categories, including non-U.S.). In general, in the cross-section models, we found that education was positively related to assets and income, while minority status was negatively related. With some slight variation, we based our choice of control variables on Coile and Milligan. (See Courtney Coile and Kevin Milligan, "How Household Portfolios Evolve After Retirement: The Effect of Aging and Health Shocks," The Review of Income and Wealth, vol. 55, no. 2 (Malden, MA: June 2009).)

In order to estimate effects in terms of percentages, we estimated the effects on the log of assets or income. In addition, we transformed the coefficients to more closely approximate percent changes by taking the exponent of each estimated coefficient and subtracting 1. Regression variables were weighted by household weights. A limitation of this approach is the possibility of endogenous relationships. For example, if an individual's health declined because income fell, and not the other way around, that bias could affect our findings.

Some of the life events we examined were likely correlated with changes in household structure, such as changes in marital status. However, if the income of a household falls when an individual leaves, the remaining individuals may not be worse off in terms of resources, because the household now requires fewer resources to meet its needs. To address this, we adjusted the estimated effects for household size: the household's income and assets were scaled by the square root of the number of individuals in the household. The rationale for using the square root is that the effect of adding or removing a member diminishes with household size (going from 1 member to 2 has a larger effect than going from 9 to 10). In addition, this analysis estimated the effect of an individual's own life event on household assets or income; we did not attempt to determine to what extent a spouse's life event (for married individuals) may have affected household assets or income.
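To make the estimation mechanics concrete, the following is a minimal sketch of a two-way fixed-effects estimate of equation (1). It is not the report's actual code: the file and column names (person_id, wave, hh_assets, hh_size, and the life-event indicators) are hypothetical, and the household weights used in the report are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical person-wave panel; each row is one individual in one wave.
df = pd.read_csv("hrs_panel.csv")

# Square-root equivalence scale, then log, as described above.
df["log_assets"] = np.log(df["hh_assets"] / np.sqrt(df["hh_size"]))

# Life-event indicators plus wave dummies (the time fixed effects).
rhs = ["poor_health", "widowed", "divorced", "unemployed"]
waves = pd.get_dummies(df["wave"], prefix="wave", drop_first=True, dtype=float)
X = pd.concat([df[rhs], waves], axis=1)

# Individual fixed effects via the within-transformation: demean every
# variable within person, which sweeps away time-invariant differences.
Xd = X.groupby(df["person_id"]).transform(lambda s: s - s.mean())
yd = df["log_assets"].groupby(df["person_id"]).transform(lambda s: s - s.mean())

fit = sm.OLS(yd, Xd).fit(cov_type="cluster",
                         cov_kwds={"groups": df["person_id"]})

# Convert the log-point coefficient to an approximate percent effect,
# exp(beta) - 1, as described above.
pct = np.exp(fit.params["poor_health"]) - 1
print(f"Poor health: {pct:.1%} change in equivalized household assets")
```

Because every variable is demeaned within person, permanent differences across individuals drop out, so β is identified only from within-person transitions, such as a switch into poor health or widowhood.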
Table 19 contains the effects of the first event we analyzed: divorce. We analyzed the effect of divorce on household assets and income, both with and without controlling for the number of people in the household. Across almost all the groups and specifications, the effect of divorce is to reduce assets and income, with larger effects for women than for men. Adjusting for household size tended to reduce the magnitude of the effects.

Effect on assets. Divorce tended to reduce assets for both women and men, with comparable sizes of effects. For example, among all households, the decline in assets associated with divorce was 41 percent for women and 39 percent for men. When the size of the household was adjusted for, the size of the effect declined but was still statistically significant.

Effect on income. Divorce reduced income for both women and men, with larger effects for women than for men. For example, among all households, the decline in income associated with divorce was 41 percent for women and 23 percent for men. When household size was adjusted for, the effects were much smaller in magnitude.

Table 20 contains the results for widowhood. As with divorce, we analyzed the effect of widowhood on household assets and income, both with and without controlling for the number of people in the household. Across almost all the groups and specifications, the effect of widowhood is to reduce assets and income, with larger effects for women than for men. Adjusting for household size tended to reduce the magnitude of the effects.

Effect on assets. Widowhood reduced assets for both women and men, with larger effects for women. For example, among all households, the decline in assets associated with widowhood was 32 percent for women and 27 percent for men. However, part of this effect seems to be associated with the size of the household. Among households in which at least one member was 65 or over, the decline in assets was not significant when household size was adjusted for.

Effect on income. Widowhood reduced income for both women and men, with larger effects for women. For example, among all households, the decline in income associated with widowhood was 37 percent for women and 22 percent for men. Again, part of this effect seems to be associated with the size of the household; when household size was adjusted for, the effects were much smaller in magnitude.

As shown in table 21, unemployment tended to reduce assets and income, with comparable effects for women and men. The effects did not seem to dissipate when household size was adjusted for.

Effect on assets. Unemployment reduced assets for both women and men, with comparable effects. For example, among all households, the decline in assets associated with unemployment was 7 percent for women and 7 percent for men. An exception to this pattern was among households in which at least one member was 65 or over: for those individuals, the decline in household assets was only 2 percent for women but 15 percent for men.

Effect on income. Unemployment reduced income for both women and men, with comparable effects. For example, among all households, the decline in income associated with unemployment was 6 percent for women and 8 percent for men.

In general, across the specifications, a decline into poor health tended to reduce assets and income, with comparable effects for women and men (see table 22). One notable difference, however, was the larger estimated effect of men's poor health on assets, but only in households in which both members were less than 65 years of age.
Specifically, we found that for individuals living in these households, poor health in men was associated with a drop in household assets of 13 percent, compared with 5 percent for women. In general, the magnitude of the effect on assets was in the 10 percent range for both women and men and was statistically significant. The effects on income were about half that magnitude but followed the same direction as the effects on assets. There was little difference in the effects when the levels of assets and income were estimated with a correction for the size of the household.

As shown in table 23, the results for helping parents either financially or with basic daily activities—eating, dressing, and bathing—were not as consistently significant and negative as those for the other life events. In the fixed-effects regressions, the effect of personal assistance did not appear to be statistically significant, while the effect of financial assistance tended to be significantly positive. It may be that households with more assets or income are more likely to provide assistance, which could explain these findings. There was little difference in the effects when the levels of assets and income were estimated with a correction for the size of the household.

To further understand these relationships, we explored the characteristics of those helping their parents with the basic daily activities of bathing, dressing, and eating. We found that only 2 percent of the sample provided both financial help and help with basic daily activities. Further, those in the labor force (i.e., working or unemployed and looking for work) were more likely to help their parents with basic daily activities than those retired or otherwise not in the labor force.

Michael Collins, Assistant Director; Erin M. Godtland, Senior Economist; and Jennifer Gregory, Senior Analyst, led the engagement. In addition, James Bennett, Benjamin Bolitzer, David Chrisinger, Cynthia Grant, Jean Lee, Grant Mallie, Ashley McCall, Michael Morris, Rhiannon Patterson, Mark Ramage, James Rebbe, Douglas Sloane, Jeff Tessin, Shana Wallace, and Erin White made valuable contributions.
Elderly women, who comprise a growing portion of the U.S. population, have historically been at greater risk of living in poverty than elderly men. Several factors contribute to the higher rate of poverty among elderly women, including their tendency to have lower lifetime earnings, to take time out of the workforce to care for family members, and to outlive their spouses. Other factors could also affect older women's financial security, including the economic downturn and changing trends in pension plan offerings.

In light of these circumstances, GAO was asked to examine (1) how women's access to and participation in employer-sponsored retirement plans compare to men's and how they have changed over time, (2) how women's retirement income compares to men's and how the composition of their income—the proportion of income coming from different sources—changed with economic conditions and trends in pension design, (3) how later-in-life events affect women's retirement income security, and (4) what policy options are available to help increase women's retirement income security. To answer these questions, GAO analyzed data from two nationally representative surveys, conducted a broad literature review, and interviewed a range of experts in the area of retirement security. GAO is making no recommendations. GAO received technical comments on a draft of this report from the Department of Labor, the Department of the Treasury, and the Social Security Administration and incorporated them as appropriate.

Over the last decade, working women's access to and participation in employer-sponsored retirement plans have improved relative to men's. Indeed, from 1998 to 2009, women surpassed men in their likelihood of working for an employer that offered a pension plan, largely because the proportion of men covered by a plan declined. Furthermore, as employers have continued to terminate their defined benefit (DB) plans and have switched to defined contribution (DC) plans, the proportion of women who worked for employers that offered a DC plan increased. Correspondingly, women's participation rates in DC plans increased slightly over this same period while men's participation fell, narrowing the participation difference between men and women to 1 percentage point. At the same time, however, women contributed to their DC plans at lower levels than men.

Although the composition of income for women age 65 and over did not vary greatly over the period—despite changes in the economy and pension system—women continued to have less retirement income on average and to live in poverty at higher rates than men in that age group. The composition of women's income varied only slightly, in part because their main income sources—Social Security and DB benefits—were shielded from fluctuations in the market. Women, especially widows and those age 80 and over, depended on Social Security benefits for a larger percentage of their income than men. For example, in 2010, 16 percent of women age 65 and over depended solely on Social Security for income, compared with 12 percent of men. At the same time, the share of household income women received from earnings increased over the period but was consistently lower than men's. Moreover, women's median income was approximately 25 percent lower than men's over the last decade, and the poverty rate for women in this age group was nearly twice men's in 2010.
For women approaching or in retirement, becoming divorced, widowed, or unemployed had detrimental effects on their income security. Moreover, divorce and widowhood had more pronounced effects for women than for men. For example, women's household income, on average, fell by 41 percent with divorce, almost twice the size of the decline that men experienced. With widowhood, women's household income fell by 37 percent, while men's declined by only 22 percent. Unemployment also had a detrimental effect on income security, though the effects were similar for women and men; household assets and income fell by 7 to 9 percent.

A range of existing policy options could address some of the income security challenges women face in retirement. For example, some would expand existing tax incentives to save for retirement, while others would improve access to annuities. All of these options have advantages and disadvantages that would need to be evaluated prior to implementation. For example, increasing Social Security benefits for widows could provide additional income for women who have few options to increase their retirement savings. However, increasing benefits would also increase costs to the Social Security program and have implications for its long-term solvency.
LTCI helps pay for the costs associated with long-term care services, which can be expensive. However, the number of LTCI policies sold has been relatively small—about 9 million as of the end of 2002, the most recent year for which data were available. To receive benefits under an LTCI policy, the consumer must not only obtain the covered services but must also meet what are commonly referred to as benefit triggers. Most policies provide benefits under two circumstances: (1) the consumer cannot perform a certain number of activities of daily living (ADL)—such as bathing, dressing, and eating—without assistance, or (2) the consumer requires supervision because of a cognitive impairment. In addition, benefit payments do not begin until the policyholder has met the benefit triggers for the length of the elimination period. Elimination periods establish the amount of time a policyholder must receive services before his or her insurance will begin making payments, for example, 30 or 90 days. Determining whether a consumer has met the benefit triggers can be complex, and companies' processes for doing so vary. In the event that a consumer's claim for benefits is denied, the consumer generally can appeal to the insurance company. If the company upholds the denial, the consumer can file a complaint with the state insurance department or can seek adjudication through the courts.

Many factors affect LTCI premium rates, including the benefits covered and the age and health status of the applicant. For example, companies typically charge higher premiums for comprehensive coverage than for policies without such coverage, and consumers pay higher premiums the higher the daily benefit amount and the shorter the elimination period. Similarly, premiums typically are more expensive the older the policyholder is at the time of purchase. Company assumptions about interest rates on invested assets, mortality rates, morbidity rates, and lapse rates—the number of people expected to drop their policies over time—also affect premium rates. A key feature of LTCI is that premium rates are designed—though not guaranteed—to remain level over time. While under most states' laws insurance companies cannot increase premiums for a single consumer because of individual circumstances, such as age or health, companies can increase premiums for entire classes of individuals, such as all consumers with the same policy, if new data indicate that expected claims payments will exceed the class's accumulated premiums and expected investment returns.

Setting LTCI premium rates at a level adequate to cover future costs has been a challenge for some companies. Because LTCI is a relatively new product, companies lacked and may continue to lack sufficient data to accurately estimate the revenue needed to cover costs. For example, lapse rates have proven lower than companies anticipated in initial pricing, which increased the number of people likely to submit claims. As a result, many policies were priced too low, and premiums subsequently had to be increased, leading some consumers to cancel coverage.

Oversight of the LTCI industry is largely the responsibility of states. Through laws and regulations, states establish standards governing LTCI and give state insurance departments the authority to enforce those standards. Many states' laws and regulations reflect standards set out in model laws and regulations developed by NAIC.
These models are intended to assist states in formulating their laws and policies to regulate insurance, but states can choose to adopt them or not. Beyond implementing pertinent laws and regulations, state regulators perform a variety of oversight tasks that are intended to protect consumers from unfair practices. These activities include reviewing policy rates and forms to ensure that they are consistent with state laws and regulations; conducting market conduct examinations—where an examiner visits a company to evaluate practices and procedures and checks those practices and procedures against information in the company’s files; and responding to consumer complaints. Although oversight of the LTCI industry is largely the responsibility of states, the federal government also plays a role in the oversight of LTCI. For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established federal standards that specify the conditions under which LTCI benefits and premiums can receive favorable federal income tax treatment. Under HIPAA, a tax-qualified policy must cover individuals certified as needing substantial assistance with at least two of the six ADLs for at least 90 days due to a loss of functional capacity, having a similar level of disability, or requiring substantial supervision because of a severe cognitive impairment. Tax-qualified policies under HIPAA must also comply with certain provisions of the NAIC LTCI model act and regulation in effect as of January 1993. The Department of the Treasury, specifically the Internal Revenue Service (IRS), issued regulations in 1998 implementing some of the HIPAA standards. However, according to IRS officials, the agency generally relies on states to ensure that policies marketed as tax qualified meet HIPAA requirements. In 2002, 90 percent of LTCI policies sold were marketed as tax qualified. In recent years, many states have made efforts to improve oversight of rate setting, though some consumers remain more likely to experience rate increases than others. Since 2000, NAIC estimates that more than half of all states have adopted new rate setting standards. States that adopted new standards generally moved from a single standard focused on ensuring that rates were not set too high to more comprehensive standards designed primarily to enhance rate stability and provide increased protections for consumers. The more comprehensive standards were based on changes made to NAIC’s LTCI model regulation in 2000. While regulators in most of the 10 states we reviewed told us that they expect these more comprehensive standards will be successful, they noted that more time is needed to know how well the standards will work. Regulators from the states in our review also use other standards or practices to oversee rate setting, several of which are intended to keep premium rates more stable. Despite states implementing more comprehensive standards and using other oversight efforts intended to enhance rate stability, some consumers may remain more likely to experience rate increases than others. Specifically, consumers may face more risk of a rate increase depending on when they purchased their policy, from which company their policy was purchased, and which state is reviewing a proposed rate increase on their policy. Since 2000, NAIC estimates that more than half of states nationwide have adopted new rate setting standards for LTCI. 
States that adopted new standards generally moved from the use of a single standard designed to ensure that premiums were not set too high to the use of more comprehensive standards designed to enhance rate stability and provide other protections for consumers. Prior to 2000, most states used a single, numerical standard when reviewing premium rates. This standard—called the loss ratio—was included in NAIC's LTCI model regulation. For all policies where initial rates were subject to this loss ratio standard, proposed rate increases remain subject to the same standard. While the loss ratio standard was designed to ensure that premium rates were not set too high in relation to expected claims costs, over time NAIC identified two key weaknesses in the standard. First, the standard does not prevent premium rates from being set too low to cover the costs of claims over the life of the policy. Second, the standard provides no disincentive for companies to raise rates and leaves room for companies to gain financially from premium increases. In identifying these two weaknesses, NAIC noted that there have been cases where, under the loss ratio standard, initial premium rates proved inadequate, resulting in large rate increases and significant loss of LTCI coverage as consumers allowed their policies to lapse.

To address the weaknesses in the loss ratio standard, as well as to respond to the growing number of premium increases occurring for LTCI policies, NAIC developed new, more comprehensive model rate setting standards in 2000. These more comprehensive standards were designed to accomplish several goals, including improving rate stability. Among other things, the standards established more rigorous requirements companies must meet when setting initial LTCI rates and rate increases, which several state regulators told us may result in higher, but more stable, premium rates over the long term. The more comprehensive standards were also designed to inform consumers about the potential for rate increases and provide protections for consumers facing rate increases. Table 1 describes selected rate setting standards added to NAIC's LTCI model regulation in 2000 and the purpose of each standard in more detail.

Although a growing number of consumers will be protected by the more comprehensive standards going forward, as of 2006 many consumers had policies that were not protected by these standards. Following the revisions to NAIC's LTCI model in 2000, many states began to replace their loss ratio standard with more comprehensive rate setting standards based on NAIC's changes. NAIC estimates that by 2006 more than half of states nationwide had adopted the more comprehensive standards. However, many consumers have policies not protected by the more comprehensive standards, either because they live in states that have not adopted these standards or because they bought policies issued prior to implementation of these standards. For example, as of December 2006, according to our analysis of NAIC and industry information, at least 30 percent of policies in force were issued in states that had not adopted the more comprehensive rate setting standards. Further, in states that have adopted the more comprehensive standards, many policies in force were likely to have been issued before states began adopting these standards in the early 2000s.
Regulators from most of the 10 states in our review said that they expect the rate setting standards added to NAIC's model regulation in 2000 will improve rate stability and provide increased protections for consumers, though regulators also recognized that it is too soon to determine the effectiveness of the standards. Some regulators explained that it might be as much as a decade before they are able to assess the effectiveness of these standards. Regulators from 1 state explained that rate increases on LTCI policies sold in the 1980s did not begin until the late 1990s, when consumers began claiming benefits and companies were faced with the costs of paying their claims. Further, though the more comprehensive standards aim to enhance rate stability, LTCI is still a relatively young product, and initial rates continue to be based on assumptions that may eventually require revision.

State regulators from the 10 states in our review use other standards—beyond those included in NAIC's LTCI model regulation—or practices to oversee rate setting, including several that are intended to enhance rate stability. Regulators from 3 of the states in our review told us that their states have standards intended to enhance the reliability of data used to justify rate increases, and regulators from 2 states told us that they have standards to limit the extent to which LTCI rates can increase. Beyond implementing rate setting standards, regulators from all 10 states in our review use their authority to review rates to reduce the size of rate increases or to phase in rate increases over multiple years. While state regulators work to reduce the effect of rate increases on consumers, regulators from 6 states explained that increases can be necessary to maintain companies' financial solvency.

Although some states are working to improve oversight of rate setting and to help ensure LTCI rate stability by adopting the more comprehensive standards and through other efforts, there are other reasons why some consumers may remain more likely to experience rate increases than others. In particular, consumers who purchased policies when there were more limited data available to inform pricing assumptions may continue to experience rate increases. Regulators from 7 states in our review told us that rate increases are mainly affecting consumers with older policies. For example, regulators from 1 state told us that there are not as many rate increases proposed for policies issued after the mid-1990s. Regulators in 5 states explained that incorrect pricing assumptions on older policies are largely responsible for rate increases.

Consumers' likelihood of experiencing a rate increase also may depend on the company from which they bought their policy. In our review of national data on rate increases by four judgmentally selected companies that together represented 36 percent of the LTCI market in 2006, we found variation in the extent to which they have implemented increases. For example, one company that has been selling LTCI for 30 years has increased rates on multiple policies since 1995, with many of the increases ranging from 30 to 50 percent. Another company that has been in the market since the mid-1980s has increased rates on multiple policies since 1991, with increases approved on one policy totaling 70 percent. In contrast, officials from a third company that has been selling LTCI since 1975 told us that the company was implementing its first increase as of February 2008.
The company reported that this increase, affecting a number of policies, will range from a more modest 8 to 12 percent. Another company that has instituted only one rate increase explained that, in cases where initial pricing assumptions were wrong, it has been willing to accept lower profit margins rather than increase rates. While past rate increases do not necessarily increase the likelihood of future rate increases, they do provide consumers with information on a company's record of maintaining stable premiums.

Finally, consumers in some states may be more likely to experience rate increases than those in other states, which officials from two companies noted may raise equity concerns. Of the six companies we spoke with, officials from every company that has instituted a rate increase told us that there is variation in the extent to which states approve proposed rate increases. For example, officials from one company told us that when requesting rate increases they have seen some states deny a request and other states approve an 80 percent increase on the same rate request with the same data supporting it. While some consumers may face higher increases than others, company officials also told us that they provide options to all consumers facing a rate increase, such as the option to reduce their benefits to avoid all or part of a rate increase.

Our review of data on state approvals of rate increases requested by one LTCI company operating nationwide also indicated that consumers in some states may be more likely to experience rate increases. Specifically, since 1995 one company has requested over 30 increases, each of which affected consumers in 30 or more states. While the majority of states approved the full amounts requested in these cases, there was notable variation across states in 18 of the 20 cases in which the request was for an increase of over 15 percent. For example, for one policy, the company requested a 50 percent increase in 46 states, including the District of Columbia. Of those 46 states, over one quarter (14 states) either did not approve the rate increase request (2 states) or approved less than the 50 percent requested (12 states), with amounts approved ranging from 15 to 45 percent. The remaining 32 states approved the full amount requested, though at least 4 of these states phased in the amount by approving smaller rate increases over 2 years. (See fig. 1.)

Variation in state approval of rate increase requests may have significant implications for consumers. In the above example, if the initial annual premium for the policy was, for example, $2,000, consumers would see their annual premium rise by $1,000 in Colorado, a state that approved the full increase requested; increase by only $300 in New York, where a 15 percent increase was approved; and stay level in Connecticut, where the increase was not approved. Although state regulators in our 10-state review told us that most rate increases have occurred for policies subject to the loss ratio standard, variation in state approval of proposed rate increases may continue for policies protected by the more comprehensive standards. States may implement the standards differently, and other oversight efforts, such as the extent to which states work with companies, also affect approval of increases.

The 10 states in our review have standards established by laws and regulations governing claims settlement practices.
The majority of the standards, some of which apply specifically to LTCI while others apply more broadly to various insurance products, are designed to ensure that claims settlement practices are conducted in a timely manner. Specifically, the standards are designed to ensure the timely investigation and payment of claims and prompt communication with consumers about claims. In addition to these timeliness standards, states have established other standards, such as requirements for how companies are to make benefit determinations. While the 10 states we reviewed all have standards governing claims settlement practices, the states vary in the specific standards they have adopted as well as in how they define timeliness. For example, 1 state does not have a standard that requires companies to pay claims in a timely manner. For the 9 states that do have a standard, the definition of "timely" the states use varies notably—from 5 days to 45 days, with 2 states not specifying a time frame. In addition, federal laws governing tax-qualified policies do not address the timely investigation and payment of claims or prompt communication with consumers about claims. The absence of certain standards and the variation in states' definitions of "timely" may leave consumers in some states less protected from, for example, delays in payment than consumers in other states. (See table 2 for key claims settlement standards adopted by the 10 states in our review and examples of the variation in standards.)

The states in our review primarily use two ways to monitor companies' compliance with claims settlement standards. One way the states monitor compliance is by reviewing consumer complaints on a case-by-case basis and in the aggregate to identify trends in company practices. When responding to complaints on a case-by-case basis, regulators in some states told us that they determine whether they can work with the consumer and the company to resolve the complaint or whether there has been a violation of claims settlement standards that requires further action. Regulators from 4 states also told us that they regularly review complaint data to identify trends in company practices over time or across companies, including practices that may violate claims settlement standards. Three of these states review these data as part of broader analyses of the LTCI market, during which they also review, for example, financial data and information on companies' claims settlement practices. However, regulators in 3 states noted that a challenge in using complaint data to identify trends is the small number of LTCI consumer complaints that their state receives. For example, information on complaints provided by 1 state shows that the state received only 54 LTCI complaints in 2007, and only 20 were related to claims settlement issues. State regulators told us that they expect the number of complaints to increase in the future as more consumers begin claiming benefits.

The second way that states monitor company compliance with claims settlement standards is by using market conduct examinations. These examinations may be regularly scheduled or, if regulators find patterns in consumer complaints about a company, they may initiate an examination, which generally includes a review of the company's files for evidence of violations of claims settlement standards.
Some states also coordinate market conduct examinations with other states—efforts known as multistate examinations—during which all participating states examine the claims settlement practices of designated companies. If state regulators identify violations of claims settlement standards during market conduct examinations, they may take enforcement actions, such as imposing fines or suspending the company's license. As of March 2008, 4 of the 10 states in our review reported taking enforcement actions against LTCI companies for violating claims settlement standards, and 7 reported having ongoing examinations into companies' claims settlement practices.

In addition to their efforts to monitor compliance with claims settlement standards, regulators from 6 of the states in our review reported that their state is considering or may consider adopting additional consumer protections related to claims settlement. The additional protection most frequently considered by the state regulators we interviewed is the inclusion of an independent review process, which would allow consumers appealing LTCI claims denials to have their issue reviewed by a third party independent of their insurance company without having to engage in legal action. Also, a group of representatives from NAIC member states was formed in March 2008 to consider whether to recommend adding independent review provisions to the NAIC LTCI models. Such an addition may be useful, as regulators from 3 states told us that they lack the authority to resolve complaints involving a question of fact, for example, when the consumer and company disagree on a factual matter regarding a consumer's eligibility for benefits. Further, there is some evidence that, due to errors or incomplete information, companies frequently overturn LTCI denials during the appeals process. Specifically, data provided by four companies we contacted showed that the average percentage of denials overturned was 20 percent in 2006, ranging from 7 percent in one company to 34 percent in another.

Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the committee may have. For future contacts regarding this statement, please contact John E. Dicken at (202) 512-7114 or at dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Kristi Peterson, Assistant Director; Krister Friday; and Rachel Moskowitz made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As the baby boom generation ages, the demand for long-term care services is likely to grow and could strain state and federal resources. The increased use of long-term care insurance (LTCI) may be a way of reducing the share of long-term care paid by state and federal governments. Oversight of LTCI is primarily the responsibility of states, but over the past 12 years, there have been federal efforts to increase the use of LTCI while also ensuring that consumers purchasing LTCI are adequately protected. Despite this oversight, concerns have been raised about both premium increases and denials of claims that may leave consumers without LTCI coverage when they begin needing care. This statement focuses on oversight of the LTCI industry's (1) rate setting practices and (2) claims settlement practices. This statement is based on findings from GAO's June 2008 report entitled Long-Term Care Insurance: Oversight of Rate Setting and Claims Settlement Practices (GAO-08-712). For that report, GAO reviewed information from the National Association of Insurance Commissioners (NAIC) on all states' rate setting standards. GAO also completed 10 state case studies on oversight of rate setting and claims settlement practices, which included structured reviews of state laws and regulations, interviews with state regulators, and reviews of state complaint information. GAO also reviewed national data on rate increases implemented by companies. Many states have made efforts to improve oversight of rate setting, though some consumers remain more likely to experience rate increases than others. NAIC estimates that since 2000 more than half of states nationwide have adopted new rate setting standards. States that adopted new standards generally moved from a single standard that was intended to prevent premium rates from being set too high to more comprehensive standards intended to enhance rate stability and provide other protections for consumers. Although a growing number of consumers will be protected by the more comprehensive standards going forward, as of 2006 many consumers had policies not protected by these standards. Regulators in most of the 10 states GAO reviewed said that they think the more comprehensive standards will be effective, but that more time is needed to know how well the standards will work. State regulators in GAO's review also use other standards or practices to oversee rate setting, several of which are intended to keep premium rates more stable. Despite state oversight efforts, some consumers remain more likely to experience rate increases than others. Specifically, consumers may face more risk of a rate increase depending on when they purchased their policy, from which company their policy was purchased, and which state is reviewing a proposed rate increase on their policy. Regulators in the 10 states GAO reviewed oversee claims settlement practices by monitoring consumer complaints and conducting examinations in an effort to ensure that companies are complying with standards. Claims settlement standards in these states largely focus on timely investigation and payment of claims and prompt communication with consumers, but the standards adopted and how states define timeliness vary notably across the states. Regulators told GAO that reviewing consumer complaints is one of the primary methods for monitoring companies' compliance with state standards. 
In addition to monitoring complaints, these regulators also said that they use examinations of company practices to identify any violations of standards that may require further action. Finally, state regulators in 6 of the 10 states in GAO's review reported that their states are considering additional protections related to claims settlement. For example, regulators in several states said that their states were considering an independent review process for consumers appealing claims denials. Such an addition may be useful, as some regulators said that they lack authority to resolve complaints where, for example, the company and consumer disagree on a factual matter, such as a consumer's eligibility for benefits. In commenting on a draft of GAO's report issued on June 30, 2008, NAIC compiled comments from its member states. Member states said that the report was accurate but seemed to critique certain aspects of state regulation, including differences among states, and make an argument for certain reforms. The draft reported differences in states' oversight without drawing conclusions or making recommendations.
In 1982, the Congress enacted the Veterans' Administration and Department of Defense Health Resources Sharing and Emergency Operations Act (Public Law 97-174) to promote greater sharing of health care resources and thus achieve greater efficiencies in the DOD and VA health care systems. One of the main objectives of this legislation was to reduce the costs of operating those systems by minimizing duplication and underuse of health care resources. Under this legislation, DOD and VA entered into health care resource-sharing agreements, which allowed active-duty and eligible former service members to receive care in VA hospitals and vice versa. However, the legislation neither provided for the use of CHAMPUS funds to reimburse VA under sharing agreements nor permitted VA to treat dependents of active-duty and eligible former members. In a 1988 GAO report, we recommended that the Congress enact legislation specifically authorizing (1) the use of CHAMPUS funds to purchase care for CHAMPUS beneficiaries from VA medical centers and (2) the treatment of all categories of dependents at VA hospitals. Legislation accomplishing these two purposes was passed in 1989 and 1992, respectively.

Under health resource-sharing agreements using CHAMPUS funds, CHAMPUS beneficiaries can receive services from VA in noncatchment areas through authority provided in sharing agreements between DOD and VA headquarters officials and in catchment areas through local agreements between military hospital commanders and VA medical center directors, subject to headquarters approval. These agreements offer DOD the potential for (1) saving CHAMPUS funds, because DOD will reimburse VA less than what it pays the private sector for similar services, and (2) improving access to services for its beneficiaries. VA can benefit by using the extra revenue generated from CHAMPUS funds to improve services to veterans.

The information we developed for this report came from three sources: (1) a review of sharing legislation; (2) an examination of the various drafts of the CHAMPUS/Asheville VAMC sharing agreement, the DOD/VA memorandum of understanding, and related documents; and (3) discussions with DOD and VA officials responsible for the sharing program. The discussions focused on the reasons for delays in developing CHAMPUS/VA sharing agreements and in using CHAMPUS funds for sharing agreements between military hospitals and VA hospitals. We performed this work at the Office of the Assistant Secretary of Defense (Health Affairs) and VA headquarters in Washington, D.C.; the U.S. Army Medical Command (a component of the Army Surgeon General's office) in San Antonio, Texas; CHAMPUS headquarters in Aurora, Colorado; and the Asheville VAMC (because it was negotiating the first CHAMPUS/VA sharing agreement). We supplemented these visits with telephone discussions with officials from the Air Force Surgeon General's office and the Navy Bureau of Medicine (Surgeon General's office) in Washington, D.C. We did our work from August 1993 to September 1994 in accordance with generally accepted government auditing standards.

Differences between DOD and VA over provisions of a memorandum of understanding and the CHAMPUS/Asheville VAMC sharing agreement prevented CHAMPUS beneficiaries from receiving services in VA hospitals in noncatchment areas through the use of CHAMPUS funds.
The differences over sharing provisions arose shortly after the passage of the 1989 legislation authorizing the use of CHAMPUS funds for treatment in VA hospitals, and they continued throughout most of 1993. Due in large part to the intervention of the Chairman, House Committee on Veterans’ Affairs, in October 1993, DOD and VA resolved their differences. Both parties signed (1) a sharing agreement in December 1993 to treat CHAMPUS-eligible beneficiaries in the Asheville VAMC and (2) a memorandum of understanding in February 1994 providing an overall framework for future CHAMPUS/VA health care resource-sharing agreements. The differences between DOD and VA centered mainly on whether VA’s hospitals would be treated more as military hospitals or as CHAMPUS civilian providers. These differences led to many revisions of the agreement. More specifically, according to VA officials, DOD wanted VA hospitals to follow CHAMPUS procedures for seeking reimbursement by filing claims with CHAMPUS fiscal intermediaries and collecting copayments and deductibles from beneficiaries. Also, DOD wanted to use its own payment methodology, the diagnosis related group system, for reimbursing VA hospitals for the care they provided. Further, DOD wanted VA to adhere to CHAMPUS standards for utilization review and quality assurance. VA, on the other hand, wanted its hospitals to be treated as military hospitals, which have no copayments or deductibles. VA also wanted to bill the military services directly and not use fiscal intermediaries, and it wanted to bill CHAMPUS on a per diem system rather than the diagnosis related group system. In addition, VA wanted to use its own utilization management and quality review systems. During 1993, the two agencies exchanged several proposals, and, at one point, it appeared that they had reached an agreement. In fact, representatives from the Asheville VAMC and DOD signed a sharing agreement in July 1993. However, DOD subsequently rescinded the agreement because, according to DOD health officials, the person signing for DOD did not have the authority to do so. It was not until the Chairman, House Committee on Veterans’ Affairs, called a meeting of DOD and VA officials in October 1993 and expressed frustration with the delays that any substantive progress occurred. By December 23, 1993, both DOD and VA had signed the CHAMPUS/Asheville VAMC sharing agreement, and the Asheville VAMC began treating CHAMPUS patients in February 1994. Under the agreement, the Asheville VAMC is treated as a CHAMPUS provider instead of a direct care provider; it collects CHAMPUS copayments and deductibles, and it bills through CHAMPUS fiscal intermediaries. CHAMPUS reimburses claims submitted by the Asheville VAMC for hospital inpatient charges at a 5-percent discount off the amount payable to civilian providers under the CHAMPUS diagnosis related group-based payment system; it reimburses professional services claims at a 5-percent discount off the CHAMPUS maximum allowable charge. Although the Asheville VAMC will maintain a utilization review and quality assurance system, it will also be subject to CHAMPUS utilization review and quality assurance requirements. By February 3, 1994, both DOD and VA had signed a memorandum of understanding establishing a general policy and framework for subsequent CHAMPUS/VA health care resource-sharing agreements.
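To make the discount arrangement concrete, here is a worked illustration; the $10,000 inpatient figure is hypothetical, chosen only for arithmetic, and is not drawn from the agreement itself:

\[
% Assumed DRG-based amount payable to a civilian provider: $10,000
\text{VAMC reimbursement} = (1 - 0.05) \times \$10{,}000 = \$9{,}500
\]

Under this assumed claim, DOD would pay VA $500 less than it would pay a civilian provider for the same DRG-based amount, which is the source of the CHAMPUS savings described above.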
To date, however, neither DOD nor VA has conducted a systemwide search to identify noncatchment areas with VA hospitals where sharing agreements can be implemented. Although a July 1994 VA directive encouraged its medical centers to take advantage of the opportunity to treat CHAMPUS beneficiaries, DOD officials told us that they would wait and see how the CHAMPUS/Asheville VAMC agreement fares before entering into additional sharing agreements. As of July 1994, DOD and VA were also developing a memorandum of understanding to establish policies and guidelines for VA to provide services to CHAMPUS beneficiaries in areas of the country where DOD has contracted with private companies to manage CHAMPUS beneficiaries’ health care. This particular memorandum of understanding would permit DOD contractors to contract with VA health care facilities. VA signed the memorandum of understanding in May 1994 and sent it to DOD for review. As of July 1994, the Office of the Assistant Secretary of Defense (Health Affairs) was reviewing it. In addition to the delay in implementing CHAMPUS/VA sharing agreements in noncatchment areas, such as Asheville, North Carolina, military hospital commanders in DOD catchment areas have not proposed using CHAMPUS funds for sharing agreements between their hospitals and VA hospitals. The commanders have not done so because they have been unclear about the interagency sharing program and their roles and authorities under it. The military services allocate CHAMPUS funds to military hospital commanders, who are responsible for managing the care of all CHAMPUS beneficiaries in their catchment areas. The Army began allocating CHAMPUS funds to its hospitals in fiscal year 1992 and, in fiscal year 1993, expanded the allocations to all its U.S. hospitals except for three in California and one in Hawaii. In fiscal year 1994, Army hospitals were allocated about $540 million in CHAMPUS funds. The Air Force and Navy began allocating CHAMPUS funds to their hospitals in fiscal year 1994, when the Air Force allocated $476 million and the Navy allocated $356 million. Hospital commanders may use these funds to enhance and expand services available to CHAMPUS beneficiaries in their hospitals or to purchase services from outside providers, including sharing with VA. The intent is to use CHAMPUS money in the most cost-effective manner. However, all three services told us that their hospital commanders have not used any CHAMPUS funds for sharing agreements with VA. Further, as in noncatchment areas, DOD and VA have not done a comprehensive search of locations where sharing agreements using CHAMPUS funds can be implemented. Officials from the military services and the Office of the Assistant Secretary of Defense (Health Affairs) stated that military hospital commanders have the authority to submit proposals for using CHAMPUS funds for sharing agreements between their hospitals and VA hospitals if they so choose. However, these officials also said that, while no restrictions exist against using CHAMPUS funds for such sharing, neither do instructions exist for doing so. Further, these officials stated that military hospital commanders do not understand that they can propose using CHAMPUS funds for sharing agreements. Both DOD and VA can benefit from sharing agreements between CHAMPUS and VA hospitals and also between military and VA hospitals.
Implementation of the sharing agreements, however, was delayed by the inability of DOD and VA officials to agree on sharing provisions and procedures. Also, DOD and VA have not engaged in a systemwide identification of sharing opportunities using CHAMPUS funds. With the overall memorandum of understanding in place and the first CHAMPUS/VA sharing agreement signed, the necessary structure now exists for further sharing agreements. To take advantage of sharing benefits, we believe DOD must make its hospital commanders more aware of their authority to propose using CHAMPUS funds to buy VA services. Additionally, DOD should provide guidance to military hospital commanders on how to develop and implement sharing agreements. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) and the military services to fully inform and explain to military hospital commanders the authority to propose using CHAMPUS funds for sharing agreements with VA and their roles and authorities under this program, to provide specific instructions on developing and implementing such agreements, and to identify sharing opportunities in which CHAMPUS funds can be used to buy available VA services. Similarly, we recommend that the Secretary of Veterans Affairs direct VA medical center directors to actively identify available VA services that may be candidates for sharing agreements with DOD and to communicate such information to the relevant DOD hospital commander. DOD and VA provided written comments on a draft of this report (apps. I and II). DOD agrees that the sharing of health care resources between DOD and VA is a worthwhile approach that can result in overall efficiencies for both agencies. DOD does not agree, however, that disagreements between DOD and VA have delayed the implementation of sharing agreements. Following are other DOD comments: the progress of the Asheville agreement will be reviewed, and possible additional sharing opportunities will be discussed, in October 1994 by the VA/DOD Health Care Resources Sharing Policy and Operations Subcommittee; guidance is being developed for issuance to the military services to evaluate the possibility and feasibility of using and sharing medical resources when it is cost-effective to do so; and a new DOD Instruction on the VA/DOD Health Care Resources Sharing Program is being developed, and its issuance is anticipated by the end of fiscal year 1995. In our view, the disagreements between DOD and VA did delay the implementation of sharing agreements using CHAMPUS funds. These disagreements, as described in our report, are well documented and did not get resolved until after the Chairman of the House Committee on Veterans’ Affairs intervened. We believe that the DOD actions listed above are good steps. However, until they are fully implemented, we believe our recommendations remain valid. To date, neither military hospital commanders nor regional lead agents have been actively pursuing sharing agreements because, as they stated to us, they are uncertain about their roles and authorities under the CHAMPUS sharing program. They believe they need guidance on the requirements pertaining to CHAMPUS sharing agreements. VA agreed with our overall conclusion that VA and DOD would benefit from sharing agreements using CHAMPUS funds.
However, VA disagreed with our draft report recommendation that the VA Secretary direct VA medical center directors to identify sharing agreements in which CHAMPUS funds can be used to buy available VA services. In VA’s view, it should be DOD’s—not VA’s—responsibility to prioritize the needs of CHAMPUS beneficiaries. Further, VA stated that its July 1994 policy directive strongly encourages its medical centers to take advantage of the opportunity to treat CHAMPUS beneficiaries under sharing authority in situations where capacity is available and service to veterans can be enhanced. We recognize that DOD has responsibility for determining CHAMPUS priorities and needs. Similarly, we recognize that the recent VA policy directive is a strong positive indicator of its commitment toward encouraging sharing with DOD using CHAMPUS funds. The intent of our recommendation was to have medical center directors actively identify services that are available to DOD and to communicate such information to the relevant DOD hospital commander. We have clarified our recommendation along these lines. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Secretary of Defense; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. If you have any questions concerning the contents of this report, please call me at (202) 512-7101. Other major contributors to this report were Stephen P. Backhus, Assistant Director; Robert P. Pickering, Senior Analyst; and Donald C. Hahn, Advisor.
Pursuant to a congressional request, GAO reviewed the extent to which Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) funds are being used for health care resource-sharing agreements between the Departments of Veterans Affairs (VA) and Defense (DOD). GAO found that: (1) in February 1994, after nearly 3 years of negotiation, VA and DOD agreed on a framework for VA to treat CHAMPUS-eligible beneficiaries and receive reimbursement from CHAMPUS funds; (2) implementation of CHAMPUS/VA sharing agreements has been delayed because of disagreements between DOD and VA over VA hospital requirements; (3) neither DOD nor VA has conducted a systemwide search to identify additional opportunities for sharing agreements; (4) potential sharing opportunities have been missed because DOD hospital commanders have not used CHAMPUS funds for sharing agreements between their hospitals and VA hospitals and are unclear about their authority to do so; and (5) DOD needs to clarify the authority of DOD hospital commanders to propose sharing agreements using CHAMPUS funds, and it needs to provide instructions on developing and implementing such agreements.
Of the 45 IHS hospitals, 28 are directly operated by IHS, and 17 are operated by tribes through funds provided by IHS (see fig. 3). Specifically, under the Indian Self-Determination and Education Assistance Act, as amended, IHS provides funds to tribes to run their own hospitals through self-determination contracts or self-governance compacts. For example, the tribes in Alaska operate 7 regional hospitals and 165 village clinics, mainly through a variety of regional health consortiums that provide services to groups of tribes. These self-determination contracts and self-governance compacts implement the act’s commitment to effective and meaningful participation by the Indian people in the planning, conduct, and administration of health programs and services. IHS manages its facilities and staff, including the hospitals it directly operates and its direct staff, through the Indian Health Manual, among other things. This document serves as the primary reference for IHS employees on IHS-specific policy and procedures. In accordance with the Indian Self-Determination and Education Assistance Act, as amended, however, the self-determination contracts and self-governance compacts under which tribes operate hospitals do not generally require compliance with IHS policy. Therefore, IHS policies and procedures—including those laid out in the Indian Health Manual—do not generally apply to tribally operated facilities, although they can be used as models on which to base local tribal protocols. With regard to sexual assault, IHS’s Indian Health Manual states that a person cannot give consent to sexual contact if she or he is forced, threatened, coerced, drugged, inebriated, or unconscious; has certain disabilities; or is a minor. We use the term sexual assault to refer to the federal sex abuse felonies and attempts to commit them—that is, sexual abuse and aggravated sexual abuse, abusive sexual contact, or sexual abuse of children. This category includes what is commonly known as molestation and rape, including (1) cases where the alleged perpetrator uses force or threats, renders the victim unconscious, or administers drugs or other intoxicants that substantially impair the victim and (2) cases where the victim is incapable of appraising the nature of conduct or is physically incapable of declining to participate or of communicating unwillingness to engage in the sexual act. With regard to domestic violence, IHS’s Indian Health Manual states that domestic violence can involve physical, sexual, emotional, economic, or psychological actions or threats of actions that influence another person. Domestic violence includes any behaviors that intimidate, manipulate, humiliate, isolate, frighten, terrorize, coerce, threaten, blame, hurt, injure, or wound someone. We use the term domestic violence to refer to all major crimes as defined in the Major Crimes Act between intimate partners or family members, including elders and spouses. Domestic violence also includes major crimes against children that are not sexual in nature. A medical provider specially trained in medical forensic examination may perform such an exam in cases of sexual assault or domestic violence, and law enforcement officers may interview the victim for his or her account of what happened.
Medical providers typically perform such exams only for acute cases of sexual assault, where the assault occurred within the previous 72 to 96 hours—when such evidence is considered most viable—because physical and biological evidence on a person’s body or clothes degrades over time, becoming unviable or too contaminated to be used. The standard of practice for how long such evidence is viable changes as scientific advancements are made, with some jurisdictions now performing medical forensic exams up to 7 days after an assault. In terms of sexual assaults, Justice’s protocols describe two types of specially trained medical providers who conduct sexual assault medical forensic exams. A sexual assault nurse examiner (SANE) is a registered nurse who has received specialized education and has fulfilled clinical requirements to perform sexual assault medical forensic exams. A sexual assault forensic examiner is a health care provider, including a physician or physician assistant, who has been specially educated and has completed clinical requirements to perform sexual assault medical forensic exams (in the same way a nurse is trained to become a SANE). The term SANE refers to registered nurses, a category including nurse midwives and other advanced practice nurses, among other providers; the term sexual assault forensic examiner refers more broadly to medical providers including registered nurses plus physicians, physician assistants, and nurse practitioners. Justice’s protocol encourages certification of SANEs, but certification as a SANE is available only to registered nurses. No such national or international certification exists for sexual assault forensic examiners who are not registered nurses. Registered nurses can be certified as SANEs through the International Association of Forensic Nurses to perform exams for adult and adolescent sexual assault victims or to perform exams in cases of sexual assault of children who have not reached puberty. Nurses can become certified by meeting the association’s eligibility requirements; completing a didactic training curriculum; and successfully completing a certification examination covering several topics, such as how to assess sexual assault patients, how to collect and document evidence in a way that protects the evidence’s integrity, and how to testify about findings or chain of custody. Beyond cases of sexual assault, medical providers who are specially educated as forensic nurse examiners are able to collect forensic evidence for a variety of crimes other than or in addition to sexual assault, such as injury associated with domestic violence. Additionally, for child victims, medical providers may perform medical forensic exams and gather medical history in the hospital, or the child may be interviewed elsewhere at a child-specific facility such as a child advocacy center. Such facilities typically use a multidisciplinary team approach to minimize the number of times a child is interviewed and to ensure that those individuals involved in the child’s life, such as parents or guardians and social services providers, are working together. The federal government has criminal jurisdiction in Indian country in almost all states where IHS or tribes operate hospitals. When the alleged perpetrator of a crime in Indian country is an Indian, tribal governments also have criminal jurisdiction. As a result, the FBI, the Bureau of Indian Affairs, or tribal investigators conduct criminal investigations of sexual assault and domestic violence.
Once the investigation or preliminary facts are reviewed, the decision is made as to whether the investigation should be referred to the U.S. Attorneys’ Offices, the tribe, or both for possible prosecution. Prosecutors in the U.S. Attorneys’ Offices decide whether to accept the matter for criminal prosecution in federal court. We previously reported that receipt of a law enforcement referral does not mean that a prosecutable case exists at the time the referral is made and that, upon further investigation, prosecutors may file the matter for prosecution as a case in court, decline to prosecute the matter, or refer the matter to tribal prosecutors. As we reported in February 2011, because of tribes’ limited jurisdiction and sentencing authority, tribes often rely on the federal government to investigate and prosecute serious offenses, since a successful federal prosecution could result in a longer sentence than tribal courts might impose, even where tribal jurisdiction exists. In July 2011, Justice sent a letter to the President of the Senate and the Speaker of the House of Representatives to consider a proposal to, among other things, extend tribal criminal jurisdiction to non-Indians who commit domestic violence or dating violence in Indian country. IHS has limited information on the ability of IHS and tribally operated hospitals to collect and preserve medical forensic evidence in cases of sexual assault and domestic violence, as needed for criminal prosecution—that is, on the hospitals’ ability to offer medical forensic services. To collect this information, we surveyed the 45 IHS and tribally operated hospitals and found that the ability to provide these services varies from hospital to hospital, ranging from providing a broad array of on-site services, including performing medical forensic exams to collect physical and biological evidence, to choosing to refer patients to other facilities for such exams. We also found that the services available at a hospital generally developed without direction from IHS headquarters and have fluctuated over time. In addition, the utility of such evidence in any subsequent criminal prosecution depends on hospital staff’s properly securing and storing physical evidence, which may in turn depend largely on coordinating with law enforcement agencies. IHS headquarters had limited information on the ability of its facilities to provide medical forensic services. We found that IHS could not give us comprehensive information about which of its facilities—including hospitals and clinics—provided medical forensic services for victims of sexual assault and domestic violence, although IHS officials identified hospitals as the facilities most likely to provide such services. IHS headquarters also could not identify how many providers at IHS hospitals have had SANE training or certification. In addition, we found that IHS headquarters does not centrally track the number of medical forensic exams performed at its facilities. In analyzing electronic data obtained from IHS headquarters on procedures done at the hospitals, we found that because of the way hospitals record these data, it is not possible to accurately isolate medical forensic exams from other medical activities related to incidents of sexual assault or domestic violence. IHS does, however, keep centralized data on where victims of sexual assault and domestic violence were seen and on the primary purpose of these patients’ visits.
The results of our survey of all 45 IHS and tribally operated hospitals showed that some hospitals typically provide medical forensic exams on site for both adult and child victims of sexual assault, others typically perform these exams for either adults or children but not both, and still others refer most or all sexual assault victims to other facilities (see table 2). Specifically, 26 of the 45 hospitals reported that they typically perform sexual assault medical forensic exams for adults, children, or both. Those hospitals reporting that they perform these exams only for adults refer all children to other facilities, and hospitals performing exams only for children refer all adults to other facilities. Additionally, all IHS and tribally operated hospitals reporting that they typically provide exams on site also aim to have staff present or on call so they can offer these services 24 hours a day, 7 days a week. Two hospitals also explained that they use traditional healing practices and objects when treating sexual assault victims (see fig. 4). The remaining 19 hospitals reported that they generally refer all adults and children to other facilities for these exams. Among the seven hospitals that typically perform medical forensic exams for both adults and children, one tribally operated hospital in Alaska has a dedicated coordinator who has received SANE training and is available to perform exams for both adults and children 24 hours a day, 7 days a week. A victim of sexual assault who arrives at this hospital can typically be examined within a short time and in a room dedicated to sexual assault exams. Similarly, an IHS hospital in Arizona has a group of approximately 14 nurses and doctors who have received specialized training in sexual assault medical forensic exams, as well as a room largely dedicated to these exams. When a sexual assault victim arrives at this hospital, hospital staff contact 1 of the 14 nurses or doctors to perform the exam or, if none of these medical providers is present, a predesignated backup provider is called on. Children requiring an exam generally see a provider, when available, who has undergone specialized training in pediatric medical forensic exams. A total of 19 of 45 hospitals reported typically performing medical forensic exams for either adult or child victims of sexual assault but not for both. For example, a South Dakota IHS hospital—which offers medical forensic services 24 hours a day, 7 days a week, with providers on 24-hour call—typically performs medical forensic exams for adults but not children. When an adult victim arrives, the emergency room does an initial medical screening and then calls one of three SANE-trained nurses to perform the medical forensic exam. But because this hospital does not have a provider trained to do these exams for children, it refers all child victims to a hospital in Pierre, which is 2 hours away by car, or to a hospital in Sioux Falls, which is 4 hours away. In contrast, an IHS hospital in New Mexico performs exams only for children. The providers at this hospital are available from 8 a.m. to 4:30 p.m. on weekdays and on call during nights and weekends; overall coverage is 24 hours a day, 7 days a week. Hospitals that we categorized as being in remote areas are more likely to perform medical forensic exams and less likely to refer victims elsewhere for service than IHS and tribally operated hospitals taken as a whole.
Of the 34 hospitals categorized as remote, 22 hospitals reported that they are able to perform medical forensic exams for adults, children, or both; 12 of the 34 hospitals reported referring victims to other facilities. In contrast, the proportions are reversed among the 11 hospitals we categorized as urban, with 7 of them reporting that they refer all sexual assault victims to other facilities for exams (see fig. 5 for a map of hospitals). For example, officials from an IHS hospital in the Phoenix, Arizona, area explained during a site visit that the hospital sees too few sexual assault cases to warrant having its own staff trained in performing medical forensic exams; in the officials’ view, it makes more sense for the hospital to leverage existing resources by referring victims to a nearby facility offering medical forensic services. IHS and tribally operated hospitals vary not only in whether and for whom they can provide medical forensic services but also in the training their providers have received (see table 3). Of the 26 hospitals that typically perform medical forensic exams, 20 reported having providers who received specialized training or certification in sexual assault medical forensic exams. The remaining 6 hospitals reported offering medical forensic exams even if the providers performing the exams have not received this specialized training. In fact, several medical providers told us that traveling doctors and nurses, who temporarily work at an IHS hospital for a few weeks or months, may perform these medical forensic exams on site even if they have not received this specialized training. In discussions with hospital officials, we also found that hospitals referring sexual assault victims—whether adults or children—to other facilities for medical forensic exams may do so because they do not have medical providers on staff with this specialized training. Many of the hospitals we surveyed reported that they typically perform medical forensic exams in cases of domestic violence. They may do so only in cases of domestic violence that also include a sexual component or, occasionally, when the injuries sustained from a discrete domestic violence incident without a sexual component are severe. Officials at several hospitals explained that for discrete domestic violence incidents (those that do not include a sexual component), law enforcement officers usually collect evidence, such as photographs of bruises or other injuries, for use in court. For example, officials at two separate hospitals explained that in cases of domestic violence, law enforcement officers take photographs of physical injuries, and medical providers treat any injuries requiring medical attention. In general, efforts to provide medical forensic services at the local level have fluctuated over time and have received limited funding from IHS. In discussions with hospital officials, we found that the provision of medical forensic services generally developed at a grassroots level, rather than in response to an explicit requirement from IHS headquarters. Local medical providers chose to provide such exams in response to an unmet need for such services in their area, not because IHS headquarters directed them to do so. For example, a nurse at one hospital explained that she and five other nurses attended SANE training after recognizing that medical providers at the hospital were uncomfortable doing sexual assault medical forensic exams.
Additionally, an IHS official at another hospital explained that his staff began providing medical forensic services after the area office requested volunteers to pilot providing such services to better meet the area’s needs. We also found that the ability of an IHS or tribally operated hospital to offer medical forensic services has fluctuated over time. Some hospitals, for example, have been able to sustain or even expand their medical forensic services. In contrast, other hospitals have lost staff who were willing or trained to perform medical forensic exams and ceased offering these exams entirely or waited until new staff could be hired or trained. For example, officials from one hospital explained during a follow-up discussion with us that they recently ceased performing sexual assault medical forensic exams for adults when a shift in staffing resources left the hospital’s emergency room without providers specially trained in performing such exams. Consequently, the hospital now performs medical forensic exams only for children and refers adult victims to a private hospital in a nearby city, which helps facilitate more consistent and timely evidence collection, according to a law enforcement official. Similarly, medical providers explained during a site visit that after the sole provider of medical forensic exams in a remote Alaskan community left, the hospital ceased offering medical forensic exams because none of its remaining staff had specialized training. As a result, all adults and children have since been flown several hours away to Anchorage to receive medical forensic exams. Given the importance of providing medical forensic services locally, however, the hospital staff said that they recently sent several staff for training in sexual assault medical forensic exams and hired someone to serve as a coordinator for this effort. Furthermore, efforts by IHS headquarters to fund medical forensic services have been limited. The agency has provided some funding for training and equipment to hospitals or staff, but this funding has been infrequent or limited, according to IHS officials. Specifically: Pilot program. In 2002 and 2003, IHS used a grant from Justice to fund two of its hospitals—one in Shiprock, New Mexico, and the other in Pine Ridge, South Dakota—to pilot offering medical forensic exams for adult victims of sexual assault. As part of this pilot program, the hospitals received funding to send their providers to SANE training and to purchase equipment needed for medical forensic exams, such as digital cameras. A hospital official at one of these hospitals explained that it still offers medical forensic exams and, to better meet patients’ needs, is expanding its services to also include a clinic more centrally located on the vast reservation, to provide services closer to patients’ homes. An IHS official at the other pilot-program hospital explained that it ceased offering medical forensic exams in 2007 after too many of its specially trained medical forensic examiners left. This hospital now sends its patients across state lines to a private provider. Limited funds for training or equipment. IHS has at times paid for staff at some of its hospitals to receive SANE training, but such funding was not part of a comprehensive effort to develop medical forensic capacity at IHS facilities. From fiscal year 2003 through fiscal year 2011, IHS provided $45,000 for three training sessions for 60 providers.
But agency officials also explained that IHS has provided no additional funding for hospitals to purchase equipment to conduct these exams. According to staff from one IHS hospital, they have had to use a digital camera belonging to the local Bureau of Indian Affairs law enforcement office to photographically document physical injuries as evidence because they did not have funding to purchase their own camera. IHS Domestic Violence Prevention Initiative. IHS received a $7.5 million appropriation for its domestic violence prevention initiative in fiscal year 2009 and another $10 million appropriation in fiscal year 2010. The Domestic Violence Prevention Initiative expands prevention, advocacy, outreach, and medical forensic services in cases of domestic violence and sexual assault. Of this total funding, $3.5 million funded medical forensic services such as exams, and the remainder funded prevention, advocacy, outreach, and coordination. In fact, of the 65 projects IHS funded through this initiative, 8 projects aimed to use this money for improving medical forensic services at IHS or tribally operated hospitals. Further, seven of these eight projects funded hospitals that already had some staff on board who were specially trained in providing sexual assault medical forensic exams. The specific policies or procedures that IHS has developed to preserve medical forensic evidence vary from hospital to hospital and may depend greatly on coordination with the law enforcement officers who take possession of the evidence for use in the criminal justice system. Improperly securing medical forensic evidence or improperly maintaining its chain of custody—that is, the process that demonstrates the chronological documentation of the collection, custody, control, transfer, analysis, and disposition of the evidence—can undermine the evidence’s usefulness in a criminal investigation or prosecution. Consequently, according to Justice protocols, it is imperative to properly preserve the evidence collected during a medical forensic exam. Proper preservation includes, among other things, securing the physical evidence from contamination or adulteration, as well as properly following and documenting the chain of custody. We found that some hospitals had specific procedures in place for storing and securing physical evidence, and others did not. In discussions with law enforcement officers and hospital staff, we found that the way a hospital does or does not preserve the medical forensic evidence it collects, such as biological materials or statements from victims, largely depends on the extent or type of coordination with law enforcement. For example, at one hospital, providers and law enforcement officers told us they jointly developed a protocol to store evidence from completed exams in a locked cabinet to which only law enforcement officers have the key. This protocol ensures that if a law enforcement officer cannot immediately take possession of the evidence, it is nevertheless stored in a fashion that properly maintains the chain of custody. Similarly, an official at another hospital explained that medical forensic evidence is stored in a locked filing cabinet in the SANE coordinator’s office until a law enforcement officer signs a release form to take possession of it—an arrangement developed between the hospital and law enforcement to better maintain the chain of custody.
In other communities, multidisciplinary groups—such as sexual assault response teams, which coordinate community efforts related to cases of adult sexual assault, or multidisciplinary teams established by prosecutors for cases involving children—provide opportunities for hospital staff to develop evidence preservation procedures. For example, officials from an IHS hospital in a mandatory Public Law 280 state told us that its new sexual assault response team was instrumental in determining the most appropriate law enforcement agency—tribal, local, or county—to call to take possession of medical forensic evidence. Additionally, some hospital officials told us that they do not specifically coordinate with law enforcement or had no specific evidence preservation procedures because they assume that an officer will immediately take possession of any medical forensic evidence collected. Such assumptions do not always hold, however—for example, if the law enforcement officer is called away to investigate another crime or cannot wait in the hospital for completion of the multihour medical forensic exam. Differences in how hospitals preserve medical forensic evidence may also stem in part from the type of training received by those who perform medical forensic exams. For example, SANE training covers securing evidence and maintaining its chain of custody. Providers who do not receive such specialized training may be relying on following the instructions contained in an evidence collection kit—a process that some stakeholders told us may miss important steps. Since enactment of the Indian Health Care Improvement Reauthorization and Extension Act of 2009 (on March 23, 2010) and the Tribal Law and Order Act of 2010 (on July 29, 2010), IHS has made significant progress in developing policies and procedures regarding medical forensic services for victims of sexual abuse, as the acts required. IHS worked expeditiously to establish its first agencywide sexual assault policy within the 1-year deadline established by the Indian Health Care Improvement Act. The new policy, issued in March 2011, is an important and sound first step in what is planned to be a continuing effort to provide a standardized level of medical forensic services. As part of this effort, IHS has a number of important initiatives under way or under consideration, and events are unfolding rapidly. For example, in partnership with Justice, a new position was created in IHS headquarters for a sexual assault exam and response coordinator, and the position was filled in August 2011. Still, IHS faces a number of important challenges as it attempts to implement its new policy and continues to respond to incidents of sexual assault and domestic violence. These challenges include systemic issues—such as overcoming long travel distances and developing staffing models that overcome problems with staff burnout, high turnover, and compensation—so that standardized medical forensic services can be provided over the long term. Specifically, we found that hospitals face the following four challenges in standardizing and sustaining the provision of medical forensic services: overcoming long travel distances; establishing plans to help ensure that hospitals consistently implement and follow the March 2011 policy; developing similar policies for domestic violence and child sexual abuse; and developing sustainable staffing models that overcome problems with staff burnout, high turnover, and compensation.
In general, our work confirmed that IHS is aware of the challenges that it faces and either has initiatives under way to address them or is trying to formulate such initiatives. We found that long travel distances between IHS patient populations and hospitals—often across remote terrain with few, if any, roads—pose a barrier to access to a full range of medical services that an IHS beneficiary might need, including medical forensic services. Distances are of particular concern in Alaska, where sexual assault or domestic violence victims from remote Alaska Native villages must travel hundreds of miles to hospitals offering on-site medical forensic exams. Travel is typically possible only by airplane or snow machine; most villages are not accessible by road. (See fig. 6 for a picture of the ambulance used in one of the villages.) Further, victims must typically rely on law enforcement to arrange air transportation, and bad weather may delay flights for hours or days, according to stakeholders. Victims living in regions where the nearest hospital does not provide on-site medical forensic services must often undertake multistage trips to find access to these services. For example, medical providers told us that victims from remote villages near Kotzebue, where the hospital does not provide on-site medical forensic services, must take at least two flights to reach a hospital that does: a first flight from their village to Kotzebue and a second one from Kotzebue to Anchorage (see fig. 7). Great distances may also separate beneficiaries needing medical forensic services from hospitals providing these services in states other than Alaska. For instance, IHS hospitals in Arizona have contracted with an air ambulance provider to transport patients via helicopter or airplane to Phoenix for medical services, including medical forensic exams. Such trips can each cost IHS several thousand dollars, according to IHS officials. Medical providers, law enforcement, and prosecutors expressed concerns that long travel distances may deter victims from reporting sexual assault and domestic violence and delay collection of the medical forensic evidence needed for prosecution. They said that great distances may also discourage victims from reporting assaults to law enforcement and seeking medical forensic exams, particularly for victims from remote villages who may need to take two or more flights to obtain an exam. Also, victims in remote Alaska Native villages who wish to remain anonymous cannot do so because they generally rely on law enforcement for air transportation. Moreover, at least one stakeholder told us that travel delays due to bad weather may make it difficult to collect medical forensic evidence within the 72- to 96-hour time frame in which such evidence is considered most viable. According to stakeholders we spoke with, such long delays are rare, but any delay increases the chance that physical evidence will become contaminated or lost and that victims may forget details of the assault. To help address long travel distances, some hospitals and other stakeholders, such as law enforcement agencies, told us they are considering or have suggested expanding medical forensic services to clinics, either through telemedicine or by training additional medical providers, and expanding the role of community health aides, the primary medical providers in remote Alaska Native villages. 
Telemedicine technology uses video conferencing, remote monitoring equipment, and electronic health records to link patients in remote areas to medical providers located elsewhere. Telemedicine connects patients in remote clinics in Alaska to dental, skin, and other health care services and could be expanded to support treating victims of sexual assault, according to some stakeholders. One IHS hospital in Montana, for example, is considering using telemedicine to enable the hospital’s specially trained medical forensic examiners to consult on child sexual abuse cases—to determine if a specific injury is consistent with abuse, for example—with medical providers in remote clinics who do not have this specialized training. Before such a plan could be put in place, however, officials from the organization that develops telemedicine technology in Alaska told us, concerns would need to be addressed about how to securely store and transmit medical files to protect victim confidentiality and maintain the evidentiary chain of custody. Rather than use telemedicine, the IHS hospital located on the edge of a vast reservation is seeking to bring medical forensic services closer to its beneficiary populations by developing the capacity to perform medical forensic exams at a centrally located clinic, according to an IHS official. The hospital has identified clinic nurses who are interested in receiving specialized training in conducting the exams. A few stakeholders also suggested to us that community health aides could play a larger role in collecting and preserving medical forensic evidence. Medical providers and community health aides themselves, however, voiced concerns to us about such a proposal. In cases of sexual assault, health aides’ scope of practice and training are currently limited to tasks such as treating victims’ injuries and protecting evidence, such as clothing, until law enforcement officers arrive; health aides are not authorized to perform medical forensic exams or to collect evidence themselves. Among the concerns community health aide officials mentioned to us is that expecting health aides to perform such exams, on top of the many tasks already required of them, may increase burnout rates; they said that such an expectation may also put the health aides at risk of retaliation from alleged perpetrators or others in a village. Stakeholders have also suggested that health aides receive additional training on the sexual assault response tasks that are already within their scope of practice. For example, medical providers told us that health aides in Alaska’s Yukon-Kuskokwim delta area attended training in 2010 designed to help health aides and law enforcement officers understand what health aides should and should not be expected to do when responding to sexual assault cases. The training focused on the actions health aides can already take to assist the response of law enforcement officers and hospitals in such cases, such as asking victims not to wash or change clothes before undergoing a medical forensic exam. Now that its initial sexual assault policy is in place, IHS faces the challenge of ensuring that its hospitals consistently implement the policy and follow its guidelines. IHS is taking initial steps to help hospitals implement the policy but has not yet developed written, comprehensive plans for implementation and monitoring.
For example, IHS officials told us the agency is planning to use funding from the existing Domestic Violence Prevention Initiative to provide policy training to IHS hospitals and to expand specialized medical forensic training opportunities. IHS has also partnered with Justice’s Office for Victims of Crime to fund a national sexual assault exam and response coordinator position within IHS; the position—which was filled in August 2011—may play a role in helping implement and monitor the March 2011 policy. Nevertheless, IHS has not yet developed plans for implementing and monitoring the policy as a whole. Justice officials echoed these concerns, given most hospitals’ limited technical expertise in medical forensic exams and general lack of resources for responding to sexual assault. The Indian Health Care Improvement Act also requires IHS to report to Congress by September 23, 2011, on “the means and extent to which the Secretary has carried out” the act’s requirement to establish appropriate policies, among other things, for responding to victims of sexual abuse and domestic violence. Agency officials told us that at the time of this report, IHS had not yet identified sufficient resources for implementing the policy as a whole, nor had it developed time frames for implementing major objectives in the policy. Specifically, the agency had not identified resources for purchasing equipment and supplies, such as digital cameras and special forensic evidence-drying cabinets, required under the policy for hospitals providing on-site medical forensic exams. Furthermore, the agency has set December 31, 2012, as the deadline for medical providers to be “credentialed and privileged” as specially trained medical forensic examiners, but it has not identified deadlines IHS hospitals should meet in implementing other parts of the policy, such as providing access to medical forensic exams on site or by referral, or collaborating with others to create sexual assault response teams. The agency has also not made plans to monitor whether IHS hospitals are following the policy, such as whether hospitals located more than 2 hours away from other facilities are developing the capability to provide on-site medical forensic exams or how well hospitals coordinate their activities with law enforcement and prosecutors. Coordination is important because it helps ensure that medical providers collect and preserve evidence in a way that is useful for prosecution. Our review found that hospitals’ coordination with law enforcement agencies and prosecutors varied greatly. Hospitals that do not coordinate regularly with law enforcement and prosecutors may unintentionally collect and preserve evidence in a way that hampers the investigation or prosecution of cases. For example, law enforcement officers in one location told us that before a candid meeting between medical providers and the prosecutor took place, providers were unknowingly violating the chain of custody to such a degree that the prosecutor could not reliably use their evidence for prosecution. The officers said that the meeting served as a catalyst for the medical providers to attend SANE training and for law enforcement officers, the prosecutor, and medical providers to develop a collaborative response to collecting and preserving evidence in sexual assault cases. Increased coordination between the hospital and law enforcement also led one hospital to install a locking cabinet (see fig.
8) to securely store collected medical forensic evidence before transferring it to law enforcement. Other medical providers told us they had not received feedback on medical forensic evidence collection and preservation from law enforcement officers or prosecutors. In one location, providers told us they kept completed exam kits with them at all times—even taking the kits home overnight—until law enforcement took possession of the kits, even though Justice officials told us that such practices could undermine the chain of custody. IHS’s March 2011 sexual assault policy calls on hospitals to coordinate with law enforcement and prosecutors, but Justice officials expressed concerns that many hospitals do not have working relationships with law enforcement and prosecutors that would enable such coordination. Furthermore, the policy does not specify how IHS headquarters will support its hospitals in building such relationships or initiating a coordinated response to sexual assault. According to an agency official, IHS did not have time to develop implementation and monitoring plans before the March 2011 deadline established for issuing a policy under the Indian Health Care Improvement Act. Furthermore, the agency did not seek comments from tribes before issuing the policy and therefore asked the tribes for feedback after releasing the policy. According to IHS officials, comments from tribes were due on May 30, 2011, and the agency was analyzing these comments and intending to issue a revised policy. One area of IHS’s March 2011 policy we found to have caused some confusion deals with guidelines for specialized training and certification for medical providers. The policy stipulates that nurses, physicians, and physician assistants must all complete specialized training in performing sexual assault medical forensic exams. The policy is unclear, however, about whether, to perform these exams, medical providers need to obtain documentation of competency beyond this training, especially for physicians and physician assistants. Sections 3.29.1 and 3.29.5 of the policy use the terms “credentialed” and “certified” interchangeably—in defining sexual assault nurse and forensic examiners, in delineating requirements for training and determining competency to perform these exams, and in describing how staff obtain privileges to perform these exams at IHS hospitals. These sections do so even though “credentialing” generally refers to an internal process for allowing medical providers to perform specific services in IHS hospitals, and “certification” is the term used by Justice in its sexual assault protocols and is also typically used by the organization that developed the SANE specialty to denote someone who has demonstrated competency in medical forensic exams and passed a required test. By using these terms interchangeably, the policy leaves unclear whether medical providers such as physicians and physician assistants must obtain specialized training and certification—or just training—before performing sexual assault medical forensic exams. IHS officials we spoke with provided conflicting interpretations of the policy, from interpreting it as calling for certification for sexual assault forensic examiners to calling only for training for these medical providers. 
IHS officials acknowledged, however, that no third-party certification exists for sexual assault forensic examiners in the same way it exists for nurses, which may imply that IHS would need to develop its own certification for sexual assault forensic examiners more broadly. These officials told us that the agency has no plans to develop such a certification. Law enforcement officers and prosecutors told us that variable levels of specialized training among medical providers have sometimes led to inconsistencies in the quality and type of medical forensic evidence collected. Specifically, they said that compared with medical forensic exams performed by medical providers with specialized training, exams performed by medical providers without such training have been of lower quality or did not include certain pieces of evidence. A law enforcement officer and prosecutors told us that medical providers with SANE training were more familiar with procedures for collecting evidence and better able to document the intricacies of injuries and identify subtle signs of assault, such as small scratches and bruises, than medical providers who did not have specialized training. A law enforcement officer in one location told us about a child sexual abuse case in which a physician without specialized training found no evidence of abuse after performing a medical forensic exam; in contrast, a SANE-trained medical provider who performed a subsequent exam found internal injuries and other evidence of sexual abuse—evidence the physician without specialized training missed. Stakeholders also told us that because of their specialized training, SANE-trained medical providers understand the importance of identifying and collecting evidence consistent with a victim’s account of an assault, rather than simply following the generic step-by-step instructions in an evidence collection kit. For example, one victims’ advocacy group told us about a case in which a medical provider without specialized training collected only vaginal swabs from a victim when the assault actually involved anal rape—all because the medical provider did not ask the victim to describe the assault. No consensus exists on the specific threshold of specialized training needed to perform adequate exams; law enforcement officers and prosecutors we spoke with, however, generally agreed that some level of specialized training helps improve the quality of evidence collection. Without clear training and certification guidelines for physicians and physician assistants, medical forensic exams may continue to be performed by medical providers with inconsistent levels of knowledge and expertise. As a result, IHS beneficiaries cannot be assured of uniform quality in medical forensic services received, and law enforcement entities cannot count on uniform quality in the medical forensic evidence collected and preserved, even with IHS’s new sexual assault policy. Furthermore, calling for nurses to be SANE certified or physicians and physician assistants to be certified as sexual assault forensic examiners—if such a certification is developed—may be a difficult standard for hospitals to meet. Very few hospitals currently have nurses certified as SANEs, no comparable certification exists for physicians and physician assistants, and some medical providers we spoke with told us it can be challenging to complete the clinical training needed to be eligible for SANE certification.
Some medical providers told us they are planning to complete their clinical training at another facility because their home hospital does not have a certified SANE provider who can validate their competency or does not see enough sexual assault cases to provide sufficient practical experience in performing medical forensic exams to demonstrate competency. Moreover, hospitals already face considerable challenges in attracting and retaining medical providers who are willing or able to perform the exams; calling for certification may unintentionally exacerbate this challenge, even though several stakeholders told us that it is the SANE training rather than the certification that is most important for performing high-quality medical forensic exams. In addition to the lack of clarity around training and certification guidelines for physicians and physician assistants under IHS's new sexual assault policy, we have concerns that implementing and monitoring the policy's overall training and certification guidelines may be challenging given IHS headquarters' limited knowledge about how many of its medical providers have such training or certification. Without this baseline information, the agency may be unable to accurately allocate resources for training or identify IHS hospitals with certified SANE providers who can train or validate the competency of providers from other IHS hospitals. The agency also does not have a system in place to track providers' progress toward meeting its training and certification guidelines. As a result, it may be unable to hold hospitals accountable for following this section of the policy. IHS's March 2011 sexual assault policy instructs IHS hospitals to provide a standardized response to adult and adolescent victims of sexual assault. Specifically, the new policy calls for all IHS-operated hospitals to provide adult and adolescent patients who arrive in need of a medical forensic exam with access to an exam by a medical forensic examiner, either on site or by referral to a nearby facility. The new policy covers adult and adolescent victims of sexual assault, but it does not address whether or how hospitals should respond to discrete incidents of domestic violence that do not include a sexual component, nor does it address cases of child sexual abuse. Consequently, IHS hospitals do not have specific or recently updated guidance on whether to provide medical forensic services for victims of domestic violence and child sexual abuse; as a result, these victims may not have access to the full range of services they need. Agency officials told us that IHS is deciding how to provide direction on responding to incidents of domestic violence and child sexual abuse—whether through new policies or by updating existing sections of the Indian Health Manual—but that the agency does not have concrete plans to develop policies similar in scope and specificity to the March 2011 sexual assault policy. The Indian Health Care Improvement Act requires IHS to establish "appropriate protocols, policies, procedures, standards of practice . . . for victims of domestic violence and sexual abuse" and to develop appropriate victim services, including improvements to forensic examinations and evidence collection. According to an IHS official, the agency did not have time to develop a separate domestic violence policy before the Indian Health Care Improvement Act's March 2011 deadline for establishing such a policy.
In addition, the agency decided to limit the policy’s scope to adults and adolescents because Justice has not yet developed child sexual abuse protocols and recommended against including child sexual assault and adult sexual assault in the same protocol. Moreover, the Tribal Law and Order Act of 2010 directs IHS to base its sexual assault policies and protocols on those established by Justice. Therefore, the March 2011 policy does not address child sexual abuse. IHS officials also acknowledged that the sexual assault policy applies only to IHS-operated hospitals, not tribally operated hospitals. In accordance with the Indian Self-Determination and Education Assistance Act, the self- determination contracts and self-governance compacts under which tribes operate hospitals generally do not require compliance with IHS policy. An objective of the Indian Self-Determination and Education Assistance Act is to assure the maximum Indian participation in the direction of federal services to “Indian communities so as to render such services more responsive to the needs and desires of those communities.” Accordingly, tribes are accountable for managing day-to-day operations of IHS-funded programs, services, and activities included in their self- determination contract or self-governance compact. Tribes thereby accept the responsibility and accountability to beneficiaries under the contract with respect to use of the funds and the satisfactory performance of IHS programs, functions, services, and activities funded under their contract. At the same time, it is the policy of the Secretary of Health and Human Services to facilitate tribal efforts to plan, conduct, and administer programs, functions, services, and activities under the act. To that end, as requested, IHS may provide technical assistance to tribes in developing their capability to administer quality programs. According to IHS officials, tribally operated hospitals may choose to use IHS’s March 2011 policy as a model for developing their own sexual assault policies. IHS could negotiate contract or compact provisions requiring tribes to abide by IHS’s sexual assault policy, but the tribes would have to agree to such a provision. IHS officials told us the agency is hesitant to pursue this approach, and has not generally used it, because a multitude of other issues are also up for negotiation. Furthermore, IHS officials indicated that they do not plan to include such a provision in compacts or contracts the agency negotiates. Hospital officials told us they face challenges in designing staffing models for collecting and preserving medical forensic evidence that can overcome problems with staff burnout, high turnover, and compensation over time. In some hospitals where we conducted interviews, medical forensic services were not organized into a formal program or housed within a specific hospital department. Instead, several officials told us, medical forensic exams are performed by individual medical providers, sometimes from different departments, and often outside the medical providers’ official job duties and beyond their normal working hours. For example, at one hospital, officials told us that nurses from different units received specialized training in performing medical forensic exams and agreed to be on call to perform the exams day or night. Performing these exams was not written into the nurses’ formal job descriptions, however, and the nurses were expected to complete their official job duties, as well as medical forensic activities. 
Medical providers told us that burnout may occur for several reasons—including stress, lack of supervisor support, and inadequate compensation—stemming from staffing arrangements in which medical providers perform exams in addition to their official job duties. Potential burnout is a serious concern because it can undermine a hospital's ability to sustain access to medical forensic services. IHS officials acknowledged that turnover rates for medical providers specially trained in performing medical forensic exams are generally very high, with such providers often leaving IHS facilities after only 2 years. Some medical providers told us they find it stressful to balance their normal job duties with providing medical forensic services. For example, in one hospital, several medical providers described the staffing arrangement for medical forensic exams as relying on nurses performing the work of two full-time jobs—their official jobs and their medical forensic exam duties—while receiving compensation only for their official jobs. In some hospitals, moreover, medical providers told us that their supervisors do not consistently allow them to participate in tasks outside of their normal duties. For example, medical providers told us about instances in which supervisors did not permit them to take time away from their normal duties to attend sexual assault response team meetings; as a result, the medical providers missed the meetings or worked beyond their normal hours to attend. In other cases, because of general hospital understaffing, some medical providers were unable to find backup coverage for their normal duties when called away for several hours to perform medical forensic exams. Consequently, some medical providers had to leave their normal duties unattended or have victims wait to receive exams until the medical providers' normal shifts were over, which is stressful, according to at least one medical provider. In addition to issues related to understaffing, medical providers performing medical forensic exams over and above their normal duties said that they may not receive enough compensation to prevent attrition. The type and amount of compensation provided for performing medical forensic exams vary across hospitals, with some medical providers receiving overtime pay or compensatory time off and others receiving nothing beyond their normal salaries. Some medical providers told us they had trouble obtaining sufficient compensation. For example, medical providers in one hospital told us they receive compensatory time off for performing medical forensic exams, but they can rarely use the additional leave hours because the hospital is too short-staffed to approve time off. In another hospital, nurses who provided medical forensic exams in addition to their normal job duties found it difficult to obtain approval from their supervisors for overtime pay when performing the exams made them exceed their normal hours. The overtime rate the nurses said they were paid was equal to their regular hourly rate, not the time and a half usually accorded for overtime. The former SANE coordinator at this hospital told us that such compensation challenges contributed to nurses' burning out over time and ceasing their medical forensic exam duties. When the nurses stopped offering the exams, the hospital was unable to provide exams for victims who needed them and began referring victims to another facility, according to the coordinator.
Concerning staffing, we have issued a guide federal agencies can use in maintaining or implementing effective internal control. One of the factors this guide states that agencies should consider in determining whether a positive control environment has been achieved concerns organizational structure and whether the agency has the appropriate number of employees—specifically, so that employees do not have to work outside the ordinary workweek to complete their assigned tasks. Additionally, in its 2006-2011 Strategic Plan, IHS acknowledges the difficulty the agency has long faced in attracting and retaining medical providers across IHS. Attraction and retention is particularly challenging for remote facilities in isolated areas, where medical providers may be offered incentive pay for accepting positions. The agency's strategic plan outlines strategies for recruiting, retaining, and developing employees, stating that the agency will "ensure an ongoing process to identify and implement the best practices related to staff retention" and "continue to explore options to provide adequate staffing for all facilities." Some hospitals have already identified and implemented staffing options for medical forensic services, which aim to address concerns about provider burnout and sustainability. Several hospitals have incorporated medical forensic services into normal job duties for medical providers in a specific hospital department. For example, at one hospital in South Dakota, medical providers told us that most nurse midwives within the hospital's midwife clinic receive SANE training and perform medical forensic exams as part of their normal clinic duties. In addition, several hospitals in Alaska have hired sexual assault response team coordinators, whose part- or full-time responsibilities are to manage the hospitals' medical forensic services and perform medical forensic exams, according to hospital officials. An official at one hospital told us the hospital provided retention pay in an effort to adequately compensate medical providers for performing these exams. Such options may help reduce medical provider stress and burnout, but no single staffing arrangement works for all hospitals or medical providers. For example, medical providers from one hospital told us their hospital considered incorporating the exams into providers' job descriptions but decided not to because doing so would make it even more difficult to attract candidates for already hard-to-fill positions. In addition, one stakeholder told us many hospitals do not see enough sexual assault cases to warrant a part- or full-time position for a sexual assault response team coordinator. Moreover, according to IHS officials, annual pay caps may limit the amount of bonus or retention pay that medical providers are eligible to receive for performing medical forensic exams. IHS is developing a proposal to separate the salary series of advanced practice nurses—the type of nurse likely to perform medical forensic exams within IHS—from other registered nurses so that advanced practice nurses can receive higher maximum pay. IHS officials told us this proposal may help address the constraints imposed by salary caps, which currently make it impractical for many nurses to be compensated for performing medical forensic exams. Decisions to prosecute sexual assault or domestic violence cases are based on the totality of evidence collected, one piece of which is medical forensic evidence collected by IHS and tribally operated hospitals.
Many of the factors contributing to a decision to prosecute are not unique to incidents of sexual assault or domestic violence involving Indians in remote reservations or villages; nevertheless, prosecutors acknowledged, they affect the totality of the available evidence and thus contribute to decisions to prosecute such cases. Specifically, officials from the responsible law enforcement and prosecuting agencies told us they generally base their decisions to refer sexual assault or domestic violence investigations for possible prosecution and to accept these matters for prosecution on the total picture presented by the quality and quantity of available evidence. Prosecutors and law enforcement officials said they consider several factors—including medical forensic evidence collected by hospitals. They also said that the relative importance of these factors can differ from case to case. In some cases, medical forensic evidence may be a crucial factor; in others, however, it may not be relevant or available. For example, photographic evidence or DNA collected during a genital exam may be critical in showing that an alleged perpetrator had sex with the victim, but such medical forensic evidence may not be relevant when the victim and alleged perpetrator admit to having had sex but disagree as to whether the sex was consensual. In many of those cases where consent is the main issue, according to prosecutors and Justice's sexual assault protocols, medical forensic evidence does not reveal physical injuries that readily demonstrate a lack of consent. Also, law enforcement officials and prosecutors told us that medical forensic evidence may be unavailable if a victim reports an assault weeks or months later, as often happens in cases of child sexual abuse, because, for example, DNA evidence or relevant fibers would likely have washed away or become contaminated in the meantime. In addition to this medical forensic evidence, law enforcement officials told us that when deciding whether to refer an investigation for possible prosecution, they consider several other factors, including the quality of the criminal investigation conducted, the credibility of witnesses who may have been intoxicated at the time of the assault, and coordination with relevant agencies to obtain supporting evidence. For example, federal prosecutors explained that the quality of the criminal investigation is important because evidence in a criminal matter must meet a relatively high threshold to be accepted for prosecution—that is, prosecutors must believe that the existing evidence is compelling enough to demonstrate guilt to a jury beyond a reasonable doubt. As a result, prosecutors acknowledged that a law enforcement agency that refers all criminal investigations involving sexual assault for possible prosecution—regardless of whether the extent or quality of evidence collected during its investigation would warrant such a referral—may find that prosecutors decline to prosecute some of these matters. Law enforcement officials and prosecutors also told us that intoxication of witnesses at the time of an assault can mean these witnesses may be less credible in court because, for example, intoxication adversely affects their ability to clearly recall the circumstances around the assault or specific statements made by the victim or alleged perpetrator.
Additionally, law enforcement officials and prosecutors stated that decisions to refer investigations for possible prosecution are also based on obtaining additional evidence that supports the victim’s account. Availability of coordinated efforts, such as sexual assault response teams, can greatly enhance the quality of a forensic interview with a victim about an assault and facilitate gathering such supporting evidence. Similarly, prosecutors consider additional factors besides medical forensic evidence when deciding whether to accept a matter for prosecution, including juries’ increased expectation of seeing DNA evidence; perceived credibility of the victim, alleged perpetrator, or other involved party; and availability of involved parties, such as witnesses or hospital providers, to testify. Specifically, several law enforcement officials and prosecutors stated that, in light of popular television series featuring forensic evidence, juries have come to expect prosecutors to regularly present DNA and other forensic evidence before they are willing to convict. As a result, several prosecutors told us they need to factor in such juror expectations when deciding whether they believe they have strong enough evidence to obtain a conviction or plea deal. Additionally, prosecutors told us that decisions to accept matters for prosecution are also based on how believable a witness, victim, or alleged perpetrator seems to be. The credibility of witnesses, including the victim, can be based on a variety of factors, including how well he or she can recall details of the assault. For example, one prosecutor told us her office concluded that the testimony of a particular victim could be persuasive because the woman accurately described the layout of the room where she alleged she was raped, even though the alleged perpetrator told police she had never been inside his house. Prosecutors across the country told us that intoxication of victims at the time of assault is not alone an acceptable reason to decline a matter for prosecution. With regard to witness testimony, federal and state prosecutors told us that availability of potential witnesses to testify is also an important factor. Some victims in small reservations or isolated villages may refuse to cooperate or may retract their initial statement, for example, because of pressure exerted on them by family or community members who may depend on the alleged perpetrator for necessities such as food or fuel. As a result, the victim may be unavailable to testify. Additionally, according to several prosecutors with whom we spoke, the availability to testify of medical providers who performed the associated medical forensic exams at IHS or tribally operated hospitals is an important factor because such testimony can help demonstrate that an assault occurred or help otherwise support a victim’s account of an assault. Specifically, some prosecutors told us that it may be difficult to locate traveling medical providers who work at these hospitals temporarily; in addition, hospital staffing shortages may keep supervisors from releasing staff from hospital duties to testify. Consequently, some medical forensic examiners at IHS and tribally operated hospitals may not be able to testify in court that evidence obtained from a medical forensic exam belongs to a given victim or attest to a victim’s statements made during the exam about the assault—testimony that prosecutors repeatedly stated is critical to using the medical forensic evidence in court. 
IHS officials noted, however, that the Tribal Law and Order Act of 2010's requirement that state and tribal courts provide employees with 30-day notice of the request for testimony would make it much more likely that a traveling provider could be located and appear or a provider's schedule changed to accommodate a court appearance. In this context, section 263 of the Tribal Law and Order Act of 2010 contains requirements for IHS regarding approval or disapproval of requests or subpoenas from tribal or state courts for employee testimony. IHS's March 2011 sexual assault policy, however, is not entirely consistent with section 263, and, in some cases, the policy is not clear.

• First, the policy does not state that subpoenas and requests for IHS employee testimony in tribal or state courts not approved or disapproved within 30 days are considered approved. In this regard, the policy appears to contradict section 263 of the act, which states that subpoenas or requests will be considered approved if IHS fails to approve or disapprove a subpoena or request 30 days after receiving notice of it.

• Second, it is unclear whether the prior approval discussed in the policy refers to the agency's approval of the subpoena, as required by the act, or supervisory approval of the employee's release from hospital duties. To the extent that the policy's discussion refers to release from hospital duties, the policy is silent about whether and under what circumstances supervisors can refuse to release a subpoenaed employee to testify if the subpoena or request is approved or considered approved.

• Third, the policy does not specify the criteria to be used to approve a subpoena. Specifically, the policy does not specify that, in accordance with section 263, the IHS Director must approve requests or subpoenas from tribal and state courts if they do not violate the Department of Health and Human Services' policy to maintain impartiality. Explicitly articulating these criteria is important because departmental officials told us requests for IHS employee testimony in these criminal prosecutions would likely always satisfy the criteria and because responding to such requests is in the agency's best interest. In addition, the policy does not discuss legal limitations placed by privacy laws on the production of medical records in response to state or tribal court subpoenas.

• Fourth, the policy does not specify whether it also applies to subpoenas and requests from federal courts—a process currently governed by an unwritten policy—even though IHS officials told us they intended for the policy to cover federal subpoenas and requests as well as those from tribal and state courts.

According to Health and Human Services officials, the department is drafting a more specific and comprehensive description of the subpoena approval process. As of September 2, 2011, however, this document, whose audience is officials involved in the subpoena approval process, had not been completed or disseminated; we have therefore not reviewed it. Moreover, it is unclear how widely it will be disseminated. We received inconsistent accounts from departmental and IHS officials about the extent to which the document will be made available to line staff—the very staff who would be subpoenaed to testify. According to federal standards for internal control, information should be recorded and communicated to management and others within an agency in a form and within a time frame that enables them to carry out their responsibilities.
Moreover, the federal standards call for effective communication to flow down, across, and up the organization. Therefore, it is still uncertain when and by what processes IHS staff will be able to respond to subpoenas or testify in court about the medical forensic exams they conduct—an ambiguity in the policy that is of great concern, according to several Justice officials with whom we spoke. Medical providers in IHS and tribally operated hospitals are called upon to fulfill twin purposes when seeing patients who are victims of sexual assault and domestic violence—to treat the victim’s injuries and trauma and to collect medical forensic evidence of high enough quality that it can be used to prosecute crimes. The provision of medical forensic services and collection and preservation of high-quality evidence, however, are highly variable across IHS and tribally operated hospitals, hampered in part by distances victims must travel and the absence, until recently, of central direction from IHS on what, how, and by whom these services are to be provided. IHS has made significant progress in the last 2 years, and its March 2011 sexual assault policy takes a sound first step toward addressing problems like these, but the agency, its hospitals, and medical providers have a long way to go to fulfill the policy’s provisions. Without articulating how it plans to implement the policy and monitor progress toward meeting policy requirements, IHS may not be able to hold individual hospitals accountable to the agency, and the agency may not be able to hold itself accountable to its beneficiaries. The road ahead is likely to be particularly arduous for the more remote hospitals, which have long faced obstacles in attracting and retaining medical providers and are now faced with numerous new demands, such as offering medical forensic exams on site or by referral within 2 hours and making readily available digital cameras and other equipment and supplies needed to collect medical forensic evidence. In addition, responding to incidents of sexual assault and domestic violence requires a multifaceted approach involving not only medical providers but also law enforcement and prosecuting agencies and other stakeholders identified in the policy. The medical forensic evidence needs to be collected and preserved in a way that facilitates its use by law enforcement and prosecuting agencies. Not all IHS hospitals and staff regularly collaborate with these stakeholders or obtain regular feedback from them on evidence collection and preservation. Without considerable and concerted investment in the staff and hospitals responsible for providing medical forensic services—and without a detailed implementation plan to clarify how the agency will support its hospitals and staff in meeting the policy’s requirements and by when—the agency is unlikely to meet those requirements. In addition, IHS’s March 2011 sexual assault policy does not address how its hospitals should respond in cases of discrete domestic violence without a sexual component or in cases of child sexual abuse. IHS is currently considering how its hospitals should respond to such cases, but it has not developed policies that are similar in scope and specificity to its March 2011 sexual assault policy for adolescents and adults. This gap is significant, but IHS is only one of the agencies involved in the multifaceted response to incidents of sexual assault and domestic violence. 
All the responding federal agencies should present a consistent and coordinated response to these issues. Justice also has not yet developed a policy for responding to child sexual abuse incidents, which is critical, since the Tribal Law and Order Act of 2010 mandates that IHS develop standardized sexual assault policies and protocols based on a similar protocol established by Justice. IHS’s recent effort to solicit and analyze comments from the tribes and Justice on the March 2011 policy presents an opportunity for the agency to revise areas that, as originally written, are unclear or inconsistent. Specifically, it is unclear whether sections 3.29.1 and 3.29.5 of the policy require both training and certification, or only training, of IHS physicians and physician assistants performing sexual assault medical forensic exams. Also, the policy does not specify how physicians and physician assistants are to attain certification when no such certification by IHS or a third party exists for medical providers other than nurses. IHS’s sexual assault policy is also not consistent with provisions in section 263 of the Tribal Law and Order Act of 2010, which states, among other provisions, that subpoenas and requests for employee testimony or documents from state and tribal courts not approved or disapproved within 30 days are considered approved. To the extent that the policy’s discussion of subpoena and request approvals refers to release from hospital duties, the policy is silent about whether and how IHS plans to approve the release of staff providing medical forensic exams to testify or otherwise comply with subpoena requests. Without greater clarity in the policy’s language—and without giving relevant staff explicit guidance on how to respond when subpoenaed or requested to testify—providers who perform sexual assault medical forensic exams may not understand the circumstances under which they are allowed or required to testify in court, a serious concern that Justice has echoed. Some of the prior efforts to provide medical forensic services at individual hospitals failed for various reasons, including staffing problems related to burnout, high turnover, and compensation. The March 2011 sexual assault policy provides the high-level management endorsement that had been missing in the past, but devising appropriate staffing models—so that the provision of standardized medical forensic services being developed under the new policy will continue well into the future—remains a challenge. At some locations, current staffing models present disincentives to the provision of these services, such as supervisory refusal to give medical providers permission to attend sexual assault team meetings or to approve adequate compensation for providing medical forensic services in addition to normal job duties or beyond a unit’s official area of responsibility. Given the agency’s reliance on temporary medical providers, as well as high burnout and turnover rates among medical providers, unless corrected, such disincentives are likely to undermine IHS’s efforts to fulfill the March 2011 policy’s goals over the long term. Finally, IHS also has an opportunity to incorporate comments from tribes that may choose to use the March 2011 policy as a model on which to base their own sexual assault response policies in tribally operated hospitals or clinics. 
As we discussed earlier, IHS policies and procedures can be used as models on which to base local tribal protocols even though they do not generally apply to its 17 tribally operated facilities. In addition, IHS recognizes that hospital protocols, particularly for complex and sensitive matters like sexual assault, need to reflect each community's individual circumstances. Coordinating with tribes may therefore be especially important to those tribally operated hospitals in Alaska, where the state, rather than the federal government, generally has criminal jurisdiction and where the state has made combating sexual assault and domestic violence a high priority. To improve or expand medical forensic exams and related activities for the 28 IHS-operated hospitals, we recommend that the Secretary of Health and Human Services direct the Director of the Indian Health Service to take the following five actions:

• Develop an implementation plan for the March 2011 IHS sexual assault policy (Indian Health Manual, chapter 3.29)—and monitor its progress—to clarify how the agency will support its hospitals and staff in fulfilling the policy, in particular, that the hospitals or staff:
  - obtain training and certification in providing forensic medical exams;
  - obtain equipment like cameras needed to collect evidence;
  - provide medical forensic exams on site or at a referral facility within 2 hours of a patient's arrival; and
  - collaborate with law enforcement agencies, prosecution, and other stakeholders identified in the policy with the objective of creating sexual assault response teams and obtaining regular feedback from such stakeholders on evidence collection and preservation.

• Develop a policy that details how IHS should respond to discrete incidents of domestic violence without a sexual component and, working with Justice, develop a policy for responding to incidents of child sexual abuse consistent with protocols Justice develops for these incidents; such policies should be similar in scope and specificity to the March 2011 IHS policy on responding to adult and adolescent sexual assaults.

• Clarify whether sections 3.29.1 and 3.29.5 of the March 2011 IHS sexual assault policy call for training and certification, or only training, of IHS physicians and physician assistants performing sexual assault medical forensic exams.

• Modify the March 2011 IHS sexual assault policy so that it (1) comprehensively and clearly outlines the process for approving subpoenas and requests for IHS employees to provide testimony in federal, state, and tribal courts and (2) reflects the provisions in section 263 of the Tribal Law and Order Act of 2010, including that subpoenas and requests not approved or disapproved within 30 days are considered approved.

• Explore ways to structure medical forensic activities within IHS facilities so that these activities come under an individual's normal duties or a unit's official area of responsibility, in part to ensure that providers are compensated for performing medical forensic services.

We provided a copy of our draft report to the Departments of Health and Human Services, the Interior, and Justice and to the state of Alaska. In its written response, reprinted in appendix IV, the Department of Health and Human Services agreed with our five recommendations and stated that work is now under way to implement each of them.
The state of Alaska generally agreed with our conclusions and recommendations, especially the recommendation to develop additional policies specific to child sexual abuse, and expressed its willingness to collaborate with the Indian Health Service in developing sexual assault policies applicable to Alaska (see app. V). The Department of Health and Human Services and the state of Alaska, as well as the Departments of the Interior and Justice, provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Secretary of the Interior, the Attorney General of the United States, the Governor of Alaska, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to determine (1) the ability of Indian Health Service (IHS) and tribally operated hospitals to collect and preserve medical forensic evidence for use in criminal prosecution in sexual assault and domestic violence cases; (2) what challenges, if any, these hospitals face in collecting and preserving such evidence, particularly in remote Indian reservations and Alaska Native villages; and (3) what factors besides medical forensic evidence collected by these hospitals contribute to a decision to prosecute such cases. For all three objectives, we collected and analyzed laws, regulations, and agency policies relevant to the collection and preservation of medical forensic evidence by IHS and tribally operated hospitals in cases of sexual assault and domestic violence, and we interviewed and gathered relevant documentation from headquarters officials at IHS, the Bureau of Indian Affairs, the Department of Justice, and the state of Alaska. In addition, we conducted over 60 semistructured interviews with several groups of stakeholders: (1) hospital staff, during site visits to a nonprobability sample of 8 IHS or tribally operated hospitals in Alaska, Arizona, and South Dakota and over the telephone with an additional nonprobability sample of 7 IHS or tribally operated hospitals in Arizona, Minnesota, Montana, New Mexico, North Dakota, and Oklahoma, and (2) victim advocacy groups; federal and state prosecutors; and federal, state, local, and tribal law enforcement agencies that play a role in responding to and prosecuting sexual assault and domestic violence cases in most of the locations these 15 hospitals serve. We spoke with officials about hospitals that are performing medical forensic exams, that are developing the ability to perform such exams, and that do not perform these exams. To determine the ability of IHS and tribally operated hospitals to collect and preserve medical forensic evidence, we surveyed all 45 IHS and tribally operated hospitals on available services, obtained electronic data from IHS on procedures and purpose of visits related to sexual assaults and domestic violence, and determined which hospitals were located in remote areas.

• First, we determined the type of facility within the IHS system that is most likely to provide medical forensic services.
From discussions with IHS officials and others, we found that hospitals were the most appropriate type of facility to include in our analysis because of the level of medical expertise and infrastructure available in these facilities relative to other types of health centers or specialized clinics. We then obtained an electronic list of all IHS and tribally operated hospitals in the United States, including location and contact information for each. We assessed the reliability of this list by validating and cross-checking the data with the IHS official who oversees the information. After eliminating two private hospitals that were erroneously included in the list, we determined that the data were sufficiently reliable for the purpose of this report. Using this list of 45 IHS and tribally operated hospitals, we e-mailed a self-administered questionnaire to each of the 45 hospitals. (See app. II for a blank copy of the questionnaire.) The questions were designed to identify the ability of each hospital to collect and preserve medical forensic evidence at the time the questions were answered. To develop the survey questions, we reviewed existing interviews, interviewed IHS officials and providers at several IHS and tribally operated hospitals, and reviewed relevant Justice protocols. We took steps to minimize errors in the survey effort's development and data collection process. For example, the team designed specific questions in consultation with a social science survey specialist and design methodologist. We conducted several pretests with medical providers at three separate hospitals—two IHS-operated hospitals and one tribally operated hospital—to help ensure that the questions were clear, relevant, and unbiased and to ensure that they could be completed quickly. Another survey specialist also reviewed the questionnaire, and suggestions were incorporated where appropriate. We sent the questionnaire to the most knowledgeable hospital official at each location—typically the clinical director and chief executive officer—to be the lead respondent and, if necessary, to confer with other representatives within the hospital to answer questions requiring more detailed knowledge. To maximize our response rate, we sent follow-up e-mails and left reminder telephone messages over a period of approximately 11 weeks—from March 31, 2011, when we started the survey effort, through June 14, 2011, when we closed it. We received responses from 100 percent of the hospitals, and we followed up to clarify specific responses as needed. Accordingly, the responses represent a snapshot in time of each hospital's medical forensic services. We entered the responses into a spreadsheet and analyzed the results. A separate analyst verified the accuracy of data entry and analyses. (See app. III for a summary of key survey results.)

• Second, we obtained electronic data on the reasons for hospital visits by IHS beneficiaries from fiscal year 2006 through fiscal year 2010 for each of the 45 hospitals that report such data to IHS. Two hospitals—Sage Memorial Hospital in Ganado, Arizona, and Norton Sound Regional Hospital in Nome, Alaska—do not use IHS's comprehensive health information system, called the Resource Patient Management Information System, but a different electronic health records system. We were therefore unable to assess the reliability of their data or to use their data in any analysis.

• Third, we determined which hospitals were located in remote areas using rural-urban commuting area codes—developed on the basis of U.S.
Census tracts by the Department of Agriculture's Economic Research Service—because IHS has no technical definitions for remote. The rural-urban commuting area system defines remote areas as those with dispersed and small populations and where travel times are longer because of limitations in transportation infrastructure, and it defines urban areas as those with large populations and short travel times between cities. We linked a hospital's zip code to rural-urban commuting area data—also broken out by zip code—to determine if a hospital is located in an isolated, small rural, large rural, or urban area, as classified by the rural-urban commuting area system. We refined these four categories into a two-category classification scheme—collapsing the "isolated" and "small rural" categories into one remote category and collapsing the "urban" and "large rural" categories into one urban category—to aid in analysis and better respond to our objectives. (A simplified sketch illustrating this classification step appears at the end of this appendix.) To determine the challenges faced by these hospitals in collecting and preserving medical forensic evidence, particularly in remote Indian reservations and Alaska Native villages, we also collected and analyzed pertinent laws, regulations, policies, protocols, and reports from IHS, Justice, and other entities. On the basis of initial interviews and responses from our survey of hospitals, we selected a nonprobability sample of IHS and tribally operated hospitals with which to conduct semistructured interviews on challenges they face in collecting and preserving medical forensic evidence. We chose 15 hospitals according to a series of selection criteria that included geographic location, remoteness, whether the state or federal government had criminal jurisdiction in Indian country served by the hospital, and whether the hospital was IHS or tribally operated. Additionally, because we used a nonprobability sample to select these IHS and tribally operated hospitals to interview, the information we gathered in our semistructured interviews cannot be generalized to all hospitals and instead represents the perspectives only of these hospitals' providers and stakeholders. We also interviewed many victim advocacy groups; federal and state prosecutors; and federal, state, and local law enforcement agencies that play a role in responding to and prosecuting sexual assault and domestic violence cases in most of the locations these 15 hospitals serve. We reviewed and analyzed our interviews and supporting documentation to identify systemic and regionally specific challenges. Finally, to identify additional factors that federal prosecutors may consider when determining whether to prosecute cases of sexual assault and domestic violence, we reviewed relevant studies about these crimes and reviewed standards related to decisions by law enforcement to refer, or decisions by prosecutors to accept, a matter for criminal prosecution. We conducted this performance audit from October 2010 through October 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
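To make the classification step above concrete, the following minimal sketch, written in Python for illustration, shows how a hospital's zip code might be linked to a rural-urban commuting area (RUCA) category and then collapsed into the report's two-category scheme. It is not GAO's actual analysis code: the zip codes, the category assignments, and the classify_hospital function are hypothetical placeholders, although actual RUCA files from the Economic Research Service are likewise keyed by zip code.

# Illustrative sketch only; not GAO's analysis code. The zip codes and
# category assignments below are hypothetical placeholders, not real
# rural-urban commuting area (RUCA) data.

# Hypothetical lookup: hospital zip code -> four-way RUCA category.
RUCA_CATEGORY_BY_ZIP = {
    "00001": "isolated",
    "00002": "small rural",
    "00003": "large rural",
    "00004": "urban",
}

# Collapse the four RUCA categories into the report's two-way scheme:
# "isolated" and "small rural" count as remote; "large rural" and
# "urban" count as urban.
TWO_WAY = {
    "isolated": "remote",
    "small rural": "remote",
    "large rural": "urban",
    "urban": "urban",
}

def classify_hospital(zip_code: str) -> str:
    """Return 'remote' or 'urban' for a hospital, given its zip code."""
    four_way = RUCA_CATEGORY_BY_ZIP[zip_code]  # link zip code to RUCA category
    return TWO_WAY[four_way]                   # collapse four categories to two

if __name__ == "__main__":
    for z in ("00001", "00004"):
        print(z, classify_hospital(z))  # prints: 00001 remote / 00004 urban

In practice, the linkage would be a join between the hospital list and the full RUCA file rather than a hand-built dictionary, but the two-step structure of the classification is the same.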
This questionnaire asks for information about medical forensic examinations done in cases of sexual assault or domestic violence for adults and/or children and information on whether or not your facility has, or ever had, a program offering such medical forensic examination services. The U.S. Government Accountability Office (GAO) is an agency that assists the U.S. Congress in evaluating federal programs. We have been asked to provide Congress with information about the capability of the Indian Health Service (IHS) to collect and preserve evidence in cases of sexual assault/abuse and domestic violence (involving adults or children) for criminal prosecution. The intent of this questionnaire is to determine which IHS and tribal hospitals have medical forensic examiner programs or provide the services of a medical forensic examiner in cases of sexual assault and domestic violence (involving adults and/or children). For the purposes of this questionnaire, the medical forensic examination is the medical treatment of a patient as well as the collection of forensic evidence. Specifically, the forensic component could include performing a forensic evidence collection kit sometimes referred to as a "rape kit", gathering a medical forensic history, conducting an exam, documenting biological and physical findings, and collecting evidence from the patient. We recognize that there is a continuum of forensic evidence collection services that can occur depending on the availability of staff and the medical condition of the victim. Your facility was selected because it is one of the 47 hospitals operated by IHS or by a tribe or consortium with a contract to provide services. It should take you about 5 to 10 minutes to complete this questionnaire. The person with the most knowledge of the forensic examination program should complete this questionnaire for the entire facility. If you feel you are not the most knowledgeable person in your facility about these exams, please contact Kyle Stetler (contact information below) and let him know who you feel would be the best person to complete it, and we will arrange to send it to that person. Your cooperation is critical to providing the Congress complete and balanced information about the capability of IHS to collect and preserve evidence in cases of sexual assault/abuse and domestic violence.

Completing and Returning the Questionnaire

Please complete and return this questionnaire as soon as possible, but no later than Thursday, April 7, 2011. After receiving your responses, we may also want to follow up with some of you by telephone to better understand your program or how you operate in lieu of a program. To answer the questions, first open the attached MS Word file and save the file to your computer. Then enter your responses directly in the saved document following the instructions below. Once the questions are completed, please return them by attaching the saved document to an e-mail message to Stetlerk@gao.gov. Or mail to 701 5th Ave., Suite 2700, Seattle, WA 98104.

Instructions for Completing the Questions Onscreen

Please use your mouse to navigate, clicking on the field or check box you wish to answer. To select a check box or a button, click on the center of the box. To change or deselect a check box response, click on the check box and the 'X' will disappear. To answer a question that requires that you write a comment, click on the answer box and begin typing. The box will expand to accommodate your answer.
You are not limited to the amount of space you see on the screen. If you have additional clarifications or comments on any of the questions, please include those in the comment box at the end of this document or in a separate document.

Title: Facility/Program Name:

SECTION A. ADULT VICTIMS OF SEXUAL ASSAULT

1. Currently, if an adult victim of sexual assault comes into your facility, with what frequency does your facility conduct a medical forensic examination, that is, the medical treatment of a patient as well as the collection of forensic evidence? (Specifically, the forensic component could include such things as performing a forensic evidence collection kit sometimes referred to as a "rape kit", gathering a medical forensic history, conducting an exam, documenting biological and physical findings, and collecting evidence from the patient.) Typically or always conducts / Sometimes conducts / Rarely conducts / Never conducts

2. If the frequency with which your facility conducts these medical forensic examinations has substantially changed in the last five years, please describe below. The box will expand to fit your answer. NOTE: If you answered "Never conducts" to Question 1, please skip to Question 7.

3. If your facility conducts medical forensic examinations in cases of adult sexual assault, which types of providers typically conduct medical forensic examinations? For each row, please check all that apply. [Checkbox grid of provider types, including Physician's Assistant and Other (specify below), with a "Do not have this type of provider" option.]

4. If your facility conducts medical forensic examinations in cases of adult sexual assault, what is the level of training of the providers who typically conduct these examinations? For each row, please check all that apply. [Checkbox grid of provider types, including Physician's Assistant, Nurse Practitioner / Advanced Practice Nurse, and Other (specify below), with a "No providers of this type have specific forensic training or do not have this type of provider" option.]

5. Has there ever been an extended period of time, during the last 5 years, when there was no one available to conduct the medical forensic examinations for adult victims of sexual assault? Yes / No (if No, skip to Question 7)

6. If yes, please describe the circumstances. The boxes will expand to fit your answer.

7. Does your facility (ever) refer adult sexual assault patients someplace else for medical forensic examinations? Yes / No (if No, skip to Question 9)

8. If checked "Yes," please specify where and under what circumstances.

SECTION B. ADULT VICTIMS OF DOMESTIC VIOLENCE

9. If an adult victim of domestic violence comes into your facility, with what frequency does your facility conduct a medical forensic examination, that is, the medical treatment of a patient as well as the collection of forensic evidence? Typically or always conducts / Sometimes conducts / Rarely conducts / Never conducts

10. If the frequency with which your facility conducts these medical forensic examinations has substantially changed in the last five years, please describe below. The box will expand to fit your answer. NOTE: If you answered "Never conducts" to Question 9, please skip to Question 15.

11. If your facility conducts medical forensic examinations in cases of adult domestic violence, which types of providers typically conduct medical forensic examinations? For each row, please check all that apply. [Checkbox grid as in Question 3.]

12. If your facility conducts medical forensic examinations in cases of adult domestic violence, what is the level of training of the providers who typically conduct these examinations? For each row, please check all that apply. [Checkbox grid as in Question 4.]

13. Has there ever been an extended period of time, during the last 5 years, when there was no one available to conduct the medical forensic examinations for adult victims of domestic violence? Yes / No (if No, skip to Question 15)

14. If yes, please describe the circumstances.

15. Does your facility (ever) refer adult domestic violence patients someplace else for medical forensic examinations? Yes / No (if No, skip to Question 17)

16. If you checked "Yes," please specify where and under what circumstances.

SECTION C. CHILD VICTIMS OF SEXUAL ABUSE

17. If a child victim of sexual abuse comes into your facility, with what frequency does your facility conduct a medical forensic examination, that is, the medical treatment of a patient as well as the collection of forensic evidence? Typically or always conducts / Sometimes conducts / Rarely conducts / Never conducts

18. If the frequency with which your facility conducts these medical forensic examinations has substantially changed in the last five years, please describe below. The box will expand to fit your answer. NOTE: If you answered "Never conducts" to Question 17, please skip to Question 23.

19. If your facility conducts medical forensic examinations in cases of child sexual abuse, which types of providers typically conduct medical forensic examinations? For each row, please check all that apply. [Checkbox grid of provider types, including Physician's Assistant and Other (specify below), with a "Do not have this type of provider" option.]

20. If your facility conducts medical forensic examinations in cases of child sexual abuse, what is the level of training of the providers who typically conduct these examinations? For each row, please check all that apply. [Checkbox grid of provider types, including Physician's Assistant, Pediatric Nurse Practitioner, and Other (specify below), with a "No providers of this type have specific forensic training or do not have this type of provider" option.]

21. Has there ever been an extended period of time, during the last 5 years, when there was no one available to conduct the medical forensic examinations for child victims of sexual abuse? Yes / No (if No, skip to Question 23)

22. If yes, please describe the circumstances.

23. Does your facility (ever) refer child sexual abuse patients someplace else for medical forensic examinations? Yes / No (if No, skip to Question 25)

24. If you checked "Yes," please specify where and under what circumstances.

SECTION D. CHILD VICTIMS OF PHYSICAL ABUSE

25. If a child victim of physical abuse comes into your facility, with what frequency does your facility conduct a medical forensic examination, that is, the medical treatment of a patient as well as the collection of forensic evidence? Typically or always conducts / Sometimes conducts / Rarely conducts / Never conducts

26. If the frequency with which your facility conducts these medical forensic examinations has substantially changed in the last five years, please describe below. The boxes will expand to fit your answer. NOTE: If you answered "Never conducts" to Question 25, please skip to Question 31.

27. If your facility conducts medical forensic examinations in cases of child physical abuse, which types of providers typically conduct medical forensic examinations? For each row, please check all that apply. [Checkbox grid of provider types, including Physician's Assistant, Pediatric Nurse Practitioner / Advanced Practice Nurse, and Other (specify below).]

28. If your facility conducts medical forensic examinations in cases of child physical abuse, what is the level of training of the providers who typically conduct these examinations? For each row, please check all that apply. [Checkbox grid as in Question 20.]

29. Has there ever been an extended period of time, during the last 5 years, when there was no one available to conduct the medical forensic examinations for child victims of physical abuse? Yes / No (if No, skip to Question 31)

30. If yes, please describe the circumstances.

31. Does your facility (ever) refer child physical abuse patients someplace else for medical forensic examinations? Yes / No (if No, skip to Question 33)

32. If you checked "Yes," please specify where and under what circumstances.

33. Does your facility have the capacity to perform medical forensic examinations for adult or child victims of sexual assault and/or domestic violence 24 hours a day, 7 days a week? Yes / No / No program (if No program, please skip to Question 36)

34. What are the current days and hours of operation for your medical forensic examiner staff or program that treats adult or child victims of sexual assault and/or domestic violence? Please describe in the box below if the hours are different for children or adults. Please indicate time in 24-hour clock format. If you are not open/available during one or more time slots, please type N/A in that time slot. [Schedule grid of from/to time slots for each day of the week.]

35. Please describe, if applicable, other provider/staff availability for children or adults.

36. Are there any (other) IHS or tribal clinics in your service area offering medical forensic examinations to child or adult victims of sexual assault or domestic violence? Yes / No / Don't know (if No or Don't know, please skip to Question 38)

37. If there are other IHS or tribal clinics in your service area to whom you may refer medical forensic examinations for child or adult victims of sexual assault or domestic violence, what are the names of the clinics and their contact information, to the extent it is available (please provide for up to 3 clinics):

38. Is there any additional information that you would like to provide in regards to medical forensic examinations?

Thank you very much for your participation! Please save your responses before exiting and return the questionnaire by attaching the document to an e-mail message to StetlerK@gao.gov.

Legend: ■ = Typically performs; ○ = Does not typically perform (i.e., never, rarely, or sometimes performs medical forensic exams) On follow-up with San Carlos Hospital, we found that it does not typically perform medical forensic exams for adults, although its survey response said it did perform such exams. Therefore, the number of hospitals typically performing exams changed from a reported value of 27 to an actual value of 26 in our report. In addition to the individual contact named above, Jeffery D. Malcolm (Assistant Director), Ellen W.
Chu, Katherine Killebrew, Ruben Montes de Oca, Kim Raheb, Kelly Rubin, Jeanette M. Soares, Kyle Stetler, Shana B. Wallace, and Tama R. Weinberg made key contributions to this report.
The Justice Department has reported that Indians are at least twice as likely to be raped or sexually assaulted as all other races in the United States. Indians living in remote areas may be days away from health care facilities providing medical forensic exams, which collect evidence related to an assault for use in criminal prosecution. The principal health care provider for Indians is the Department of Health and Human Services' Indian Health Service (IHS), which operates 45 hospitals or funds tribes to operate them. In response to a Tribal Law and Order Act of 2010 mandate, GAO examined (1) the ability of IHS and tribally operated hospitals to collect and preserve medical forensic evidence involving cases of sexual assault and domestic violence, as needed for criminal prosecution; (2) what challenges, if any, these hospitals face in collecting and preserving such evidence; and (3) what factors besides medical forensic evidence contribute to a decision to prosecute such cases. GAO surveyed all 45 IHS and tribally operated hospitals and interviewed IHS and law enforcement officials and prosecutors. GAO's survey of IHS and tribally operated hospitals showed that the ability of these hospitals to collect and preserve medical forensic evidence in cases of sexual assault and domestic violence--that is, to offer medical forensic services--varies from hospital to hospital. Of the 45 hospitals, 26 reported that they are typically able to perform medical forensic exams on site for victims of sexual assault, while 19 reported that they choose to refer sexual assault victims to other facilities. The hospitals that provided services generally began to do so in response to an unmet need, not because of direction from IHS headquarters, according to hospital officials. Partly as a result, levels of available services have fluctuated over time. GAO found that the utility of medical forensic evidence in any subsequent criminal prosecution depends on hospital staff's properly preserving an evidentiary chain of custody, which depends largely on coordinating with law enforcement agencies. IHS has made significant progress since 2010 in developing required policies and procedures on medical forensic services for victims of sexual assault; nevertheless, challenges in standardizing and sustaining the provision of such services remain. In March 2011, IHS took a sound first step in what is planned to be an ongoing effort to standardize medical forensic services by issuing its first agencywide policy on how hospitals should respond to adult and adolescent victims of sexual assault. Remaining challenges include systemic issues such as overcoming long travel distances between Indian reservations or Alaska Native villages and IHS or tribal hospitals and developing staffing models that overcome problems with staff burnout, high turnover, and compensation, so that standardized medical forensic services can be provided over the long term. In addition, other challenges include establishing plans to help ensure that IHS hospitals consistently implement and follow the March 2011 policy, such as with training guidelines, and developing policies on how IHS hospitals should respond to domestic violence incidents and sexual abuse involving children who have not yet reached adolescence--neither of which is included in the March 2011 policy. GAO found that IHS is aware of these challenges and has initiatives under way or under consideration to address them.
Decisions to prosecute sexual assault or domestic violence cases are based on the totality of evidence, one piece of which is medical forensic evidence collected by hospitals. In some cases, medical forensic evidence may be a crucial factor; in other cases, however, it may not be relevant or available. Law enforcement officers and prosecutors said that they also consider several other factors when deciding to refer or accept a case for prosecution. For example, some victims in small reservations or isolated villages may refuse to cooperate or may retract their initial statements because of pressure from community members who may depend on the alleged perpetrator for necessities. As a result, the victim may be unavailable to testify. Several prosecutors also told us that the availability of the providers who perform medical forensic exams to testify is an important factor, because such testimony can help demonstrate that an assault occurred or otherwise support a victim's account. IHS's March 2011 policy, however, does not clearly and comprehensively articulate the agency's processes for responding to subpoenas or requests for employee testimony. GAO is making five recommendations aimed at improving IHS's response to sexual assault and domestic violence, including that IHS develop an implementation and monitoring plan for its new sexual assault policy and modify sections of the policy regarding required training and subpoenas or requests to testify. The Department of Health and Human Services and the state of Alaska generally agreed with GAO's findings and recommendations.
In 2000, the Council of State Governments reported that more than 40 states offered tax and financial incentives to businesses for activities such as relocating, expanding, buying equipment, or creating and maintaining jobs. The use of incentives to attract and retain businesses has been an issue of debate for many years. Proponents maintain that economic development incentives are an effective means by which states and communities can compete for jobs. Opponents contend that the dollars spent to provide incentives would be better used to support activities believed to have more impact on a community's economic development, such as improvements to infrastructure and investments in education to develop a competitive labor pool. While states and localities compete with one another to attract businesses, some states and localities have attempted to curtail the use of economic development funds to relocate jobs. According to two policy groups promoting accountability in economic development, three cities—Austin, Texas; Gary, Indiana; and Vacaville, California—and nine states—Alabama, Connecticut, Florida, Iowa, Maryland, New Mexico, New York, Ohio, and Wisconsin—prohibit using city and state resources, respectively, to relocate jobs within their boundaries. For example, both policy groups state that the Gary, Indiana, city ordinance prohibits tax abatements for the relocation of existing jobs from outside the corporate limits of the city. One of the groups also said that in Puerto Rico, the governor may refuse any business application for tax incentives if the relocation would adversely affect the business' employees in any state in the United States. Regional entities also have established formal and informal agreements to curtail the competition for businesses and jobs within their boundaries. These entities include the Metro Denver Economic Development Corporation; the tri-county region comprising Broward, Miami-Dade, and Palm Beach counties in Florida; and Contra Costa and Alameda counties in California. In the fourth quarter of 2006, 6.8 million workers were unemployed, compared with 145.6 million employed. According to the Bureau of Labor Statistics (BLS), employers reported that a total of 894,739 workers lost their jobs because of extended layoffs in 2006 that resulted from a variety of economic factors, such as bankruptcy and reorganizations. A BLS survey of employers found that 20,199 of these losses (about 2 percent) occurred because of business relocations within the United States, the majority across state lines. Another source—the National Establishment Time Series (NETS)—uses proprietary Dun & Bradstreet data on U.S. companies to track business relocations. According to a representative of the company that maintains the NETS data, more than 2.8 million businesses have relocated since 1990, and about 100,000 of these relocations (or almost 4 percent) occurred across state lines. A number of federal programs fund or support economic development activities. In prior work, we identified activities that are directly related to economic development—planning economic development activities; constructing or renovating nonresidential buildings; establishing business incubators; constructing industrial parks; constructing and repairing roads and streets; and constructing water and sewer systems.
These programs typically are available through loans, loan guarantees, and project and formula grants to applicants that include individuals; local, state, territorial, and tribal governments; and nonprofit organizations. Appendix II provides a description of the nine federal economic development programs that we identified as having nonrelocation provisions, including information about program funding and how the programs operate. We identified 17 large federal programs that state and local governments can use to attract businesses. These programs offer assistance in the form of loans and loan guarantees, grants, job-training services, and tax benefits that can serve as incentives to businesses. Of the 17 economic development programs, states appear to have marketed 14 as incentives for businesses. However, according to academic experts who study economic development incentives and site-selection consultants, the amount of federal funds used as incentives is likely more limited than the amount of state and local funds used as incentives. State and local governments have varying discretion over the use of the federal funds but can leverage federal funds to free their own resources for incentives or for other purposes that support businesses. Finally, academic studies on incentives and site-selection consultants have questioned whether incentives offered by state and local governments influence a business' decision to relocate or expand operations. We identified 17 large federal economic development programs that state and local governments can use as incentives to attract and retain businesses, based on a search of the CFDA database, Tax Expenditure Compendium, and state economic development Web sites. As shown in table 1, five agencies administer the 17 programs, which offer a range of assistance or services (such as loans, grants, tax benefits, and training programs) to businesses. Of the 17 programs we identified, five were direct loan or loan guarantee programs (the SBA 7(a) and 504 programs, USDA's B&I program, Farm Ownership Loans, and Farm Operating Loans); four were tax incentive programs (IRS's New Markets Tax Credit, its two private activity bond programs, and HUD's Renewal Communities); three were programs that support job training services (the WIA Adult, Dislocated Workers, and Youth programs); and five were programs that offer more than one type of financial assistance, such as grants, direct or guaranteed loans, or tax incentives (the two HUD CDBG programs, HUD EZ, USDA EZ/EC, and USDA Community Facilities). State and local governments also can use federal economic development resources to supplement their existing resources to attract additional investment and potentially use federal economic development funds to free up money for incentives they otherwise would have spent on economic development. For example, according to USDA officials, EZs and ECs often leverage federal program resources to obtain other funds, thereby attracting businesses. Similarly, businesses located in EZs and ECs can claim various state and federal tax credits, including IRS's Work Opportunity Tax Credit, which provides tax credits to employers hiring individuals residing in an EZ or EC. According to our January 2007 report on the New Markets Tax Credit program, these credits can be packaged with other types of incentives, such as EZ/EC incentives or state and local tax abatements, to make the investments in economically distressed communities more attractive to investors such as banks.
We previously have reported that more than one-fourth of New Markets Tax Credit projects were located in federally designated EZs. State and local governments also can use federal economic development funds to support economic development activities, thereby freeing up state and local funds for business incentives or other uses. Based on our review of state economic development Web sites, states appear to market all but 3 of the 17 programs (Community Facilities Loans and Grants, Farm Ownership Loans, and Farm Operating Loans being the exceptions). The programs that appear to be marketed more than others are the CDBG programs, SBA's 7(a) and 504 loan guarantees, and private activity bonds (at least 19 states appear to advertise each of these as incentives). Benefits from EZs, ECs, or Renewal Communities and job-training programs funded with WIA funds were the next most marketed incentives, with at least nine states offering them. This appears to be somewhat consistent with what site-selection consultants told us about the specific federal incentives they see in business incentive packages. The consultants told us that they see CDBG loans funded with Entitlement and State block grants, private activity bonds, EZ/EC benefits and, increasingly, customized job-training funds in incentive packages. In contrast to the results of our Web site reviews, the consultants did not cite SBA loans as being among federal resources included in business incentive packages. Although federal programs are marketed as business incentives, the amount of federal funds used as incentives appears to be more limited than the amount of state and local funds used. While the precise amount of federal funds used as incentives is not available, the Congressional Budget Office (CBO) estimated that the federal government spent $27.9 billion to support commerce and business, in addition to $2.2 billion on credit programs, in 1995. CBO also indicated that the federal government provides the bulk of its support to businesses through tax provisions. CBO estimated tax revenue losses of at least $32.2 billion for the provision of the tax code that yielded the largest amount of direct support for businesses—depreciation of capital assets in excess of the alternative depreciation system—but did not provide total estimates of foregone revenue associated with all tax provisions. It is not clear from the CBO report whether and to what extent state and local governments also used these programs and tax provisions as incentives. We reviewed academic studies on economic development business incentives offered from 1995 to 2005 and interviewed the authors of these studies. The academic literature on economic business incentives generally focuses on state and local government incentives rather than federal incentives. Academic studies estimate that state and local governments spent from $20 billion to $50 billion annually on business incentives. While the amount of federal funds used as business incentives has not been measured to any great extent, some researchers with whom we spoke said that the amount of federal funds used as business incentives is likely limited compared to the amount of state and local funds used as incentives. One limitation in developing estimates of federal, state, and local funds spent on incentives is defining what constitutes a business incentive.
For example, a state or local government might offer indirect benefits, such as infrastructure improvements, to attract or retain businesses, but these might not be counted in estimates as business incentives. Moreover, although the amount of federal economic development funds available as incentives appears to be limited, money can be fungible, or freely interchangeable, at the state and local level. Thus, even though the amount of federal funds used as incentives might be limited, state and local governments could leverage those funds to free up their own resources for incentives or for other purposes that support businesses. Furthermore, state and local governments have less discretion over the use of federal resources than they do over their own, but the degree of discretion varies with the program. For at least four of the programs (SBA's 7(a) and 504 loan programs, USDA's B&I loan program, and IRS's New Markets Tax Credit), state and local governments have no direct role in funding decisions. For these programs, third-party lenders, development corporations, or the federal government decide which businesses receive funds. In contrast, other programs provide states with more discretion over how they can use funds. For example, under WIA, states and local areas can use the discretionary and statutory funding from Labor to develop job training and employer service programs, including customized job training, which we previously have reported can be an important factor in a company's decision to locate in a particular area. Finally, the academic literature we reviewed questioned the importance of incentives in location or relocation decisions. These studies, as well as published articles in site selection industry magazines, indicate that other considerations might outweigh economic development incentives when companies decide where to locate. The studies explained that the critical factors in these decisions were more likely to be the size and education of the labor force; local infrastructure, such as telecommunication lines; transportation options, such as access to ports, roads, and rail; and access to consumer markets. However, the studies and consultants acknowledged that the incentives state and local governments offered could influence a business' decision when the business already had narrowed its choice to three or four locations. We determined that 9 of the 17 large federal economic development programs that state and local governments can use as business incentives contain statutory prohibitions against using funds to relocate businesses if the relocation would cause unemployment. Seven of the federal economic development programs with nonrelocation provisions were grant programs, and the remaining two were loan guarantee programs. The number of job losses and other requirements needed to trigger the nonrelocation provision varied by program. Nonrelocation provisions for the nine programs were enacted over a 40-year period. Recently, one agency has sought but not obtained congressional removal of the nonrelocation provision from its program. Based on our review of laws and regulations for the 17 large federal economic development programs that state and local governments can use as business incentives, we determined that nine contain statutory prohibitions against using program funds to relocate businesses. (See app. II for a more detailed description of each of these nine programs.)
They are the two HUD CDBG programs (Entitlement and State programs); the WIA Adult, Dislocated Workers, and Youth programs; USDA and HUD's respective EZ/EC programs (for designated rural and urban communities, respectively); USDA's B&I program; and SBA's 504 program. SBA voluntarily applies a nonrelocation provision to its 7(a) program. All nine programs that we identified with statutory restrictions on employer relocations use job loss in a relocating company's original location as the primary criterion for applying a nonrelocation provision, but the job loss threshold varies by program. As shown in table 2, the statutory language for three programs—HUD's and USDA's EZ/EC programs and USDA's B&I program—does not specify a job loss threshold, but these agencies interpret the job loss threshold as one job lost. The three WIA programs specify a job loss threshold of one job lost. The remaining three—HUD's CDBG Entitlement and State programs and SBA's 504 program—have higher job loss thresholds. In addition to job loss, these three programs specify other conditions for applying a nonrelocation provision, such as requiring that the relocations occur across geographically defined areas. HUD regulations for the CDBG Entitlement and State programs make business relocations ineligible for funding if they involve certain job losses. Any relocation involving the loss of 500 or more jobs is prohibited. In contrast, relocations involving the loss of 25 or fewer jobs are exempt from the nonrelocation provision. For relocations involving more than 25 but fewer than 500 jobs, the nonrelocation provision applies if the number of jobs lost equals or exceeds one-tenth of one percent of the number of employed persons in the labor market experiencing the loss. The CDBG program's statute does not specify a job loss threshold; it only requires that the agency prohibit funding for business relocations that are likely to result in a significant loss of employment. According to a HUD official, HUD chose to exempt any relocation involving 25 or fewer jobs because losses of this magnitude likely would not significantly affect a labor market of any size. By exempting these smaller businesses from the nonrelocation provision, this official said that the CDBG program retains some flexibility for entitlement and nonentitlement communities to provide funds to businesses to promote job growth. This official further noted that HUD also determined that relocations involving 500 or more jobs would be significant for labor markets of any size. SBA's 504 program, which guarantees the portion of a business loan that nonprofit certified development companies make to businesses, features potentially higher job loss thresholds. For example, SBA regulations require that loan applications be denied if the relocation would reduce the business's workforce by at least one-third or would result in serious unemployment in the original business location or any other area of the country. SBA regulations allow for the waiver of these job loss limits if the relocations would be key to the economic well-being of the business or if the benefits to the applicant and the receiving community would outweigh the negative impact on the community from which the applicant would move. As noted previously, three of the programs specify conditions in addition to job loss for applying the nonrelocation provision, such as relocations occurring across defined geographic areas and funding thresholds; the CDBG variant of these rules is illustrated in the sketch below.
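To make the CDBG thresholds concrete, the job-loss test can be expressed as a short decision rule. The following Python sketch is our own illustration rather than HUD guidance; the function name and inputs are hypothetical, and the cross-labor-market-area condition discussed next is folded in as an input.

    def cdbg_relocation_prohibited(jobs_lost, labor_market_employment,
                                   crosses_labor_market_area):
        """Illustrative test of the CDBG nonrelocation thresholds.

        jobs_lost: jobs lost in the labor market area the business leaves.
        labor_market_employment: employed persons in that labor market area.
        crosses_labor_market_area: whether the move crosses labor market areas.
        """
        if not crosses_labor_market_area:
            return False  # the provision applies only to moves across areas
        if jobs_lost <= 25:
            return False  # losses of 25 or fewer jobs are exempt
        if jobs_lost >= 500:
            return True   # losses of 500 or more jobs are always prohibited
        # In between, the provision applies when the loss equals or exceeds
        # one-tenth of one percent of employment in the losing labor market.
        return jobs_lost >= 0.001 * labor_market_employment

    # Example: a loss of 100 jobs in a labor market of 50,000 employed
    # persons (threshold: 50 jobs) would trigger the provision.
    assert cdbg_relocation_prohibited(100, 50_000, True)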
For example, HUD's CDBG regulations for both the Entitlement and State programs prohibit funding for a business that relocates to a different labor market area. USDA's B&I program, through which USDA guarantees up to 80 percent of a loan that an approved third-party lender makes to businesses, statutorily prohibits program funds from supporting business relocations in cases in which USDA assistance exceeds $1 million. Our review of congressional reports indicates that this minimum funding threshold is intended to expedite the processing of small business applications, based on the reasoning that the relocation of small businesses would pose no threat to the labor force or other businesses in the original location. Congressional approval of the nonrelocation provisions for the nine large programs was spread over a 40-year period (1958 to 1998). Table 3 shows the date on which the nine programs became subject to nonrelocation provisions. One of the federal agencies has sought but not obtained removal of a nonrelocation provision from its program. USDA officials said that since 2001 the agency has sought congressional support for the removal of the nonrelocation provision for the B&I program, citing administrative burden and other problems involved with ensuring compliance. A USDA official explained that while Labor has the statutory responsibility to analyze labor market information related to B&I applications—to help ensure that funding will not result in the transfer of any employment or business activity—Labor does not receive separate funding to support analysis of this information. According to USDA, the agency has sent between 6 and 18 B&I applications to Labor for review in the past few years. Labor confirmed that it does not receive separate funding to support its analysis, but said the agency reviews all of the applications USDA provides. Federal agencies administering the nine programs with nonrelocation provisions used various procedures to help ensure that program recipients complied with overall program goals and requirements, but the extent to which these procedures specifically addressed nonrelocation provisions was limited. The Guide to Opportunities for Improving Grant Accountability states that organizations awarding grants need effective internal control systems to provide adequate assurance that funds are properly used and achieve intended results. The two loan guarantee programs—USDA's B&I and SBA's 504 programs—relied on screening mechanisms (written review guidance and eligibility checklists or third-party verification of data) to help ensure compliance with nonrelocation provisions. In contrast, officials who administer the grant programs we reviewed noted inherent limitations in using screening mechanisms for such programs, given that program recipients (states and local governments) do not always know at the time of application which businesses later will apply for and obtain assistance through the program. Because of the inherent limitations of screening, the agencies administering grant programs primarily relied on monitoring recipients and subrecipients to help identify instances of potential noncompliance. However, only one of the grant programs we reviewed had developed monitoring guidance specifically tailored to the nonrelocation provision. Without structured guidance and procedures in place, agencies have limited assurance that recipients and subrecipients are complying with statutory and regulatory requirements and spending funds on allowable activities.
As stated in the Guide to Opportunities for Improving Grant Accountability, organizations that award and receive grants need effective internal control systems to help ensure that grants are awarded to eligible entities for intended purposes and in accordance with applicable laws and regulations. As shown in table 4, each of the four federal agencies we reviewed had screening procedures covering applicants' eligibility to receive funds. The agencies used at least one of the following mechanisms: written application or plan review guidance, eligibility checklists, self-certification forms, third-party verification of data, or business statements of compliance. However, only four of the nine programs—including both loan guarantee programs—used screening mechanisms that specifically addressed a relevant nonrelocation provision. All four agencies had procedures for reviewing applications or plans to help ensure that applicants were eligible to receive funds under the program. The two loan guarantee programs—USDA for its B&I program and SBA for its 504 program—had formal written guidance that specifically addressed the screening of applicants for compliance with the nonrelocation provision. USDA's formal written guidance listed the nonrelocation provision as one of the ineligible purposes of a B&I loan guarantee. SBA also incorporated specific references to its nonrelocation provision into its standard operating procedures, which are addressed to SBA personnel and lending partners who review and approve 504 loans. SBA also required its 504 lending partners to complete an eligibility checklist for each loan guarantee applicant. One of the items on the eligibility checklist seeks to determine whether 504 loan proceeds will be used to "relocate any operations of a small business, which will cause a net reduction of one-third or more in the workforce of the relocating small business or a substantial increase in unemployment in any area of the country." In reviewing the supporting documentation for 10 approved loans, we found that certified development companies were using the eligibility checklist SBA had developed to screen 504 loan applicants. Each of the seven grant programs had formal written guidance covering the review of required plans, but with the exception of USDA's EZ/EC program, the guidance did not specifically address the nonrelocation provisions for each program. Under the CDBG programs, recipients (entitlement communities and states) must submit an action plan to HUD each year that broadly identifies the activities that they will undertake to meet the objectives of previously submitted consolidated plans. Labor requires states to submit strategic plans for WIA describing how a state intends to use WIA funds. Both agencies use written checklists as guidance to determine whether the submitted plans are complete, and both agencies' guidance includes an item to determine whether applicants have assured their compliance with applicable laws and regulations. HUD officials noted that the agency's written guidance on review of action plans does not require analysis of the nonrelocation provision, in part because CDBG recipients generally do not know which businesses will apply for CDBG funding at the time the plans are developed and submitted to HUD. HUD officials explained that most CDBG recipients engaged in economic development activities have an "open window" approach, in that assistance is available to businesses on an "as needed" basis during the program year.
For the EZ/EC programs, USDA had formal written guidance for reviewing required application plans that referred to the program's nonrelocation provision, while HUD's written guidance did not specifically address the provision. Under the EZ/EC program, communities seeking EZ or EC designation submit (1) a strategic plan outlining the community's vision for revitalizing its distressed area; (2) a tax incentive utilization plan specifying how the community plans to use the tax benefits available under the program; and (3) an implementation plan providing detailed information on the activities and projects the community is undertaking to implement its strategic plan. HUD officials said that while the agency does not currently have review guidance specific to the nonrelocation provision, the agency has been revising a review manual to incorporate language specific to the provision and has been taking other steps, such as communicating directly with EZs regarding compliance and providing training to staff, to raise awareness of the provision and the need to comply with it. USDA officials said that EZ/EC review staff were told to reject any application for EZ/EC designation in which an applicant's strategic plan included evidence that the community intended to lure businesses from other communities. The officials said that review staff eliminated several applications for potential program designation because intent to relocate jobs was evident in the submitted plans. However, we were not able to verify this statement because USDA officials said that the strategic plans eliminated from contention were discarded and are no longer available for review. Some officials, particularly those who administer grant programs, noted the limitations of reviewing applications and plans to identify instances of potential noncompliance with a nonrelocation provision. As noted above, HUD CDBG officials said that action plans for its Entitlement program were unlikely to identify specific businesses receiving funds because the communities do not always know which businesses would apply for assistance when they submitted the action plans. Similarly, the officials noted that action plans for the State CDBG program do not contain a list of proposed activities, but rather a description of the methods used to distribute funds to local governments. HUD officials noted that under the CDBG State program, individual states implement a method of distributing funds that may or may not include economic development activities and that in most cases the states evaluate applications from local governments to determine which activities to fund. As part of the application review process, USDA's EZ/EC and B&I programs require applicants to sign self-certification forms that include a specific reference to the nonrelocation provision for each program. For example, USDA's application for the EZ/EC program contains a form in which an applicant self-certifies that "no action will be taken to relocate any business establishment to the nominated area." According to USDA EZ/EC officials, this required certification sends a clear message to the EZ/EC community that relocation is not permitted under the program.
Similarly, USDA's B&I program requires loan applicants applying for loans of more than $1 million that will increase employment by more than 50 employees to self-certify that "it is not the intention of the applicant or any related company to relocate any present operation as a result of the proposed project." Other agencies, such as HUD for both its CDBG and EZ programs and Labor for its WIA programs, require more general statements of compliance. For example, HUD's application for Round II of the EZ program contained a form in which an applicant self-certified that "the nominating entities shall comply with state, local, and federal requirements, and have agreed in writing to carry out the Strategic Plan if designated." Similarly, HUD's CDBG program requires applicants to self-certify their compliance with "applicable laws," which HUD officials said includes the Housing and Community Development Act of 1974, as amended, which contains the nonrelocation provision. According to the officials, HUD saw no need or statutory basis to add a special certification for the nonrelocation provision, particularly since not all states or entitlement communities use CDBG funding for economic development purposes. Labor's statement of compliance, included in WIA state strategic plans, requires the governor of each state to assure that WIA funds "will be spent in accordance with the Workforce Investment Act and the Wagner-Peyser Act and their regulations, written Department of Labor guidance implementing these laws, and all other applicable federal and state laws and regulations." Labor officials noted that this general statement of compliance covers compliance with the nonrelocation provision. During our review of 30 approved USDA EZ/EC, HUD EZ, and Labor WIA grant applications (10 applications for each program), we found that recipients had completed the required self-certifications for each of the applications we reviewed. As part of the pre-approval process for the B&I program, USDA turns over to Labor, for independent third-party verification, information that certain loan applicants provide. For guaranteed loans in excess of $1 million that will increase employment by more than 50 jobs, USDA will send an applicant's certification of nonrelocation and the market and capacity information form to Labor for clearance. In turn, Labor sends the form to state-level workforce agencies, where the business' competitors are located, for analysis and direct solicitation of the competitors' comments. According to USDA officials, Labor must complete this third-party verification before USDA can approve a B&I loan guarantee request. Our review of loan documentation for 10 approved B&I loan applications indicated that both USDA and Labor carried out these procedures for the applications we reviewed. As discussed earlier in this report, USDA officials have been asking Congress to remove the nonrelocation provision from the B&I program, citing an administrative burden and costs incurred in helping to ensure compliance. Regulations for HUD's Entitlement and State CDBG programs and Labor's three WIA programs require grant recipients (such as a state or local government) to obtain a signed written statement of compliance with the nonrelocation provision from businesses before providing direct assistance to them. For example, under the CDBG programs, there is a two-step process.
First, businesses receiving CDBG assistance must submit a written statement to the recipient (entitlement community or state), subrecipient, community-based development organization, or nonprofit providing the assistance, indicating whether the activity will result in the relocation of jobs from one labor market area to another. Second, if the assistance will not result in the relocation of jobs covered by the statutory prohibition, the business must provide a certification that it has no plans to relocate jobs (in a manner that would violate the nonrelocation provision). However, these statements are not included in a recipient's application for funding (action plan), and thus HUD does not review them during the action plan review process. HUD officials noted that it would not be possible for an entitlement community to provide these statements to HUD with an action plan because, as previously noted, most entitlement communities do not know at that time which businesses will apply for CDBG assistance. Similarly, Labor's regulations for WIA require that local workforce investment boards conduct a pre-award review of businesses seeking job training funds, which includes obtaining a written certification from the business indicating whether WIA assistance is being sought in connection with past or impending job losses at other facilities. Our review of 10 approved WIA grants indicated that businesses had completed the required statements of compliance for each of those grants. With respect to HUD's CDBG program, we did find one case in which a HUD CDBG entitlement community recipient we contacted told us that its subrecipient (a nonprofit development corporation) was not obtaining the required written statements of compliance. An official from the entitlement community said that neither the entitlement community nor the subrecipient had developed formal procedures to help ensure compliance with the regulatory requirement. In addition, neither HUD nor Labor requires that recipients provide copies of completed written statements to HUD or Labor, although a HUD official noted that the written statements would be available to on-site reviewers during monitoring visits. HUD officials also said that HUD is revising a monitoring handbook to include a question addressing the business' written statements of compliance. We discuss agency monitoring procedures and guidance in greater detail in the next section. The Guide to Opportunities for Improving Grant Accountability states that once grants are awarded, agencies need to ensure that grant funds are used for intended purposes and in accordance with applicable laws and regulations. The guide also states that it is critical to identify, prioritize, and manage potential at-risk subrecipients to ensure that grant goals are reached and resources are properly used. Due to inherent limitations in using the screening process to help ensure compliance with nonrelocation provisions, other procedures, such as monitoring activities, become key controls. Having established written procedures in place helps to ensure that agencies achieve their monitoring objectives and that staff are consistently implementing monitoring procedures. Officials at some of the agencies we reviewed told us that they rely on complaints as a mechanism to monitor compliance with the employer nonrelocation provision.
A HUD official said that an employer relocation that resulted in significant job loss and involved the use of federal funds likely would result in the affected community or state raising a complaint to the federal agency or to its congressional representatives. HUD, Labor, SBA, and USDA officials all reported receiving few, if any, of these complaints, in some cases over the course of many years. For this reason, some officials did not consider the risk of noncompliance to pose a significant risk to the programs. However, this complaint-based approach is reactive and does not necessarily provide reasonable assurance of compliance. Standards for Internal Control in the Federal Government states that an agency's monitoring activities should be performed continually and be ingrained in agency operations. As shown in table 5, the four agencies administering programs with nonrelocation provisions used various other mechanisms, including on-site review, to monitor fund recipients. All of the agencies had formal written guidance covering the monitoring of program participants. However, only one program—HUD's EZ program—had a monitoring procedure that specifically addressed the nonrelocation provision. To effectively leverage limited staff resources, HUD and Labor told us that their respective agencies conduct on-site monitoring reviews in accordance with risk-based procedures intended to focus monitoring resources on areas requiring the most attention. For example, HUD's procedures for the EZ program specify factors for reviewers to consider when determining the scope of a review. These factors include funding amount, outstanding complaints related to noncompliance with a legal requirement, and unresolved monitoring or assessment issues. Similarly, for the CDBG program, reviewers consider factors such as the complexity of a state or entitlement community's activities and use of subrecipients to carry out funded activities. According to HUD CDBG officials, on-site monitoring is the most effective way to identify potential violations of the nonrelocation provision for the CDBG program. Labor also conducts on-site monitoring of states and a sample of local workforce investment agencies. As part of Labor's risk-based procedures, reviewers may consider factors such as the number of federal grants a state administers, a history of disallowed costs or administrative findings in previous reviews, and the percentage of grant funds subcontracted. USDA's monitoring for the EZ/EC program involves two staff members—one in a state office and the other in the national office—reviewing requests for drawdown that EZ/ECs make several times during the year. Drawdown requests include a specification of how an EZ or EC will use its funds. Prior to disbursing requested funds, USDA staff members review the request to ensure that the funds will be used to carry out the community's strategic plan (which includes a certification form that specifically refers to the nonrelocation provision and which USDA reviews at the time of initial application). In addition to reviewing drawdown requests, USDA staff in both the state and national offices review mandatory annual reports describing a community's progress in implementing its strategic plan. According to USDA officials, the review of annual reports also includes a review of any updates to the strategic plan to ensure that no relocation support has crept into the plan since the initial review.
A USDA official added that USDA staff have made on-site monitoring visits to all of the rural EZ/ECs. Officials of SBA's 504 and USDA's B&I programs told us that they do not monitor for compliance with the nonrelocation provision because, unlike in federal grant programs, in loan guarantee programs a federal agency can determine which specific businesses will receive assistance and for what purpose (relocation, equipment purchase, etc.) before the agency guarantees a loan. SBA officials explained that SBA and certified development companies (CDC) approve a project for 504 financing before construction begins, but SBA does not disburse loan funds or issue a debenture guarantee until after the project is completed. According to SBA officials, CDC staff review the completed project before closing on a loan, at which time loan funds are disbursed and a debenture guarantee issued. Similarly, USDA officials told us that their field staff verify uses for loan proceeds when they review a loan closing package, specifically the settlement statement, before guaranteeing a loan. USDA officials explained that once a loan is fully disbursed, subsequent monitoring of the use of loan proceeds focuses on other issues, such as the number of jobs created, rather than compliance with the nonrelocation provision, because the loan proceeds already have been used for their intended purposes. The emphasis on screening rather than monitoring seemed appropriate for the two loan guarantee programs since the federal agencies know which specific businesses are requesting funds and the purposes for which the funds will be used. HUD's EZ program was the only program we reviewed that had written monitoring guidance specific to the nonrelocation provision at the time of our review. As of July 2007, HUD had used this monitoring guidance in four on-site reviews. HUD's guide for the review of Round II EZ strategic plan compliance calls for review staff to determine whether there is "any evidence to indicate that the EZ is complying with the prohibition against assisting a business to relocate." The guide did not provide specific procedures or steps that staff should follow to make the assessment of compliance, but rather referred to the program's implementing regulation for the nonrelocation provision. HUD officials said that under current procedures, on-site reviewers rely on receiving complaints of noncompliance or on information obtained by asking open-ended questions about compliance to determine whether communities are complying. For the four reviews in which HUD had used the guidance at the time of our review, the narrative supporting the reviewer's assessment of compliance indicated that approved implementation plans, discussions with EZ staff regarding standard operating procedures, and a review of loan file documents were among the bases on which HUD reviewers determined that EZs were complying with the program's nonrelocation provision. HUD officials said that for additional on-site reviews planned for fiscal year 2007, the agency is considering reviewing implementation plans to specifically check for compliance with the nonrelocation provision. HUD officials said that they would focus on plans involving sites with potential for commercial development to determine whether HUD-approved activities or projects involving marketing or promotional efforts encouraged relocations to an EZ.
HUD and Labor officials told us that their agencies were developing monitoring guidance specific to the nonrelocation provision for the CDBG and WIA programs, respectively, but that such guidance is in draft form. As of July 2007, HUD and Labor had not finalized this guidance or used it in a monitoring review. HUD officials said that HUD expects to finalize the monitoring guidance tailored to the nonrelocation provision by December 31, 2007. The officials explained that HUD was developing monitoring guidance for inclusion in a forthcoming revision to a monitoring handbook that HUD uses for all of its major Office of Community Planning and Development grant programs, including the CDBG and EZ programs. HUD undertook the revisions because the current version of the handbook was issued prior to the promulgation of the CDBG program's nonrelocation provision in December 2005. HUD CDBG officials stated that including a question on compliance with the nonrelocation provision is intended to ensure that compliance reviews by HUD staff in this area would be consistent. Labor officials explained that their monitoring handbook for employment and training grant programs, including WIA programs, is generic and limited to examining core activities found in all of Labor's employment and training programs. In contrast, Labor's formula grant supplement to the monitoring handbook, currently under development and in draft form, will provide a more detailed examination of statutes, rules, and regulations specific to the formula-based programs once finalized. Labor officials said that the formula grant supplement has been tested in field offices and will address the nonrelocation provision. The officials said that they expect to publish the formula grant supplement in the latter half of calendar year 2007. State and local governments use incentives, including funds from federal economic development programs, to attract business investment and create jobs in their communities. Although it is difficult to determine the extent to which state and local governments use federal funds as business incentives, 9 of 17 large federal economic development programs contain statutory restrictions against using program funds to relocate jobs if the use of such funds creates unemployment. Thus, for these nine federal programs, the agencies charged with their administration are responsible for helping to ensure that program funds are used for intended purposes and in accordance with applicable laws and regulations, including compliance with nonrelocation provisions. Each of the four agencies that administer the programs with nonrelocation provisions used screening and monitoring mechanisms to help ensure that fund recipients were eligible to participate in the programs, were meeting program goals, and were complying with legal requirements. The two agencies administering the loan guarantee programs we reviewed—SBA for the 504 program and USDA for the B&I program—relied primarily upon screening mechanisms to help ensure that applicants would not use loan proceeds to relocate businesses and jobs. For these two programs, screening mechanisms may be sufficient since the agencies can determine which specific businesses will receive assistance and how the loan proceeds will be used. In such cases, a screening process can determine if loan funds will be used to support a business relocation.
In contrast, officials from the other programs we reviewed, particularly those that administer grant programs, noted limitations in using screening mechanisms for such programs. For example, with grant programs, fund recipients (e.g., states and local communities) do not always know which businesses will apply for or receive funding at the time the recipient submits an initial plan or application for funding. Acknowledging the limitations of screening for helping to ensure compliance with nonrelocation provisions, agency officials regarded on-site monitoring as the most effective way to detect an instance of potential noncompliance in their grant programs. However, officials also noted that they targeted their limited monitoring resources on recipients that posed the greatest risk. Furthermore, they maintained that noncompliance with nonrelocation provisions did not present a significant risk to the programs they administered because they received few or no complaints over the years and regarded complaints as a barometer for undertaking monitoring activities. We recognize that there are costs associated with monitoring program recipients for compliance with nonrelocation provisions. Nevertheless, a reactive approach in which agencies assume there are no problems because outside parties do not report them is, by itself, an insufficient means to help ensure that problems do not exist; moreover, federal internal control standards state that monitoring should be performed continually and be ingrained in agency operations. Further, USDA EZ/EC program officials said that they have rejected applications for zone designation because intent to relocate jobs was evident in the applications, providing evidence that applicants do sometimes seek to use program funds to lure businesses from one community to another. Given the relatively large size of some federal grant programs and their complicated funding structure (including the number of recipients and subrecipients involved in the process), it is important that agencies develop and use cost-effective approaches to identify, prioritize, and manage potential at-risk recipients. Specific monitoring guidance and procedures would provide staff with impetus and direction in their monitoring roles and help ensure consistent monitoring efforts across locations. Moreover, written guidance would provide recipients and subrecipients with specific information on the types of business support activities allowed under each program. For example, we learned that there are HUD CDBG subrecipients who may be unaware of the requirement that businesses receiving assistance under the program must provide written statements of compliance with the nonrelocation provision. Absent such guidance and related controls, agencies have limited assurance that recipients and subrecipients—which include state and local governments as well as individual businesses—are meeting statutory and regulatory responsibilities that restrict the use of program funds to support employer relocations.
To provide greater assurance that grant recipients and subrecipients of federal economic development programs are complying with statutory restrictions against the use of program funds to support employer relocations, we recommend that the Secretaries of Labor (for the WIA Adult, Dislocated Workers, and Youth programs); Agriculture (for the EZ/EC program); and Housing and Urban Development (for the CDBG Entitlement and State programs) direct their respective offices to develop (or finalize the development of) and implement formal and structured approaches for federal reviewers to follow when monitoring for compliance with nonrelocation provisions. We provided a draft of this report to the Secretaries of the Departments of Labor, Agriculture, Housing and Urban Development, and Commerce; the Acting Commissioner of the Internal Revenue Service; and the Administrator of the Small Business Administration. We received written comments from Labor that are summarized below and are reprinted in appendix III. USDA’s Acting Assistant Deputy Administrator for Cooperative Programs provided oral comments on August 8, 2007, which are summarized below. In its written comments, Labor stated that the department concurred with our recommendation that it develop and implement formal and structured approaches for federal reviewers to follow when monitoring compliance with nonrelocation provisions. In addition, Labor stated that it agreed that such guidance and approaches will assist states in monitoring local subrecipient compliance with these provisions. Labor stated that to support efforts to monitor and ensure compliance with nonrelocation provisions, it is implementing two complementary strategies. First, Labor is developing a formal policy guidance letter that clarifies allowable and unallowable uses of WIA funds for economic-development-related activities and that will specifically address prohibitions related to the nonrelocation provision. Second, Labor said that its Core Monitoring Guide and draft Formula Grant Supplement to the guide provide federal reviewers with tools for monitoring compliance with the nonrelocation provision. Labor said the draft Formula Grant Supplement includes indicators of compliance along with each governor’s responsibility to determine which costs are allowable or unallowable under WIA, including prohibitions against using WIA Title I funds to encourage business relocation and related restrictions. Labor stated that its regional office reviewers have extensively tested the draft Formula Grant Supplement since the fall of 2006, and the supplement will enter the formal clearance process shortly. Labor said that when completed in final form, which the department expects to occur by December 31, 2007, the supplement will provide federal reviewers, as well as state review staff, with a valuable resource for assessing recipients’ compliance with the nonrelocation provision under the WIA Adult, Dislocated Worker, and Youth programs. In oral comments, USDA’s Acting Assistant Deputy Administrator for Cooperative Programs stated that USDA concurred with the report’s recommendation. The Acting Assistant Deputy Administrator also provided us with documentation showing that USDA is taking initial steps to implement the recommendation. We also received technical comments from Labor, USDA, HUD, IRS, and SBA that were incorporated into the report as appropriate. Commerce did not provide comments on the draft report. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of the report. At that time, we will provide copies of this report to the Ranking Member, Subcommittee on Interstate Commerce, Trade, and Tourism, Senate Committee on Commerce, Science, and Transportation, and interested congressional committees. We will also provide copies of this report to the Secretaries of Labor, Agriculture, Housing and Urban Development, and Commerce; the Acting Commissioner of the Internal Revenue Service; and the Administrator of the Small Business Administration. We will provide copies to others upon request. In addition, this report will be available at no charge on our home page at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To identify large federal economic development programs, we conducted a search of the Catalog of Federal Domestic Assistance (CFDA) database (using key word searches of "jobs" and "economic development") and focused on those programs that can be used to provide assistance to businesses and that CFDA reported as having obligations of at least $500 million for fiscal year 2006. In a prior report, we found inconsistencies in how agencies reported budget data for CFDA, resulting in potential over-reporting of data. However, for purposes of this report, because we are using CFDA to identify large federal economic development programs, the risk that CFDA does not cover large programs we otherwise would have selected is acceptably low. We, therefore, consider CFDA to be a sufficiently reliable source of data for use in this report. Because CFDA does not include tax expenditure programs, we searched the Congressional Research Service's (CRS) Tax Expenditure Compendium (using key word searches of "community development" and "private activity bonds") for economic development tax expenditure programs that support businesses and that CRS reported as having estimated tax revenue losses of at least $500 million in fiscal year 2006. We also confirmed these budget figures with agency officials. We excluded programs that are only available under specific circumstances or are not available nationwide, such as regional economic development programs or those that are only available under disaster assistance designations. In addition to these database searches, we reviewed each of the 50 states' economic development Web sites to identify the federal programs that states marketed as incentives or financial assistance for businesses. While this search did not provide us with a comprehensive list of federal programs used as business incentives, it provided us with additional information on how the programs we identified through CFDA and the CRS compendium might be used as incentives. To identify large federal programs currently or formerly subject to restrictions against use for relocating jobs among U.S. communities, we reviewed laws and regulations. Our review included the use of electronic databases.
We identified relevant nonrelocation provisions for four federal agencies—the Departments of Housing and Urban Development (HUD), Agriculture (USDA), Labor, and the Small Business Administration (SBA)—and a former provision for one federal agency—the Department of Commerce's Economic Development Administration (EDA). To assess the completeness of our search results, we interviewed representatives of select federal agencies as well as representatives of economic development trade associations and policy groups. To identify congressional purpose in adopting or rescinding restrictions, we reviewed implementing laws and their legislative histories, including congressional reports and the Congressional Record.

To assess federal agency procedures to help ensure compliance with nonrelocation provisions, we requested, obtained, and analyzed the following information from HUD, Labor, USDA, and SBA: policies and procedures designed to ensure compliance with the provisions; data on the number of complaints received regarding the provisions; data on the number of violations identified; and information about any enforcement actions taken, as well as the status of those actions. We also conducted a limited test of agency procedures by reviewing a small, random, but not generalizable, sample of case file documentation for each of the programs (generally 10 files for each program). These documents included the mechanisms agencies have developed to screen for compliance with nonrelocation provisions, including an eligibility checklist (SBA's 504 program); self-certification forms (USDA and HUD's Empowerment Zone/Enterprise Community programs); business statements of compliance as a condition of receiving assistance (Labor's Adult, Dislocated Workers, and Youth programs under the Workforce Investment Act); and third-party verification of data that applicants self-report (USDA's Business and Industry Guaranteed Loan program). Further, we reviewed monitoring guidance and exhibits for each program having such guidance; completed monitoring reports; publications on effective internal control and grant management practices; and recently issued reports we completed on the programs. To supplement our document reviews and testing procedures, we conducted interviews with officials at each agency.

The scope of our work in this area focused mainly on whether the agencies had screening and monitoring procedures. We did not test the effectiveness of the implementation of these procedures. Furthermore, we did not conduct an overall evaluation of the programs, evaluate how well the programs served their intended purposes, or evaluate how nonrelocation provisions affect the relative success of the programs in achieving their intended purposes. We also did not address the impact these programs had on development efforts by state and local governments. We conducted our work from October 2006 through August 2007 in Washington, D.C., and San Francisco and Fresno, California, in accordance with generally accepted government auditing standards.

The following is a description of the nine large federal economic development programs that we identified as having statutory restrictions against using program funds to relocate businesses and jobs. Seven are grant programs in which a federal agency provides funds to recipients (generally a state or local government) that, in turn, may provide funds to a subrecipient (such as a nonprofit entity or for-profit business) to facilitate economic development activities.
They are the Department of Housing and Urban Development's (HUD) Community Development Block Grant (CDBG) Entitlement and State programs; HUD and the U.S. Department of Agriculture's (USDA) Empowerment Zone/Enterprise Community (EZ/EC) programs (urban and rural, respectively); and the Department of Labor's (Labor) three Workforce Investment Act (WIA) programs—Adult, Dislocated Workers, and Youth. The two remaining programs—USDA's Business and Industry (B&I) program and SBA's 504 program—are loan guarantee programs in which federal agencies guarantee loans that third-party lenders and nonprofit development corporations make.

HUD's CDBG program provides communities with grants for activities that will benefit low- and moderate-income people, prevent or eliminate slums or blight, or meet urgent community development needs. The Entitlement program provides grants to qualifying local governments. The State program provides states with grants for distribution to smaller, nonentitlement communities. Both programs fund a wide range of activities—including those that support housing, public improvements, public services, and economic development—the last of which can involve the use of funds to assist, recruit, and retain individual businesses. According to the Catalog of Federal Domestic Assistance (CFDA), fiscal year 2006 estimated budget authority was $2.6 billion for the CDBG Entitlement program and $1.1 billion for the State program. HUD's Office of Community Planning and Development (CPD) administers the CDBG program. A headquarters office sets program policy while 43 HUD field offices monitor recipients. HUD distributes funds to entitlement communities and states based on the higher yield of two formulas. See figure 1 for an overview of the funding process for economic development projects involving businesses.

Entitlement communities may carry out activities under CDBG directly, or they may award funds to subrecipients, which include, as HUD defines them for the purposes of the CDBG program, governmental agencies such as housing authorities as well as private nonprofit and a limited number of private for-profit entities. Under HUD regulations, subrecipients must enter into a signed, written agreement with entitlement communities regarding compliance with laws and regulations. States distribute their funds to nonentitlement communities for activities such as business financing. The distribution mechanisms vary by state; some states set aside a certain percentage of funds for economic development while others do not take into account the category of activity. Neither HUD nor the states distribute funds directly to citizens or private organizations. Moreover, HUD does not select the business entities that receive CDBG assistance; recipients and subrecipients make these decisions.

Businesses receive assistance through the CDBG program either from a recipient (such as an entitlement community) or from subrecipients (such as designated public agencies or nonprofit development corporations). For example, once an entitlement community or a state receives its allocation, businesses may apply for economic development funding, assuming that the recipient has elected to operate an economic development program. This assistance may take the form of loans, grants, technical assistance, or infrastructure improvements. This approach assumes that the recipient's consolidated and action plans include and authorize these types of economic development activities.
For a related GAO product on the CDBG program, see Community Development Block Grants: Program Offers Recipients Flexibility but Oversight Can Be Improved. GAO-06-732. Washington, D.C.: July 28, 2006.

HUD and USDA's EZ/EC program targets federal grants and provides tax relief to distressed communities in urban and rural areas, respectively, to help those communities overcome economic and social problems. EZs and ECs can use grant funds for a range of activities identified in strategic plans, which are developed in conjunction with community stakeholders. Strategic plans outline the urban or rural community's vision for revitalizing its distressed areas and the activities and projects planned to accomplish this task. These activities can include education, infrastructure development, workforce development, and assistance to for-profit businesses. According to CRS's Tax Expenditure Compendium, estimated revenue losses for USDA's and HUD's EZ/EC program were $1 billion combined for fiscal year 2006.

Congress authorized three rounds of EZ designations and two rounds of EC designations. HUD and USDA have primary oversight over the program, which involves reviewing strategic plans, designating communities as EZs or ECs, and evaluating the progress EZs and ECs make in implementing their strategic plans. However, two other agencies, the U.S. Department of Health and Human Services (HHS) and the Internal Revenue Service (IRS), also have had responsibility for administering the program. For the first round of the program, which began in 1993, HHS had fiscal oversight: it issued grants to states, which served as pass-through entities that in turn distributed funds to individual EZs and ECs. For the second round of the program, which began in 1998, Congress appropriated grant funds through USDA and HUD, but not through HHS. For the third round, which began in 2001, Congress appropriated grant funds for rural EZs but not for urban EZs. In addition to grants, businesses that locate in an EZ or EC can claim tax benefits, such as the Work Opportunity Tax Credit, which IRS administers. Tax benefits have been available in all three rounds of the EZ program, but not in the EC program.

As shown in figure 2, businesses can receive funds directly from the designated EZ/EC cities or from nonprofit corporations the city establishes to administer the program. For example, EZs/ECs issue requests for proposals and review applications for EZ/EC funding, including those that businesses submit. The EZs/ECs that a corporation oversees generally have a board of directors consisting of community members who review and have final approval for funded activities (with input from advisory committees). Businesses then receive funding in the form of grants, loans, and other assistance. Businesses eligible for federal, state, and local tax benefits claim these benefits directly on tax filing forms.

For related GAO products on the EZ/EC program, see Empowerment Zone and Enterprise Community Program: Improvements Occurred in Communities, but the Effect of the Program Is Unclear. GAO-06-727. Washington, D.C.: September 22, 2006; and Community Development: Federal Revitalization Programs Being Implemented, but Data on the Use of Tax Benefits Are Limited. GAO-04-306. Washington, D.C.: March 5, 2004.

The WIA Adult and Dislocated Workers programs provide a variety of services to individuals, including help with job searches, skills assessment, and occupational training.
The Adult and Dislocated Workers programs provide similar services, but differ in their eligibility requirements. The Youth program is designed to prepare high school students for employment or postsecondary education. All three programs require that states and local areas use a one-stop center approach, which consolidates 16 categories of programs under four agencies (Labor, Education, HHS, and HUD) to provide services for several employment and training programs. In addition to employee services, state and local workforce investment boards may use WIA funds from the three programs to provide services to employers, including helping employers identify and recruit job candidates. States and local boards can also offer various job training programs, such as classroom-based, on-the-job, or customized training to meet employer needs. According to CFDA, fiscal year 2006 estimated obligations for the WIA Adult, Dislocated Workers, and Youth programs were $857 million, $1.181 billion, and $926 million, respectively.

Labor oversees all three WIA programs, but states and local boards have flexibility over how they use WIA funds. WIA specifies a different funding source for each of the Act's main clients—youth, adults, and dislocated workers. Labor distributes WIA funds to states, and states distribute funds to local areas, based on specific formulas that account for unemployment (see fig. 3 below for an overview of the three WIA program funding streams). Labor allots 100 percent of the adult and youth funds and 80 percent of the dislocated worker funds to states (the Secretary of Labor sets aside 20 percent of the dislocated worker funds primarily for national emergency grants, but these funds can be used for other job training purposes). The states can then set aside up to 15 percent of the funds as discretionary funds to support state employment activities. (For the dislocated worker program, the state can set aside no more than 25 percent of the funds for rapid response activities, such as notifying workers on how to access unemployment and one-stop center benefits in the event of mass layoffs.) The remainder of the funds is distributed to local areas based on a formula. Local workforce investment boards, in turn, may provide services to businesses. Businesses are generally connected to these services through one-stop career centers.

For related GAO products on the Workforce Investment Act, see Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005; Workforce Investment Act: Substantial Funds Are Used for Training, but Little is Known Nationally About Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005; and Workforce Investment Act: Exemplary One-Stops Devised Strategies to Strengthen Services, but Challenges Remain for Reauthorization. GAO-03-884T. Washington, D.C.: June 18, 2003.

SBA's 504 loan program provides businesses with long-term, fixed-rate financing for major fixed assets, such as land, buildings, and machinery and equipment. To qualify for an SBA loan guarantee, a project must meet job creation or other community development goals, such as increasing the number of minority-owned businesses in an area. For the job creation requirement, a business must generally create or maintain one job for every $50,000 in SBA assistance. While SBA administers the 504 loan guarantee program, it relies on development companies to originate 504 loans.
SBA participates in the 504 loan program by guaranteeing loans that certified development companies (CDC) make. CDCs generally are private nonprofit corporations established to contribute to the economic development of their communities. For a typical 504 loan project, the borrower (a business) must cover at least 10 percent of a project's costs, a private third-party lender provides at least 50 percent of project costs, and a CDC provides up to 40 percent of project costs. SBA guarantees 100 percent of the CDC's portion of the loan. According to SBA, in fiscal year 2006, the agency provided 504 program guarantees totaling $5.7 billion.

USDA's B&I program seeks to improve the economic and environmental climate in rural communities by providing guarantees on loans private lenders make to borrowers that meet certain economic development criteria, such as creating employment or encouraging the development and construction of renewable energy systems. The program finances business and industry acquisition, construction, conversion, expansion, and repair in rural areas. Loan proceeds can be used to finance the purchase and development of land, supplies and materials, and start-up costs for rural businesses. USDA administers the B&I program through field offices located in each of the states. A borrower first secures a loan from a USDA-approved private third-party lender. The borrower then applies to USDA for a B&I loan guarantee. USDA evaluates the application and determines whether the borrower is eligible, the proposed loan is for an eligible purpose, there is reasonable assurance of repayment, there is sufficient collateral and equity, and the proposed loan complies with all applicable statutes and regulations. USDA notifies the lender in writing if it is unable to guarantee a loan. USDA also works with the lender to negotiate the percentage of the guarantee, but USDA can guarantee up to 80 percent of loans of $5 million or less, 70 percent of loans between $5 million and $10 million, and 60 percent of loans exceeding $10 million (these splits and tiers are illustrated in the sketch below). According to USDA, in fiscal year 2006, the B&I program guaranteed 350 loans with a face value of $766.3 million.

In addition to the above contact, Harry Medina, Assistant Director; Meghana Acharya; Dianne Blank; Bonnie Derby; Ronald Ito; Terence Lam; John McGrail; Carl Ramirez; Barbara Roesmann; Paul Schmidt; Michael Springer; and Kathryn Supinski made key contributions to this report.
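The financing split and guarantee tiers described for the two loan guarantee programs above amount to straightforward percentage arithmetic. The following is a minimal sketch, assuming hypothetical loan amounts; the function names are ours and do not correspond to any agency system:

```python
def sba_504_split(project_cost):
    """Typical 504 financing split described above: the borrower covers at
    least 10 percent of project costs, a third-party lender at least
    50 percent, and a CDC up to 40 percent; SBA guarantees 100 percent
    of the CDC portion."""
    borrower = 0.10 * project_cost
    lender = 0.50 * project_cost
    cdc = 0.40 * project_cost  # the SBA-guaranteed portion
    return borrower, lender, cdc

def bi_max_guarantee_rate(loan_amount):
    """Maximum B&I guarantee percentage by loan size, per the tiers above."""
    if loan_amount <= 5_000_000:
        return 0.80
    if loan_amount <= 10_000_000:
        return 0.70
    return 0.60

# Hypothetical examples: a $2 million 504 project and a $7 million B&I loan.
print(sba_504_split(2_000_000))          # (200000.0, 1000000.0, 800000.0)
print(bi_max_guarantee_rate(7_000_000))  # 0.7
```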
Congress imposed restrictions on some federal programs to prevent funding of business relocations. Congress expressed concerns about state and local governments using federal funds to attract jobs to one community at a loss of jobs to another and about compliance with relocation restrictions. This report (1) identifies large federal economic development programs that state and local governments can use as incentives, (2) identifies which programs contain statutory prohibitions on funding relocations, and (3) assesses whether federal agencies had established and implemented procedures to help ensure compliance with prohibitions. To address these objectives, GAO searched federal databases, reviewed relevant statutes and regulations, and conducted limited testing of agency procedures. GAO identified 17 large federal economic development programs that offer financial assistance and services that state and local governments can use as incentives to attract and retain jobs. While academic studies indicate that it is difficult to quantify the funds used as incentives, particularly given differing definitions of incentives, the use of federal funds for such purposes appears to be more limited than the use of state and local funds. Although academic studies question the overall role and significance of incentives in firms' decisions to (re)locate, researchers with whom GAO spoke noted that incentives could influence firms that already had narrowed their choices. Nine of the 17 large federal economic development programs restrict the use of program funds to support employer relocation. Seven are grant programs, and two are loan guarantee programs. In many grant programs, initial recipients of funds (states and local governments) provide funds to others (e.g., businesses) to facilitate economic development; in loan guarantee programs, third-party lenders approve businesses for eligibility to receive funds. All nine programs prohibit using federal funds to support a business relocation that causes unemployment, but the thresholds for job loss differ. For example, a single lost job would trigger the provision for six programs, but for the other three programs, the job loss threshold is higher. Federal agencies administering the nine programs with a nonrelocation provision used various procedures, including screening applicants and monitoring recipients, to help ensure compliance, but the extent to which these procedures specifically addressed nonrelocation provisions was limited. The two loan guarantee programs emphasized screening procedures to help ensure compliance, and both programs had written guidance and other mechanisms that specifically addressed nonrelocation provisions. Screening may be effective for helping to ensure compliance in loan guarantee programs because federal agencies know at the time of initial application which businesses are requesting funds and how they plan to use them. In contrast, because of the way grant programs are structured, at the time of initial application, grant applicants do not always know which businesses later will apply for or receive assistance. As a result, officials administering grant programs relied more extensively on monitoring than screening to help identify instances of potential noncompliance. Despite this greater reliance on monitoring, only one of the grant programs GAO reviewed had written monitoring guidance that specifically addressed business relocation restrictions. 
Without formal policies and procedures, federal agencies have limited assurance that grant recipients and subrecipients are complying with statutory requirements that restrict the use of program funds to support employer relocations.
The primary cause of ocean acidification is an increase in carbon dioxide in the oceans that is caused by increasing levels of carbon dioxide in the atmosphere. Human activities—including the burning of fossil fuels, cement production, deforestation, and agriculture—release carbon dioxide into the atmosphere. Since the 1700s, atmospheric carbon dioxide concentrations have risen from approximately 280 parts per million to approximately 400 parts per million (see fig. 1). As the carbon dioxide concentration in the atmosphere increases, more carbon dioxide is absorbed by the oceans, where it reacts with water to form carbonic acid, most of which separates to form a hydrogen ion and a bicarbonate ion. The resulting increase in hydrogen ion concentration is what lowers the pH of the water. Since the 1700s, the average surface ocean pH has decreased from about 8.2 to 8.1, a change representing an approximately 26 percent increase in ocean acidity. At the current rate of carbon dioxide emissions, scientists project that, by 2100, the average pH of the ocean surface will drop to between 7.9 and 7.7. Such a drop would correspond to a rise in acidity of approximately 100 percent to 200 percent over preindustrial levels. Moreover, the current rate of acidification is believed to be faster than at any point in at least the last 20 million years.

In addition to increasing acidity, the higher levels of carbon dioxide in the oceans also cause chemical reactions that reduce what is known as the "saturation state" of calcium carbonate minerals such as aragonite and calcite. As the saturation state of these carbonate minerals decreases, some marine organisms will need to use more energy to acquire the carbonate ions needed to build shells or skeletons. In addition, when the saturation state drops below 1 (i.e., undersaturated), structures such as animal shells that are made of carbonate minerals may begin to dissolve. As the oceans absorb more carbon dioxide, scientific models predict that the saturation state of many surface waters, including areas supporting rich fisheries, will continue to decline (see fig. 2).

The carbonate mineral saturation state is affected by several other factors in addition to carbon dioxide. For example, the temperature, pressure, and salinity of the ocean water at a particular location affect the saturation state. As a result, the latitude of a particular ocean location, the depth of the water there, and the extent to which it receives freshwater input from rivers all play a role in determining the saturation state. In general, saturation state is highest in warm, shallow, saline waters. Therefore, the Arctic Ocean, which has colder water and receives large amounts of fresh water from rivers and melting ice, and deepwater environments generally have naturally lower saturation states.

Biological processes also affect the pH and saturation state of the oceans. During the day, marine plants, including phytoplankton and seagrasses, use sunlight and carbon dioxide to create and store energy, a process known as photosynthesis. This activity reduces carbon dioxide levels, thus increasing the pH and carbonate mineral saturation states of the upper reaches of the ocean. Conversely, as organic (i.e., carbon-based) matter decomposes, carbon dioxide is released back into the water, thus decreasing pH. Some of the decomposing organic matter sinks into deeper water, which, over time, has contributed to deeper waters naturally being lower in pH and having lower saturation states with respect to calcium carbonate minerals.
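Because pH is logarithmic, the percent changes in acidity cited above follow directly from the pH values themselves. The following is a minimal sketch of that arithmetic in Python; the function name is ours, for illustration only:

```python
def percent_increase_in_acidity(ph_before, ph_after):
    """Percent rise in hydrogen ion concentration implied by a pH drop,
    using the relation [H+] = 10 ** (-pH)."""
    return (10 ** (ph_before - ph_after) - 1) * 100

print(round(percent_increase_in_acidity(8.2, 8.1)))  # 26: the increase since the 1700s
print(round(percent_increase_in_acidity(8.2, 7.9)))  # 100: the 2100 projection at pH 7.9
print(round(percent_increase_in_acidity(8.2, 7.7)))  # 216: the 2100 projection at pH 7.7
```

The same relationship explains why a drop of only 0.1 pH units corresponds to a roughly 26 percent rise in acidity, and why the projected drop to between 7.9 and 7.7 corresponds to roughly a doubling or tripling of hydrogen ion concentration.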
The effect of biological processes on ocean chemistry also means that pH and saturation states vary more widely in coastal and estuarine waters than in the open ocean. Coastal and estuarine areas have high concentrations of plant life and receive freshwater inputs from rivers, factors that influence ocean chemistry. The resulting variation may occur both daily, because of the effect of photosynthesis, and seasonally, due to changes in the amount of available solar energy and of river flows.

Although the primary cause of ocean acidification is the increase in global atmospheric carbon dioxide emissions, other factors also contribute to acidification, particularly in coastal and estuarine areas. In particular, increased nutrient pollution (e.g., of nitrate, phosphate, and iron) from agricultural fertilizers and from septic systems and sewage treatment plants results in higher than normal levels of biological growth, a condition known as "eutrophication." In the short run, the faster growth of plants such as phytoplankton may raise the pH of coastal and estuarine waters by consuming carbon dioxide in the water and releasing oxygen. In the long run, however, when plants die, the decomposition process releases carbon dioxide and may lower the pH, sometimes substantially. In addition, local sources of air pollution may also contribute to acidification in coastal and estuarine areas. For example, in addition to releasing carbon dioxide into the atmosphere, the burning of fossil fuels releases other gases, such as nitrogen oxides and sulfur oxides, that can be deposited on coastal and estuarine waters, or on streams and rivers flowing into coastal waters, lowering the pH.

Ocean acidification could have a variety of potentially significant effects on marine species, ecosystems, and coastal communities, according to the six summary reports we reviewed. However, the scientific understanding of these effects is still developing, and there remains uncertainty about their scope and severity. Calcifying species—that is, species that produce shells and skeletons composed of calcium carbonate minerals—are expected to be among the most vulnerable types of species to ocean acidification. Not all calcifying species, however, are expected to respond the same way to reductions in carbonate ion concentrations resulting from ocean acidification, and some are expected to be more at risk than others. For example:

Mollusks. Ocean acidification could negatively affect the survival and growth of some mollusks—including bivalve species such as oysters and mussels—by making calcification more difficult, according to the summary reports we reviewed. Among bivalve species that have displayed a negative response to ocean acidification, larval and juvenile bivalves may be more susceptible to harm than adult bivalves. For example, according to a 2013 study of Pacific oyster (Crassostrea gigas) larvae, during the period of initial shell formation in the early days of an oyster's life, oyster larvae rely primarily on the energy derived from their egg reserves to build their shells because the oysters have not yet developed their primary feeding organ. When exposed to an environment with a lower carbonate ion concentration, however, the larvae have to expend more energy to build their shells.
This increased energy expenditure during the period of initial shell formation makes it more difficult for the oysters to develop their feeding organs and reduces the probability that the oysters will survive. Figure 3 compares Pacific oyster larvae raised in acidified conditions (characterized by reduced pH levels and carbonate mineral saturation states) with those raised under more favorable ocean conditions. The figure shows that the Pacific oyster larvae raised under acidified conditions experienced impaired shell development, and their shells had various deformities compared with the larvae raised under more favorable conditions. Even in instances where oysters are able to survive their larval stage under acidified conditions, the greater energy required to produce their shells could contribute to decreased growth in later life stages. For example, a 2012 study examined the effects of ocean acidification on the Olympia oyster (Ostrea lurida) and found that larvae raised in water with a low pH (7.8) exhibited a slower shell growth rate that continued even once the oysters became juveniles.

Not all mollusk species have responded negatively to ocean acidification conditions in research experiments, according to the reports we reviewed. One of the reports noted that roughly half of the mollusk species examined have displayed no effects from ocean acidification, and that a few mollusk species have displayed positive effects. For example, one study cited in the report found that mortality rates declined for juveniles of one clam species (Ruditapes decussatus) kept in seawater with artificially lowered pH levels compared with juveniles kept in seawater at higher pH levels.

Corals. Corals were also identified in the summary reports we reviewed as being among the calcifying species most at risk for harm from ocean acidification. Ocean acidification may make it more difficult for some corals to grow their skeletons due to the reduced carbonate ion concentration and may affect coral reproduction. In addition, ocean acidification could potentially increase the rate at which coral skeletons dissolve and erode. However, not all coral species are expected to be negatively affected by ocean acidification, according to the reports we reviewed.

Fish. Ocean acidification could cause behavioral changes in some fish species by interfering with the functioning of sensory systems, such as the ability to smell. Such changes in sensory systems could, in turn, produce behavioral changes that alter predator-prey interactions, among other things. For example, one study cited in two of the summary reports we reviewed found that clownfish larvae raised in an elevated carbon dioxide environment became attracted to the scent of predators rather than avoiding this odor, whereas clownfish larvae raised under carbon dioxide conditions similar to the current environment exhibited a strong avoidance to the odor of predators.
Similarly, another study found that juvenile damselfish and cardinalfish living in ocean waters with naturally high carbon dioxide levels—caused by nearby volcanic vents that emit carbon dioxide and lower the pH of surrounding waters—were attracted to predators' odors and exhibited bolder behavior than fish from ocean waters that were unaffected by the volcanic vents. To the extent these types of behavioral changes cause species to be at a higher risk from predators, such changes could potentially lead to increased mortality rates for affected species. However, according to the reports we reviewed, the extent to which other fish species—including commercially important fish such as salmon—are vulnerable to behavioral effects caused by changes in the functioning of sensory systems due to ocean acidification is unclear.

Another way in which ocean acidification could affect marine species is by enhancing photosynthetic processes in ocean waters with elevated carbon dioxide concentrations. In particular, the possibility that some marine plant species, such as certain seagrasses, could experience increased photosynthesis and growth was highlighted by the summary reports we reviewed as a primary example of the potential for ocean acidification to benefit some species. However, this type of beneficial effect appears to be variable among photosynthetic species, and the net effect of ocean acidification on some marine photosynthetic species could be negative. For example, the reports we reviewed noted that marine macroalgae (seaweeds) are expected to display a diverse response to ocean acidification, in part because macroalgae include a mix of calcifying and noncalcifying species. Since ocean acidification may reduce calcification abilities, the growth of calcifying macroalgae may be compromised under future ocean acidification conditions even if these species experience increased photosynthesis, whereas the growth of noncalcifying macroalgae is more likely to be enhanced. For instance, in a study cited in one of the reports we reviewed, researchers examined the abundance of calcifying versus noncalcifying species of macroalgae near volcanic vents in the ocean and found that the abundance of noncalcifying macroalgae increased as the pH declined, whereas the abundance of calcifying macroalgae decreased.

Other stressors present in the marine environment—such as warming ocean temperatures, hypoxia (a condition where waters have low dissolved oxygen concentrations), and pollution—were highlighted in the summary reports we reviewed as factors that make it difficult to determine how ocean acidification will affect different species. Knowledge about the simultaneous effects of ocean acidification and these other stressors on species is incomplete, but these stressors could potentially exacerbate the effects of ocean acidification. For example, a recent study found that the combined effects of hypoxia and ocean acidification were more severe on early life stage bivalves than would be expected for either stressor on its own (Gobler et al., "Hypoxia and Acidification Have Additive and Synergistic Negative Effects on the Growth, Survival, and Metamorphosis of Early Life Stage Bivalves," PLoS ONE, vol. 9, issue 1, January 2014). In addition, one of the summary reports we reviewed stated that ocean acidification may compromise the resiliency of corals to the other threats they face, such as increased ocean temperatures, by, for example, causing some coral species to be more susceptible to coral bleaching.
Another factor that makes it difficult to determine ocean acidification's effects on different species is the potential for species to adapt to changes in ocean chemistry. In this context, adaptation refers to the ability of a species to evolve over successive generations to become better suited to its habitat, which in the case of ocean acidification would mean a reduced carbonate ion, lower pH ocean environment. If species are successful in adapting genetically to these changing conditions, it could be possible for them to avoid some of the negative effects that might otherwise occur from ocean acidification. For example, one of the summary reports we reviewed noted that it might be possible for some coral species to evolve a mechanism that would enable them to calcify at normal rates even in waters with lower carbonate ion concentrations, although this type of adaptation has not been documented in corals. Moreover, since the changes to ocean chemistry currently taking place are occurring rapidly on evolutionary timescales, the reports we reviewed stated that it is unknown whether species will be able to adapt to the expected rate and magnitude of ocean acidification's changes.

The potential for ocean acidification to alter marine food webs was identified in the reports we reviewed as one of the most significant ecosystem-level effects that could result from ocean acidification. It is difficult to predict exactly how a change to a species in one part of a food web will affect other species, but if ocean acidification were to negatively affect species at lower levels in a food web, it is possible those negative effects could lead to broader ecosystem effects involving species at higher levels. For example, pteropods—a type of small calcifying sea snail—represent an important component of some marine food webs and were highlighted as potentially being vulnerable in several of the summary reports we reviewed. Among other things, pteropods are an important food source for salmon and other animals such as seabirds and whales. Some pteropods' calcification and growth rates decline as pH levels decrease, and their shells can partially dissolve under acidified conditions (see fig. 4). For example, a recent study found a strong relationship between increased shell dissolution in one pteropod species (Limacina helicina) and reduced carbonate ion concentrations off the West Coast of the United States. The effect of ocean acidification on pteropod populations is unknown, but if pteropod populations were to decline, there could be cascading effects through the food webs of some marine ecosystems—including those supporting economically important fisheries such as salmon.

The effects of ocean acidification on marine species and ecosystems may affect the goods and services they provide, which may cause economic disruptions, according to the summary reports we reviewed. For example:

Shellfish harvest and aquaculture. Shellfish harvest is the most valuable sector of the commercial fishing industry in the United States, accounting for 53 percent of U.S. commercial fishing landings and valued at approximately $2.7 billion in 2012, according to NOAA. In addition, NOAA reported that the approximate value of shellfish produced by aquaculture in the United States in 2011 was $422.0 million. Potential declines in the health of shellfish populations from ocean acidification could negatively affect both shellfish harvest and aquaculture. Oyster aquaculture has already experienced significant disruption in the Pacific Northwest.
The Washington State Blue Ribbon Panel on Ocean Acidification reported that, between 2005 and 2009, acidified conditions killed billions of oyster larvae at two of the three primary hatcheries that provide Pacific oysters to growers in the Northwest, disrupting the industry throughout the region. A representative from one of the affected hatcheries said that the hatcheries—assisted by federal and state agency efforts to improve monitoring of ocean chemistry—have been able to change the systems that bring seawater into their hatcheries to avoid particularly low pH levels. Nonetheless, both the hatchery and agency officials we spoke with expressed concern that future ocean conditions could exceed the hatcheries' ability to adapt. Similarly, crab harvest constitutes an important component of the fishing industry in Alaska, and recent research suggests that some juvenile crab species are less able to survive in lower pH waters, which could lead to declines in an industry that, in the Bering Sea and Aleutian Islands, harvested crabs valued at approximately $245 million annually, on average, from 2008 through 2012.

Finfish harvest. As with shellfish, potential future declines in the health of finfish populations from ocean acidification could negatively affect their harvest, which NOAA reported was valued at approximately $2.4 billion in 2012. Finfish—including those that are important components of commercial marine fisheries such as salmon, pollock, and cod—may be directly affected by ocean acidification. In addition, if ocean acidification changes marine food webs or habitat—such as coral and oyster reefs—that are important to finfish reproduction, growth, and survival, the finfish industry could also be affected.

Tourism and recreation. Marine ecosystems also generate important economic and social benefits from tourism and recreation that may be at risk from ocean acidification. Recreational saltwater fishing contributed approximately $17.5 billion to the U.S. gross domestic product in 2011, including money spent on food, lodging, and transportation, according to an estimate by an industry trade association. In addition, NOAA has reported that millions of people visit coral reefs to dive, snorkel, and sightsee, contributing significantly to local economies in Florida, Hawaii, and many U.S. territories. If ocean acidification leads to declines in the health of marine ecosystems, the economic and social benefits from marine tourism and recreation could also decline.

Storm protection. Certain marine ecosystems, such as coral and oyster reefs, help protect coastal communities from flooding caused by hurricanes and other storms, thereby reducing the damage and social disruption that can accompany such storms. If ocean acidification harms such ecosystems, the storm protection benefits they provide could decline.

The expected effects of ocean acidification may disproportionately affect some regions and communities that are strongly connected economically and/or culturally to the goods and services provided by marine ecosystems. For example:

Fisheries are important in both New England and on the West Coast, but one economic study estimated that potential revenue losses due to ocean acidification would be four times higher in New England, due to the significance of the shellfish harvest to the local economy. Moreover, some communities may be disproportionately affected. For example, in recent years, New Bedford, Massachusetts, has been the U.S.
port with the highest value of fish landed, largely due to its scallop fishery. The study concluded that a decline in scallop revenue due to ocean acidification would further depress a community that has struggled economically. Similarly, shellfish aquaculture operations are major employers in Pacific and Mason counties in Washington State; consequently, if ocean acidification harms shellfish aquaculture, it could have a significant economic impact on these communities.

Harvesting fish and shellfish is an important element in many tribal communities for economic, dietary, and cultural reasons. Many tribal communities in Washington State view ocean acidification both as an economic issue and, because salmon and shellfish are important ceremonial foods, as a threat to their identity and cultural survival, according to one of the summary reports we reviewed. In some coastal communities, harvesting marine resources is not just an economic activity but "a way of life." For example, commercial fishing not only provides household income but also is intertwined with how families, generations, and the community as a whole interact with each other and with nature, according to one of the summary reports we reviewed.

Federal agencies have taken a variety of steps to implement FOARAM and to support the federal response to ocean acidification more broadly. However, the agencies have yet to complete other FOARAM requirements. The Subcommittee on Ocean Science and Technology implemented the FOARAM requirement to establish an interagency working group and subsequently delegated responsibility for developing the required research and monitoring plan to the working group. NOAA, the National Science Foundation, and NASA—the three agencies required by FOARAM to take specific actions related to ocean acidification outside of the working group—have also taken steps to implement those requirements. In addition, the other federal agencies that are part of the interagency working group have taken steps to support the federal response to ocean acidification (see app. I). The agencies participating in the interagency working group have estimated that from fiscal year 2010, the fiscal year after FOARAM was enacted, through fiscal year 2013, they spent approximately $88 million ($22 million annually, on average) on activities directly related to ocean acidification (see app. II).

The interagency working group on ocean acidification is composed of senior representatives from 11 federal agencies involved in responding to ocean acidification. FOARAM specified five agencies—NOAA, the National Science Foundation, NASA, U.S. Fish and Wildlife Service, and U.S. Geological Survey—as well as "other federal agencies as appropriate," to be part of the working group. In addition to the agencies specified in the act, as of August 2014, six agencies with missions that could be affected by ocean acidification—the Bureau of Ocean Energy Management, Department of Energy, Department of State, Environmental Protection Agency (EPA), U.S. Department of Agriculture, and U.S. Navy—have also joined the working group. Since January 2010, the interagency working group, which is chaired by NOAA and vice-chaired by NASA and the National Science Foundation, has met approximately quarterly.

One of the primary tasks of the interagency working group has been to guide the development of a research and monitoring plan, which FOARAM required to be developed by March 2011.
The plan, which was approved by the Office of Science and Technology Policy and released to the public in March 2014, outlines key efforts identified by the interagency working group that need to be taken over the next 10 years to advance the nation's understanding of, and ability to respond to, ocean acidification. The working group has also issued two reports, required by FOARAM, describing federal actions related to ocean acidification research and monitoring. The first report was required to be issued by March 2010 and the second two years later; the first report was issued in March 2011 and the second in 2013 (Interagency Working Group on Ocean Acidification, Initial Report on Federally Funded Ocean Acidification Research and Monitoring Activities and Progress in Developing a Strategic Plan, Washington, D.C.: March 2011; and Second Report on Federally Funded Ocean Acidification Research and Monitoring Activities and Progress on a Strategic Research Plan, Washington, D.C.: 2013).

The research and monitoring plan is organized around several themes:

Monitoring of ocean chemistry and the effects of acidification on species and ecosystems. The plan stated that monitoring of ocean chemistry and of the effects of acidification on species and ecosystems is needed to determine the magnitude and extent of acidification and to advance research. The plan's goals include evaluating existing ocean monitoring systems that could be expanded to monitor ocean acidification (e.g., by adding new sensors) and identifying regions where new monitoring systems may be warranted.

Modeling to predict changes in ocean chemistry and impacts on marine ecosystems and organisms. The plan stated that models are needed to help predict likely changes to ocean chemistry and marine ecosystems resulting from ocean acidification and to provide information that can inform resource management decisions (e.g., decisions related to managing fisheries). The plan's goals include developing and improving models that can be used to predict direct and indirect effects of ocean acidification on culturally, economically, and ecologically important species.

Technology development and standardization of measurements. The plan stated that new technologies and standardization of measurements are required to support research and monitoring. The plan's goals include (1) developing standardized methodologies for measuring how plants and animals respond to ocean acidification and (2) improving the accuracy and affordability of monitoring equipment.

Assessment of socioeconomic impacts and development of adaptation and mitigation strategies. The plan stated that better understanding the social and economic effects of ocean acidification can help inform discussions about how society can adapt to it and mitigate its causes. The plan's goals include developing models to estimate the economic effects of ocean acidification and assisting national, state, and local governments and businesses to develop adaptation plans.

Education, outreach, and engagement. The plan recognized the importance of effective outreach and education to improve awareness of the potential effects of ocean acidification and to engage stakeholders (e.g., nongovernmental organizations, fishing industry representatives, and natural resource managers) and the public in a discussion of policy options for responding to acidification. The plan's goals include engaging federal and academic partners to develop and implement outreach programs.

Data management and integration.
The plan stated that the success of the federal response to ocean acidification depends on effective data management and recognized that it is critical that data be shared and integrated across organizational boundaries and blended from diverse information systems. The plan's goals include establishing a program or office to manage ocean acidification data collection and determining how the data will be archived and accessed.

NOAA formally established an ocean acidification program, required by FOARAM, in May 2011. The program is staffed by a director and two to three other staff and is overseen by an executive board consisting of senior officials from the four NOAA offices involved in ocean acidification. Because of the number of NOAA offices involved, a primary responsibility of the ocean acidification program is to coordinate all of NOAA's actions related to ocean acidification. It also coordinates and collaborates with other agencies, stakeholders, and researchers, both within and outside of the United States. NOAA ocean acidification program officials estimated that they spent approximately $6 million annually, on average, between fiscal year 2011 and fiscal year 2013 to support the types of activities envisioned in the interagency working group's research and monitoring plan. Program officials estimated that the program has directed about 50 percent of its funds toward ocean acidification monitoring and about 20 percent toward research on species' responses to acidification, with the remainder aimed at improving scientific models of acidification, data management, and other activities. Examples of program actions include:

Improving ocean acidification monitoring capabilities. NOAA has taken steps to address FOARAM's requirement to establish a long-term ocean acidification monitoring program. For example, the agency is working with state agencies and regional stakeholders, including regional associations of the Integrated Ocean Observing System, to identify (1) locations where additional monitoring capabilities would be useful and (2) opportunities for adding ocean acidification monitoring equipment to locations where other monitoring equipment already exists. NOAA's ocean acidification program, often in conjunction with state agencies and others, has also helped fund deployment of new monitoring assets in some locations, but the director of the program told us that funding levels have hindered the agency's ability to further expand monitoring networks. NOAA has also taken steps to address FOARAM's requirement to coordinate its monitoring activities with international partners. For example, NOAA, along with international partners and others, has sponsored workshops to, among other goals, develop consensus within the international scientific community on the chemical, physical, and biological variables that an ocean acidification monitoring network should measure and on data collection protocols to ensure appropriate data quality and comparability. Workshop participants have also identified challenges hindering development of a monitoring network, including, for example, limitations in the quality of existing monitoring equipment.

Supporting research on potential socioeconomic effects. For example, the program has supported research examining the effects that declines in crab or scallop populations might have on fishing communities in Alaska or New England, respectively.

To respond to FOARAM's requirements for the agency, the National Science Foundation established ocean acidification research as a specific agency focus for fiscal years 2010 through 2014.
During these years, the agency issued four solicitations requesting that scientists submit research proposals related to ocean acidification. Overall, from fiscal year 2010 through fiscal year 2013, the National Science Foundation selected approximately 50 proposals to receive funding and directed an estimated $11 million annually, on average, to ocean acidification research. The projects selected for funding covered diverse aspects of ocean acidification, including (1) changes to ocean chemistry in a variety of locations; (2) effects on the biological, chemical, and physical processes of a variety of marine species; and (3) how the effects on species might affect different ecosystems. Funded projects included examinations of:

Changes to ocean chemistry during a previous geological period, research that could provide insights into the short- and long-term impacts of human-caused carbon dioxide emissions on surface ocean pH and carbonate chemistry.

The effects of elevated carbon dioxide levels in the ocean on a pteropod species (Limacina retroversa) in the Gulf of Maine that is preyed on by commercially important fish species.

The effects of elevated ocean carbon dioxide levels on the growth, calcification, and physiology of corals and other species inhabiting a remote coral reef in the Pacific Ocean.

The National Science Foundation has reported that its most recent solicitation, issued in fiscal year 2014, is expected to be the final one requesting research proposals specifically for ocean acidification. A senior agency official told us that the agency would continue to fund ocean acidification research in the context of its overall research program, although likely at a lower level of funding. The official also said that it is common for the agency to focus research on a specific issue for a few years and then reintegrate that issue into its overall program.

NASA maintains a system of satellites that collects data on many aspects of the Earth, including on aspects of the global carbon cycle and ocean ecology that are relevant to ocean acidification, and has made its data available to other researchers. It also has provided funding to outside researchers to study ocean acidification. Between 2007 and 2012, the agency issued approximately 10 solicitations for research and provided funding for four research projects, according to an agency official. For example, one project funded by NASA is examining the effects of ocean acidification on ocean chemistry and phytoplankton photosynthesis in the Arctic Ocean.

The agencies have yet to implement the following FOARAM requirements: (1) establish each agency's role in implementing the research and monitoring plan and outline the budget requirements for implementing the plan, (2) establish an ocean acidification information exchange, and (3) develop adaptation and mitigation strategies to conserve marine organisms and ecosystems. The agencies completed the research and monitoring plan required by FOARAM, but that plan does not include all of the required elements. Specifically: FOARAM requires the research and monitoring plan to set forth the role of each agency in implementing it, but the plan does not do so. Interagency working group officials told us they expect that additional information on the roles and responsibilities of the agencies will be provided in an implementation plan, which the working group has begun developing, according to its chair.
Our previous work on interagency collaborative efforts has found that clarifying the roles and responsibilities of the participating agencies is an important factor in the success of such efforts. Until the specific roles and responsibilities of the agencies are clarified, it will be difficult for the working group and its member agencies to make progress in implementing important actions called for in the research and monitoring plan.

FOARAM also requires that the research and monitoring plan outline the budget requirements for each agency to implement the plan's research, monitoring, and assessment activities, but the plan does not include them. According to the previous chair of the working group, a high-level estimate of the federal funding needed for each agency to implement the research and monitoring plan was developed during the early drafting of the plan, but this information was excluded from the final plan at the direction of the Office of Management and Budget. Many officials and stakeholders we interviewed said that the level of funding directed to ocean acidification to date has been insufficient given the potential scope and severity of effects expected in the future. Some of the officials expressed concern that excluding budget estimates from the research and monitoring plan has prevented the agencies and Congress from accurately understanding the funding needed to implement the plan and how it compares with current funding levels. Developing and disclosing estimates of the needed funding may be particularly important for efforts involving multiple agencies. In our previous work on interagency collaborative efforts, we reported that effective interagency collaborative efforts require, among other things, the identification of the types and level of resources needed to implement the planned activities.

FOARAM requires the Subcommittee on Ocean Science and Technology to establish or designate an ocean acidification information exchange. The subcommittee delegated this responsibility to the interagency working group on ocean acidification, the group it also tasked with developing the research and monitoring plan. Under FOARAM, the exchange is intended to allow information on ocean acidification to be stored and made available to government officials, researchers, and the public. The chair of the working group told us the group has not established a single exchange but said that information on ocean acidification, including on federal agencies' actions and research results, is available on the working group's and other federal websites. The chair also said she recognized the value of establishing a single exchange but that doing so was a lower priority than other needed actions, such as developing the research and monitoring plan. Nonetheless, it has been more than 5 years since FOARAM was enacted, and some stakeholders we interviewed said that, without a single information exchange, researchers and the public may have difficulty accessing all of the information on ocean acidification that the agencies are developing. In addition, our previous work has found that information technology, such as shared databases and web portals, can be a tool that facilitates interagency collaboration. According to NOAA officials, the National Oceanographic Data Center could serve as a building block for an ocean acidification information exchange.

The interagency working group has not developed the adaptation and mitigation strategies to conserve marine organisms and ecosystems exposed to ocean acidification that are required by FOARAM.
The research and monitoring plan developed by the interagency working group includes a high-level discussion of adaptation and mitigation, but it does not clearly describe adaptation and mitigation strategies. The chair of the working group said that the research and monitoring plan recognizes the importance of these topics but that more research on the effects of ocean acidification needs to be done before appropriate adaptation and mitigation strategies can be fully developed. For example, the research and monitoring plan states that future research on organisms' responses to ocean acidification could assist with developing adaptation strategies. Research could, for instance, identify certain genetic strains in shellfish species that may be more tolerant of acidification, which could help aquaculture operations adapt to more acidic conditions. Similarly, research on the effects of ocean acidification on species and ecosystems could assist government agencies in developing options for fishery management or assist businesses and communities in adapting to changing conditions in the future. In regard to mitigation, many officials and stakeholders we interviewed said that without timely action to mitigate its root causes, ocean acidification is likely to have significant impacts. The research and monitoring plan identified two approaches to mitigate the causes of ocean acidification: (1) reducing carbon dioxide levels in the atmosphere and (2) reducing the impact of other environmental stressors—such as nutrient runoff pollution—that can exacerbate the effects of acidification. The plan, however, did not provide a strategy for addressing these issues. Further action could be taken to advance the federal response to ocean acidification. Our previous work on interagency collaboration has found that the federal government has used a variety of mechanisms to implement collaborative efforts involving multiple agencies. These mechanisms include, among others, (1) establishing an interagency working group, (2) creating an independent interagency office with its own authority and resources, and (3) designating one or more agencies as the lead for the effort. In some cases, agencies have used more than one mechanism to implement a collaborative effort. Eleven agencies with widely varying missions are contributing to the federal response to ocean acidification. The research and monitoring plan developed by the interagency working group identified a number of goals and priorities to help guide the federal response to ocean acidification, but in many cases it is unclear which agencies will be responsible for taking action to implement them. The working group has recommended that an independent national ocean acidification program office be established to coordinate the next steps in the federal response. The National Research Council has concurred with this recommendation. Key functions envisioned for the proposed office include: facilitating coordination among federal agencies, academic researchers, and other stakeholders; developing an implementation plan that outlines the specific actions needed to achieve the goals presented in the research and monitoring plan; coordinating U.S. ocean acidification research and monitoring activities with international entities conducting similar work; establishing an ocean acidification information exchange; and developing a comprehensive ocean acidification data management plan. It is uncertain, however, when, or if, a national program office will be established. 
According to the former chair of the interagency working group, such an office has not been established because the working group has been unable to reach agreement on how it should be funded. Given the uncertainty about the proposed national program office, some officials we interviewed identified other options that could be pursued, such as designating NOAA as the lead agency to implement the next steps in the federal response and fulfill the functions the working group envisioned for a national program office. Regardless of the option chosen, until there is greater clarity on which entity is responsible for coordinating the next steps in the federal response to ocean acidification, completing important actions, such as implementing the research and monitoring plan, will be difficult. In response to FOARAM’s requirements, an interagency working group including 11 federal agencies has been established and has begun taking steps to better understand and respond to ocean acidification. One important action the working group has taken is the development of an ocean acidification research and monitoring plan, which outlines key efforts needed to advance the nation’s understanding of and ability to respond to acidification. However, federal efforts to implement FOARAM are incomplete. Because the research and monitoring plan does not establish each agency’s role or the budget needed for implementation, as required by FOARAM, it is unclear to what extent the actions outlined in the plan will be taken. In addition, research results and other information related to ocean acidification are available on various federal websites, but the information has not been consolidated into a single ocean acidification information exchange as required by FOARAM, which may make public access and scientific research more difficult. Finally, the research and monitoring plan lays out a broad scope of work, but an entity has not been designated to coordinate the plan’s implementation, or to identify and take whatever additional steps may be needed to help the nation address ocean acidification in the future. Without designating such an entity, federal agencies may struggle to advance the federal response to ocean acidification. To improve the federal response to ocean acidification, we recommend that the appropriate entities within the Executive Office of the President, including the Office of Science and Technology Policy and the National Science and Technology Council’s Subcommittee on Ocean Science and Technology, in consultation with the agencies in the interagency working group, take the following four actions: Clearly define the roles and responsibilities of each agency with regard to implementing the Strategic Plan for Federal Research and Monitoring of Ocean Acidification. Estimate the funding that would be needed to implement the Strategic Plan for Federal Research and Monitoring of Ocean Acidification. Establish an ocean acidification information exchange. Designate the entity responsible for coordinating the next steps in the federal response to ocean acidification. We provided a draft of this report for review and comment to the Executive Office of the President; Departments of Agriculture, Commerce, Defense, the Interior, and State; EPA; NASA; and National Science Foundation. None of the agencies commented on our recommendations or findings. 
NOAA, on behalf of the Department of Commerce, and the Departments of Agriculture and the Interior provided technical comments, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Executive Office of the President; the Secretaries of Agriculture, Commerce, Defense, the Interior, and State; the Administrators of EPA, NASA, and NOAA; the Directors of the Bureau of Ocean Energy Management, National Science Foundation, U.S. Fish and Wildlife Service, and U.S. Geological Survey; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The department's National Institute of Food and Agriculture chairs the Interagency Working Group on Aquaculture, which works to increase the overall effectiveness and productivity of federal aquaculture research and assistance programs. The working group, previously known as the Joint Subcommittee on Aquaculture, was created by statute and operates under the Office of Science and Technology Policy's National Science and Technology Council. U.S. Fish and Wildlife Service. The service manages marine and coastal national wildlife refuges. In addition, it is considering establishing coral reefs located in national wildlife refuges, which are often in remote areas and experience little human disturbance, as "sentinel sites" where the service can monitor the effects of ocean acidification. U.S. Geological Survey. The Geological Survey has researched ocean acidification and its effects by collecting data on ocean chemistry at different locations and studying the effects of acidification on certain species. For example, one of the areas the agency has focused on is the West Florida Shelf in the Gulf of Mexico, where it has studied spatial and temporal variations in carbon chemistry and the effects of ocean acidification on the growth of calcifying organisms. The agency, in conjunction with the U.S. Coast Guard, also monitored ocean chemistry in the Arctic Ocean from 2010 through 2012, documenting that about 20 percent of the area was undersaturated with respect to aragonite, according to an agency official. U.S. Navy. The Navy has monitored research on ocean acidification conducted by others to assess any potential implications for naval operations. One implication for naval operations described in the research and monitoring plan is the potential for ocean acidification to threaten the food supply in areas of the world that are heavily dependent on marine resources for food, which, in turn, could lead to increased political instability in those regions. The Navy has also helped fund research on the effects that ocean acidification might have on how sound travels through water, because of its potential impact on sonar systems, which are important to naval operations. Any disruption to food supply that may be caused by ocean acidification will disproportionately affect countries that are dependent on fish protein as a key element of their diet. 
The Food and Agriculture Organization of the United Nations reported in 2012 that fish accounted for 50 percent or more of animal protein consumed in some island and developing countries, whereas it accounted for only 16.6 percent of animal protein consumed globally. The agencies that were part of the interagency working group in 2013 estimated that between fiscal years 2010 and 2013 they collectively spent approximately $88 million on activities directly related to ocean acidification (see table 1). The expenditures shown for fiscal years 2010 and 2011 are estimates provided by the component agencies to the interagency working group. Expenditures shown for fiscal years 2012 and 2013 are preliminary estimates, according to the chair of the interagency working group, and were provided to us by agency officials. For all years, estimates do not include expenditures for actions that may have benefited the federal response to ocean acidification but that were not made with ocean acidification specifically in mind (e.g., research on the global carbon cycle that provides information useful to ocean acidification researchers but that was funded as part of an agency's climate change portfolio). In addition to the individual named above, Stephen D. Secrist (Assistant Director), Cheryl Arvidson, Christina Cantor, Jonathan Dent, Karen Howard, Timothy M. Persons, Anne Rhodes-Kline, Jeanette Soares, Sarah Veale, and Joshua Wiener made key contributions to this report.
Increasing carbon dioxide levels in the atmosphere and oceans are resulting in chemical changes referred to as ocean acidification. These changes may pose risks for some marine species and ecosystems, as well as for the coastal communities that rely on them for food and commerce. FOARAM requires various federal entities to take specific actions related to ocean acidification. GAO was asked to review federal efforts to address ocean acidification. This report discusses (1) the scientific understanding of the effects of ocean acidification; (2) the extent to which federal agencies have implemented FOARAM; and (3) additional actions, if any, that could be taken to advance the federal response to ocean acidification. To address these issues, GAO reviewed six summary reports on ocean acidification, other scientific studies, and agency documents, and interviewed key agency officials. Ocean acidification could have a variety of potentially significant effects on marine species, ecosystems, and coastal communities, according to six summary reports that GAO reviewed. The reports were developed by federal agencies and others and were based on extensive reviews of the scientific literature. The scientific understanding of these effects, however, is still developing, and uncertainty remains about their scope and severity. Potential effects of ocean acidification include: Reducing the ability of some marine species, such as oysters, to form shells or altering their physiology or behavior. These impacts could affect some species' growth and survival. Altering marine ecosystems, for example, by disrupting predator and prey relationships in food webs and altering habitats. Disrupting the economy or culture of some communities, for example, by harming coastal fishing and tourism industries. The National Science and Technology Council's Subcommittee on Ocean Science and Technology, in the Executive Office of the President, and several federal agencies have taken steps to implement the Federal Ocean Acidification Research and Monitoring Act of 2009 (FOARAM) but have yet to complete some of the act's requirements. For example, an interagency working group, which includes representatives from 11 agencies and is chaired by the Department of Commerce's National Oceanic and Atmospheric Administration, has been established. The working group has developed a research and monitoring plan outlining steps to advance the nation's understanding of, and ability to respond to, ocean acidification. However, the agencies involved have yet to implement several FOARAM requirements, including outlining the budget requirements for implementing the research and monitoring plan. Some agency officials told GAO that not providing budget estimates has prevented the agencies and Congress from accurately understanding the funding needed to implement the plan and how it compares with current funding levels. Further action could be taken to advance the federal response to ocean acidification. GAO's previous work on interagency collaboration has found that a variety of mechanisms can be used to implement efforts involving multiple federal agencies by helping to facilitate collaboration. One possible approach, recommended by the interagency working group, is to establish an independent national ocean acidification program office to coordinate the next steps in the federal response. The working group, however, has not established such an office because it has been unable to reach agreement on how it should be funded. 
Until greater clarity is provided on the entity responsible for coordinating the next steps in the federal response to ocean acidification, completing important actions, such as implementing the research and monitoring plan, will be difficult. GAO recommends the appropriate entities within the Executive Office of the President take steps to improve the federal response to ocean acidification, including estimating the funding that would be needed to implement the research and monitoring plan and designating the entity responsible for coordinating the next steps in the federal response. GAO provided a draft of this report for review and comment to the Executive Office of the President and the departments and agencies reviewed. None of the agencies commented on GAO's recommendations; several provided technical comments that were incorporated, as appropriate.
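A brief technical aside may help in interpreting the chemistry at issue in this report: pH is a logarithmic measure of hydrogen ion concentration, so a numerically small decline in ocean pH corresponds to a proportionally large increase in acidity. The sketch below is a minimal illustration of that relationship; the pH values are hypothetical round numbers chosen for the example, not measurements reported by the agencies.

```python
# Illustrative only: pH = -log10([H+]), so a drop in pH multiplies the
# hydrogen ion concentration by a power of ten. The 8.2 -> 8.1 decline
# used here is a hypothetical round-number example, not a figure from
# this report.

def relative_acidity_increase(ph_before: float, ph_after: float) -> float:
    """Fractional increase in hydrogen ion concentration when pH falls
    from ph_before to ph_after."""
    return 10 ** (ph_before - ph_after) - 1

increase = relative_acidity_increase(8.2, 8.1)
print(f"A pH decline from 8.2 to 8.1 raises [H+] by about {increase:.0%}")
# Prints: A pH decline from 8.2 to 8.1 raises [H+] by about 26%
```

Because the scale is logarithmic, the effect compounds: a decline of 0.3 pH units would roughly double the hydrogen ion concentration (10^0.3 ≈ 2.0).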
DOD officials have expressed concern that servicemembers are often the victims of predatory lending practices by certain types of lenders who typically lie outside the system of traditional financial institutions such as banks. These lenders offer alternative access to cash for consumers with low incomes or poor credit records, and generally do so without standard credit checks. The fees charged for these alternative loans are generally much higher than those charged by traditional financial institutions, and other terms and conditions of such loans are often unfavorable to the borrower. As a result, some federal, state, and consumer advocacy agencies have expressed concern that many of these alternative loans could include predatory practices. The most common of these loans include the following: Payday loans, according to the Federal Deposit Insurance Corporation, are small, short-term loans that borrowers promise to repay out of their next paycheck or deposit of funds. These loans typically have high fees and are often rolled over repeatedly, which can make the cost of borrowing—expressed as an annual percentage rate—extremely high (an illustrative calculation appears below). Rent-to-own loans, according to the Federal Trade Commission, provide immediate access to household goods (such as furniture and appliances) for a relatively low weekly or monthly payment, typically without any down payment or credit check. Consumers have the option of purchasing the goods by continuing to pay "rent" for a specified period of time; however, the effective cost of the goods may be two to three times the retail price. Automobile title pawns provide short-term loans to borrowers who give the lender the title to their car as collateral for the loan. Effective interest rates are generally very high. Tax refund loans provide cash loans against the borrower's expected income tax refund. Senior DOD and service officials have noted that such loans may have associated predatory lending practices, which can be detrimental to servicemembers who choose these loans as a way to overcome immediate needs for cash. The fees for loans such as payday loans provide a general indication of the loans' potential detrimental effects on servicemembers' finances. The Community Financial Services Association of America, a payday-advance trade association that says it represents more than half of the payday advance industry, developed a set of best practices for its member companies. Among other things, the association's best practices limit the number of extensions for outstanding advances. Association representatives noted that borrowers select payday loans over other alternatives for a number of reasons. For example, the representatives stated that in some instances the individual may not have the good credit history required to borrow from a bank or credit union. In other instances, an individual might use a payday loan to avoid a bounced check fee, late payment penalty, or reconnection fees associated with the late payment of a utility bill. The Congressional Research Service estimated that the number of payday loan offices nationwide increased from approximately 300 in 1992 to almost 15,000 in 2002, and the total dollar volume of payday loans in 2002 was about $25 billion. The extent to which active duty servicemembers use consumer loans considered to be predatory and the effects of such borrowing are unknown, but many sources suggest that predatory lenders may be targeting servicemembers. 
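As context for the usage data discussed next, the sketch below works through the annual percentage rate arithmetic behind the observation above that payday loan costs become extremely high when loans are rolled over. This is a minimal illustration: the fee structure of $15 per $100 borrowed for a 14-day term is a commonly cited industry example assumed here, not a figure drawn from this report.

```python
# Payday loan cost arithmetic (all figures are illustrative assumptions).
principal = 300.0   # amount borrowed, in dollars (assumed)
fee_per_100 = 15.0  # finance charge per $100 borrowed per term (assumed)
term_days = 14      # loan term in days (assumed)

fee = principal / 100 * fee_per_100          # $45.00 for one term
apr = (fee / principal) * (365 / term_days)  # fee annualized as an APR

print(f"Fee for one {term_days}-day term: ${fee:.2f}")
print(f"Cost expressed as an APR: {apr:.0%}")  # about 391%

# Each rollover charges the fee again without reducing the principal.
rollovers = 8
print(f"Total fees after {rollovers} rollovers: ${fee * (1 + rollovers):.2f}")
# $405.00 in fees on a $300.00 loan that is still owed in full
```

Under these assumptions, a borrower who rolls a $300 loan over eight times pays more in fees than the amount originally borrowed, which is why the annualized cost is so striking.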
While DOD has some data on servicemembers' use of four types of loans, DOD is unable to quantify the extent to which these types of loans have associated predatory practices, the frequency of borrowing, the amounts borrowed, or the effects of the loans. Information from our focus groups, however, provided insights into some of these issues. Although DOD is unable to quantify usage and effects, consumer advocates, state government officials, DOD officials, and servicemembers in our focus groups indicated that military personnel are being targeted by some predatory lenders. DOD does not have comprehensive data for quantifying the extent to which servicemembers use consumer loans that are considered predatory in nature and the effects of such use on servicemembers' finances or their units' readiness. The only DOD-wide statistics on servicemembers' use of loans are obtained from surveys. In the August 2004 DOD survey, 12 percent of servicemembers indicated that, during the last 12 months, they or their spouse had used at least one of the four specified types of financial loans that DOD says may have associated predatory practices. Seven percent of servicemembers indicated they (or their spouse) had obtained payday loans; 4 percent had obtained rent-to-own loans; 1 percent had obtained automobile title pawn loans; and 6 percent had obtained tax refund loans. While only 2 percent of officers had used any of the four types of loans, 14 percent of the junior and 13 percent of the senior enlisted servicemembers had used at least one such loan. Although not generalizable to all active duty servicemembers and their spouses, some of the more than 400 participants in our 60 focus groups reported encountering problems when they used the short-term consumer loans, while other servicemembers said such loans have positive elements, such as being quick, easy, and obtainable even if servicemembers had a bad credit history (see app. II for example comments). DOD's efforts to assess predatory lending are hampered by the lack of a precise definition of predatory lending—a problem shared with other organizations attempting to quantify the use and effects of predatory loans. This lack of precision is evident in DOD's acknowledgment that the four types of loans may (i.e., not always) involve predatory lending practices, while other DOD statements describe payday lending as predatory without qualification. Imprecision in the definition and the way the questions are asked on surveys can affect results. For example, the percentage of servicemembers who reported using the various types of loans may be larger than the percentage who would have said they obtained a predatory loan, had the question been oriented somewhat differently. Other important issues not addressed in the survey but needed to quantify the extent and effects of borrowing from lenders that may use predatory lending practices include questions on the frequency of use, amounts borrowed, negative and positive effects of the loans, and any problems encountered during the transactions. DOD, service, and installation officials maintained that obtaining data on the use and effects of predatory lending is also hampered by privacy considerations and the reluctance of most servicemembers to discuss their financial difficulties with their command. 
Installation officials told us that they are likely to learn about servicemembers' use of the previously cited types of loans only when a situation has become serious enough to warrant creditors contacting the command or servicemembers contacting either financial counselors or legal assistance attorneys on the installations. Because of general privacy concerns, it is unlikely that all contacts with attorneys and counselors could be captured in installation-level statistics. According to some consumer advocates, state officials, DOD officials, and military personnel, servicemembers are targeted by predatory lenders. A 2003 National Consumer Law Center report stated that junior enlisted servicemembers are targeted because they have a relatively low but secure income (with military basic pay that currently ranges from about $1,200 to $1,900 per month) and tend to be young and financially inexperienced. The report further suggested that deploying servicemembers are more vulnerable targets than their nondeploying peers because the former often must get their finances in order quickly and leave behind spouses who may not know how to manage the family's finances. The report noted several financial practices that it considered "consumer scams" aimed at servicemembers. These included payday loans, rent-to-own transactions, and automobile title pawns. Some state officials have also suggested that payday lenders—whose loans DOD considers a type of predatory lending—target servicemembers. For example, the Georgia General Assembly recently determined, as part of its new antipayday lending legislation, that despite its illegality, payday lending was growing in Georgia and having an adverse effect on servicemembers and others in the state. Similarly, the Florida governor's 2004 statement to the Senate Committee on Armed Services, Subcommittee on Personnel, noted that Florida had regulated activities of payday loan and check cashing businesses that traditionally target servicemembers. In 2004, the Under Secretary of Defense for Personnel and Readiness posted on the office's Web site an issue paper to the National Governors Association that addressed payday lending and other personnel issues. Regarding payday lending, the Under Secretary stated that "Payday lending practices have proven to be detrimental to servicemembers who have chosen these loans as a way of overcoming immediate needs for cash…Statutes that cap small loan interest rates and establish usury ceilings help protect vulnerable servicemembers from the usury nature of payday loans and their associated predatory practices." According to a 2004 Consumer Federation of America study, 15 states prohibit or limit payday lending through laws on interest rate caps for small loans, usury laws, or specific prohibitions for check cashers. We did not independently verify that these 15 states, in fact, do prohibit this activity, nor did we review laws in the other 35 states. Figure 1 shows these 15 states identified by the Consumer Federation of America, along with information on the number of active duty servicemembers on installations in each state. Even in those states that prohibit or otherwise regulate payday loans, servicemembers may be able to obtain such loans. Another Consumer Federation of America report noted that a growing number of Web sites deliver small loans, with some lenders using anonymous domain registrations or residing outside the United States. 
DOD and servicemembers are underutilizing the tools that DOD has for curbing predatory lending practices and the effects of such lending. While commanders at some installations we visited have changed the unfair practices of businesses by using recommendations from Armed Forces Disciplinary Control Boards to place or threaten to place businesses on off-limits lists to servicemembers, boards at other installations we visited rarely met or made such recommendations. Although installation newspapers appear to meet current disclaimer requirements by including a statement noting that the U.S. government does not endorse a business's products or services, the advertisements may lead to confusion for readers because the disclaimers are not prominently printed or located near the advertisement. Additionally, servicemembers typically have not made full use of free DOD-provided legal assistance before signing contracts and other financial documents, but they sometimes use the assistance after financial problems develop. Recently, DOD has sought to expand the tools available for curbing the use and effects of predatory lending practices by exploring additional on-installation alternatives to payday loans. Armed Forces Disciplinary Control Boards and the recommendations that they make to an installation commander to place businesses off-limits to servicemembers can be effective tools for avoiding or correcting unfair practices, but data gathered during some of our site visits to the various installations revealed few instances in which boards were used to address predatory lending practices. For example, at three of the installations, the board had not met for more than a year and, therefore, may not have adequately addressed whether actions were needed against businesses whose practices negatively affected servicemembers. The board at Fort Bragg, North Carolina, had not met for over a year, and the board at Fort Stewart, Georgia, had not met since 2003 because the Directors for both boards had deployed to Iraq. The board at Fort Drum, New York, had not met in about 4 years because the board's Director did not see a reason to convene. He was not aware of two recent lending-related lawsuits filed by the New York Attorney General that had connections with Fort Drum servicemembers. The Attorney General settled a lawsuit in 2004 on behalf of 177 plaintiffs—most of whom were Fort Drum servicemembers—involving a furniture store that had improperly garnished wages pursuant to unlawful agreements it had required customers to sign at the time of purchase. The Attorney General filed a lawsuit in 2004 involving catalog sales stores. He characterized the stores as payday-lending firms that charged excessive interest rates on loans disguised as payments toward catalog purchases. Some of the servicemembers and family members at Fort Drum fell prey to this practice. The Attorney General stated that he found it particularly troubling that two of the catalog stores were located near the Fort Drum gate. The Garrison Commander at Fort Drum and a representative of the board said that had they known about these cases, they would have considered placing the businesses on the off-limits list. Legal assistance attorneys at Fort Drum were, however, aware of the legal actions by the New York Attorney General. By not making full use of the boards, commanders may not be doing all they can to help servicemembers avoid businesses that employ predatory practices. 
According to officials at the installations we visited, the boards might not be used as a tool for dealing with predatory lenders for a variety of reasons. First, high deployment levels have resulted in commanders minimizing some administrative duties, such as convening the boards, in order to use their personnel for other purposes. Second, as long as the lenders operate within state laws, the boards may determine they have little basis to recommend placing or threatening to place businesses on the off-limits lists. Third, significant effort may be required to put businesses on off-limits lists. At the installations we visited, the boards' composition included representatives from functional areas like public works, family community services, legal counsel, safety, and public affairs. In contrast, businesses near two other installations we visited changed their lending practices after boards recommended that commanders place or threaten to place businesses on off-limits lists. The board for Navy Region Southwest identified actions, based on its recommendations to the commander, that were taken against businesses committing illegal acts or taking unfair advantage of servicemembers. For example, in October 2002, a company was placed off-limits because it represented itself as a government agency when arranging loan-repayment allotments with servicemembers, threatened debtors with court-martial for nonpayment, and wrote loans that had interest rates of 60 percent. Similarly, the board at Camp Lejeune, North Carolina, threatened to take action against a lender that was charging 33.1 percent interest and requiring servicemembers to waive their rights set forth in the Servicemembers Civil Relief Act. The business avoided being placed on the installation's off-limits list by terminating two employees and changing some of its business practices. In some instances, DOD is not providing a clear message about whether it endorses advertisers in official installation newspapers. Some servicemembers in our focus groups said they were confused about whether the military endorses the businesses that advertise in installation newspapers, and the confusion could lead servicemembers to use a type of business that DOD has labeled as potentially having predatory lending practices. Earlier, a 2003 Army publication stated that payday loan advertisements appear in official and unofficial military publications, and readers often incorrectly assume that military officials have approved the businesses and their claims. A DOD instruction requires installation publications to run disclaimers warning readers that advertisements do not constitute endorsement by the U.S. government. The instruction also requires public affairs staffs to oversee the appropriateness of advertisements in installation publications. Among other things, the public affairs staff is to review advertisements and identify any that may be detrimental to DOD personnel or their family members. If an advertisement is found to be detrimental, the public affairs staff is to take steps to either have the advertisement removed by the publisher or report the situation to the installation commander, who can act to preclude distribution of the publication on the installation. Servicemembers' confusion about businesses' advertisements may have multiple causes. First, readers may not see the advertising disclaimer. 
We reviewed 14 installation newspapers and found that all of them contained a disclaimer; however, we also observed that the disclaimers were typically (1) included only once in the newspaper, (2) listed with other administrative notices such as statements identifying the publisher and the availability of advertised items, and (3) located remotely from many of the advertisements. Second, advertisements for some types of businesses may run contrary to official DOD statements about the use of those businesses. Servicemembers participating in our focus groups said they were confused because DOD officials and information provided during PFM training warned against using payday lenders, but such lenders were allowed to advertise in installation newspapers. We observed two such advertisements for a payday lender during our review of the 14 installation newspapers, and PFM program managers wrote comments about this issue when responding to a GAO survey of all PFM managers. Third, there is confusion about which businesses do and do not use predatory lending practices. For example, the PFM program manager at one installation identified a particular car financing business as predatory, but the PFM program manager at another installation sometimes directs servicemembers to this same business when they have had past credit problems that limit their loan options. Fourth, legal assistance attorneys on some of the installations we visited told us that lenders and other businesses are free to advertise in the newspapers. A potential negative effect of the confusion regarding whether businesses are approved and endorsed by the installation is that servicemembers may use types of businesses that DOD policy officials have determined to be predatory, rather than seeking assistance through alternatives such as military relief/aid societies identified by the installation PFM program manager and staff. Servicemembers do not take full advantage of free DOD-provided legal assistance on contracts and other financial documents. Legal assistance attorneys at the 13 installations we visited stated that servicemembers rarely seek their assistance before entering into financial contracts for goods or services, such as car purchases or lifetime film developing. The attorneys said that servicemembers are more likely to seek their assistance after encountering problems such as the following: Used car dealers offered low interest rates for financing a vehicle, but the contract stated that the interest rate could be converted to a higher rate later if the lender did not approve the loan. Servicemembers were later called to sign a new contract with a higher rate. By that time, some servicemembers found it difficult to terminate the transaction because their trade-in vehicles had been sold. Used car dealers refused to allow servicemembers to take their contracts to a legal assistance attorney for review. In one such instance, a servicemember signed a contract to pay $30,000 for a car with a blue book value of $12,000. A company used car titles as collateral on loans and required servicemembers to provide an extra set of keys to the cars so that they could be easily repossessed if the loans were not paid. This type of transaction can result in triple-digit interest. During our interviews, legal assistance attorneys said they provide preventive briefings to incoming and deploying servicemembers to address financial issues such as car buying, payday loans, and debt management during deployment. 
In some cases, they might take actions to assist servicemembers who have financial problems. Depending on the circumstances, they may represent servicemembers in local court involving consumer cases that affect the military community. In addition, while most legal assistance attorneys do not represent servicemembers in bankruptcy cases, they may provide servicemembers with information on bankruptcy, advice about whether filing is appropriate, and a reference to an off-installation civilian attorney. Legal assistance attorneys, as well as other personnel in our interviews and focus groups, noted reasons why servicemembers might not take greater advantage of the free legal assistance before entering into business agreements. They stated that junior enlisted servicemembers who want their purchases or loans immediately may not take the time to visit the attorney's office for such a review. Additionally, the legal assistance attorneys noted that servicemembers feared information would get back to the command about their financial problems and limit their career progression. DOD, service, and installation officials are exploring on-installation alternatives to payday loans for those servicemembers with financial problems. In 2004, DOD said it surveyed approximately 150 defense credit unions and received responses from 48. Of those responding, which may not be representative of all defense credit unions due to the low response rate, 29 credit unions said that they offer an alternative to payday lending. The alternatives, which can be shared with other on-installation credit unions and banks as well as PFM program managers and command financial counselors, included (1) low-cost, short-term lines of credit; (2) short-term signature loans or small unsecured signature loans; and (3) availability of funds 2 days before the servicemember's normal pay date. Some of the PFM program managers at the 13 installations we visited had also worked with on-installation credit unions and banks to obtain payday loan alternatives, which included special loan programs or overdraft protection of up to $500 for customers with "less-than-perfect" credit histories. One credit union that we visited advertised a loan alternative called QuickCash, which had an annual percentage rate of 18 percent (a cost comparison with a typical payday loan appears in the sketch below). To use QuickCash, servicemembers were required to join the credit union, apply for the loan, and have the repayment deducted from their account the following pay period. Some of the on-installation credit unions also offer seminars and training to assist servicemembers in finding lending alternatives. Other alternatives to payday loans include pay advances and military relief/aid society grants and no-interest loans to servicemembers. Some servicemembers in our focus groups stated that they would not use these types of installation-related alternatives because the alternatives take too long; are intrusive, in that the financial institution or relief/aid society requires in-depth financial information in the loan or grant application; or may be career limiting if the command finds out the servicemembers are having financial problems. The Army Emergency Relief Society has attempted to address the time and intrusiveness concerns with its test program, Commander's Referral, for active duty soldiers lacking funds to meet monthly obligations of $500 or less. After the commander approves the loans, the servicemembers can expect to receive funds quickly. 
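For a rough sense of why an 18 percent APR product such as the QuickCash alternative described above is far cheaper than a typical payday loan, the sketch below compares finance charges on the same two-week, $500 advance. The 18 percent APR matches the QuickCash rate; the payday fee of $15 per $100 per term is the same hypothetical assumption used in the earlier sketch, not a figure from this report.

```python
# Comparing finance charges on a two-week, $500 advance (illustrative).
principal = 500.0
term_days = 14

# Credit union alternative: simple interest at an 18 percent APR,
# matching the QuickCash rate described above.
credit_union_charge = principal * 0.18 * term_days / 365

# Payday loan: assumed flat fee of $15 per $100 borrowed per term.
payday_charge = principal / 100 * 15.0

print(f"18% APR alternative: ${credit_union_charge:.2f}")  # about $3.45
print(f"Payday loan (assumed fee): ${payday_charge:.2f}")  # $75.00
```

Under these assumptions the credit union product costs about $3.45 for the two weeks versus $75.00 for the payday loan, a gap that widens with every rollover.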
Noncommissioned officers in our individual interviews and focus groups said the program still does not address servicemembers’ fears that revealing financial problems to the command can jeopardize their careers. Although we have cited examples where installation commanders changed the predatory practices of businesses by adding or threatening to add the lenders to an off-limits list, other installation commanders we visited have made only limited use of their Armed Forces Disciplinary Control Board for such purposes. The fact that some boards have not met for a year or more seems to run contrary to DOD, service, and installation efforts to curb the use and effects of predatory lending practices. As we have discussed, failure to utilize this valuable tool fully and appropriately for curbing unfair or illegal commercial or consumer practices can have negative, but difficult-to-quantify, consequences on servicemembers’ finances as well as unit morale and readiness. Furthermore, although military installations appear to be meeting current requirements regarding disclaimers for advertisements in installation publications, the location of the disclaimer has resulted in unclear messages to some servicemembers about whether inclusion of certain advertisements constitutes approval or endorsement of the business by DOD. In addition, some servicemembers have been confused when the content of some advertisements is contrary to official DOD statements regarding the use of lenders who may use predatory lending practices. This confusion is particularly problematic because it may harm DOD’s efforts to reduce the use and effects of predatory lending practices. We are making the following two recommendations to the Secretary of Defense: To improve DOD’s ability to curb the use and effects of predatory lending practices, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to amend existing regulations to require installation commanders to convene the Armed Forces Disciplinary Control Boards at least semiannually to investigate and make recommendations to commanders on matters related to eliminating conditions which adversely affect the health, safety, morals, welfare, morale, and discipline of the Armed Forces, to include servicemembers’ use of lenders who may use predatory lending practices. To ensure DOD provides servicemembers a clear message about whether it endorses advertisers in official installation newspapers that may use predatory lending practices, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Public Affairs to clarify the regulations pertaining to advertisements in installation publications by requiring disclaimers to be more prominent and taking steps to ensure advertisements reflect stated DOD policies regarding what it considers to be predatory lending. In written comments on a draft of this report, DOD concurred with our recommendation to clarify regulations pertaining to advertisements in installation publications and partially concurred with our recommendation to amend regulations to require at least semiannual meetings of the Armed Forces Disciplinary Control Boards. 
In its comments, DOD noted that it is in the initial stages of staffing and coordinating changes to the Armed Forces Disciplinary Control Boards' joint regulations and will take two actions—require boards to meet four (instead of two) times a year and direct that businesses on the off-limits list for one service be off-limits for all services. Although DOD's comments dealt primarily with the issue of payday lending, the intent of our recommendation was to address all types of consumer predatory lending encountered by servicemembers. Moreover, DOD's actions will go even further than our recommendation suggested. DOD also noted that the boards' process would be an ineffectual deterrent against payday lenders for several reasons. For example, it cited the difficulty of providing adequate oversight of all payday lending businesses and noted that installation commanders may have to develop criteria outside of state statutes for the 35 states where payday lending is legal. Our draft report had already noted that boards may have little basis for recommending or threatening to place businesses on an off-limits list when lenders operate within state laws. Our recommendation will (1) require the boards to meet regularly and (2) provide installation commanders additional focus on, and oversight of, conditions that may adversely affect servicemembers on their installations. Implementing our recommendation does not require installation commanders to monitor all payday lending businesses; instead, it is intended to provide commanders with a routine process for reviewing and taking appropriate action against those lenders that adversely affect servicemembers on the commanders' installation. DOD's comments are reprinted in appendix III. DOD also provided technical comments, which we incorporated in the final report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time we will send copies of the report to the Secretary of Defense and interested congressional committees. We will also make copies available to others upon request. This report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5559 (stewartd@gao.gov) or Jack E. Edwards at (202) 512-8246 (edwardsj@gao.gov). Other staff members who made key contributions to this report are listed in appendix IV. In addressing the objectives of this engagement on predatory lending, we limited our scope to active duty servicemembers because we have previously issued a number of reports on the compensation, benefits, and pay-related problems of reservists. While performing our work, we visited 13 installations with high deployment levels, as identified by service officials (see table 1). During these site visits to installations in the United States and Germany, special emphasis was given to ascertaining the financial conditions of junior enlisted servicemembers because the Department of Defense (DOD) and service officials have reported that this subgroup is more likely to encounter financial problems. To address servicemembers' use of consumer loans considered to be predatory in nature, we reviewed and analyzed laws, policies, and directives—such as the Servicemembers Civil Relief Act and DOD's Financial Management Regulation 7000.14R, Volume 7A. 
We also reviewed and analyzed lending-related findings and perspectives contained in publications issued by GAO, DOD, the Congressional Research Service, the Federal Reserve Bank of Philadelphia, the Federal Deposit Insurance Corporation, the Federal Trade Commission, state government officials, consumer groups (Consumer Federation of America and National Consumer Law Center), and an association that says it represents around 50 percent of payday lenders (Community Financial Services Association of America). We reviewed a 2004 Consumer Federation of America study, which cited 15 states that prohibit or limit payday lending through laws on interest rate caps for small loans, usury laws, or specific prohibitions for check cashers. We did not independently verify that these 15 states, in fact, do prohibit this activity, nor did we review laws in the other 35 states. We also contacted the Federal Trade Commission and ascertained that its Military Sentinel database has little information on servicemembers' complaints against businesses. We interviewed DOD and service policy officials, as well as representatives of consumer groups and a payday association. During our 13 site visits, we developed and used structured questionnaires for interviews with seven types of officials: installation leaders, personal financial management (PFM) program managers, command financial counselors, senior noncommissioned officers (pay grades E8 to E9), legal assistance attorneys, chaplains, and relief/aid societies. We used a structured protocol for conducting 60 focus groups with over 400 individuals who met in four homogeneous types of groups: junior enlisted servicemembers (pay grades E1 to E4), noncommissioned officers (pay grades E5 to E9), company-grade officers (pay grades O1 to O3), and spouses of servicemembers. In addition, we constructed, pretested, and administered a survey to participants in the focus groups to collect supplemental information that may have been difficult to collect in a group setting. We also obtained data from an August 2004 DOD-wide survey to assess its reliability and determine prevalence rates for four types of loans that DOD says may contain predatory practices. The August 2004 survey had a response rate of 40 percent. DOD has conducted and reported on research to assess the impact of this response rate on overall estimates. It found that, among other characteristics, junior enlisted personnel (E1 to E4), servicemembers who do not have a college degree, and members in services other than the Air Force were more likely to be nonrespondents. We have no reason to believe that potential nonresponse bias in the estimates, not otherwise accounted for by DOD's research, is substantial for the variables we studied in this report. Therefore, we concluded that the data were sufficiently reliable to address our objectives. This information was supplemented with information obtained from three group discussions with a total of 50 personnel affiliated with the PFM programs while they attended a November 2004 conference. To assess whether DOD was fully utilizing the tools that it has to curb the use and effects of predatory lending practices, we obtained information from the laws, policies, directives, and reports that were used to address servicemembers' use of loans that DOD considered to be predatory in nature. DOD and service policy officials identified DOD's primary tools for curbing the use and effects of predatory loans. 
These individuals also supplied their perspectives on how fully those tools were utilized. Similarly, the individuals and focus group participants who supplied information on servicemembers' use of consumer loans also provided their perspectives on how fully the tools were used, the effects of underutilizing the tools, and possible reasons that some tools were not used more fully. In addition, we examined official installation newspapers to determine whether they contained disclaimers and advertisements for loans that DOD officials say may contain predatory practices. This examination of newspapers was a cursory review and was not based on random sampling. Interviews with representatives of on-installation credit unions and national military relief/aid societies provided input about alternatives to payday loans. We performed our work from March 2004 through February 2005 in accordance with generally accepted government auditing standards. We held focus group sessions at the 13 military installations we visited during the course of this engagement to obtain servicemembers' perspectives on a broad range of topics, including the impact of deployment on servicemembers' finances and the types of lenders military families use, along with the personal financial management (PFM) training and assistance provided to servicemembers by the Department of Defense (DOD) and service programs (see app. I for a list of installations visited). Servicemembers who participated in the focus groups were divided into three groups: junior enlisted personnel (pay grades E1 through E4), senior enlisted personnel (pay grades E5 through E9), and junior officers (pay grades O1 through O3). Although we requested to meet with servicemembers who had returned from a deployment within the last 12 months, some servicemembers who had not yet deployed also participated in the focus groups. At some installations, we also held separate focus groups with spouses of servicemembers. Most of the focus groups consisted of 6 to 12 participants. We developed a standard protocol, with seven central questions and several follow-up questions, to assist the GAO moderator in leading the focus group discussions. The protocol was pretested during our first installation visit and, after minor changes, was used at the remaining 12 installations. During each focus group session, the GAO moderator posed questions to participants who, in turn, provided their perspectives on the topics presented. We essentially used the same questions for each focus group, with some slight variations to questions posed to the spouse groups. Questions and sample responses are listed below. We sorted the 2,090 summary statements resulting from the 60 focus groups into categories of themes through a systematic content analysis. First, our staff reviewed the responses and agreed on response categories. Then, two staff members independently placed responses into the appropriate response categories. A third staff member resolved any discrepancies. In this report, we provide focus group participants' statements for only question 5—the one that asked participants about their experiences with predatory lenders. Before the question was asked, we attempted to provide participants with a general context for answering the question by reading the following information: "Now we would like to talk about specific problems with predatory lenders. 
These include lenders that charge excessive fees and interest rates and those that lend without regard to borrowers’ ability to repay—usually lending to those with limited income or poor or no credit. Some payday lenders and fast checking places that charge high interest rates may fall into this category. Or a predatory lender could be a lender that commits outright fraud or deception—for example, falsifying documents or intentionally misinforming the borrowers about the terms of a loan, which may occur with unscrupulous car dealers.” The themes and the number of installations for which a statement about a theme was cited are provided in italics below. Also, two examples of the statements categorized in the theme are provided. Only those themes cited at a minimum of three installations are presented. The number of installations—rather than the number of statements—is provided because (1) the focus of this engagement was on DOD-wide issues and (2) a lengthy discussion in a single focus group may have generated numerous comments. 5. What kinds of experiences have your fellow servicemembers or subordinates had with predatory lenders? A. Other issue regarding experiences with predatory lenders (N = 13) Example: Businesses will tell young Marines that they can buy an item for a certain amount each month. They keep the Marine focused on the low monthly payments and not on the interest rate or the terms of the loan. Example: Some Marines feel that a business would not take advantage of them because they are in the military. This leads them to be more trusting of the local businesses than they should be, which in turn, leads the businesses to take advantage of them. B. Predatory lender used—car dealers (N = 11) Example: Most of the participants stated that the car dealerships around base were the worst predatory lenders because they charge high interest rates and often provide cars that are “lemons.” They said that most of the sales people at the dealerships are former personnel who know how to talk to servicemembers to obtain their trust. Servicemembers do not expect this. Example: One captain had a Marine in his unit who signed a contract with a car dealer for a loan with a 26 percent interest rate. The captain took the Marine to the Marine Credit Union and got him a new loan with 9.5 percent interest. C. Predatory lender used—payday lenders (N = 10) Example: A master sergeant got caught in the check-cashing cycle. He would write a check at one payday lender in order to cover a check written at another lender the previous week. Example: One participant shared that when he was a younger Marine he got caught up with a payday lender. The problem did not resolve itself until he deployed and was not able to go to the lender anymore. D. Reason for using predatory lender—get fast cash and no hassle (N = 10) Example: People use payday lenders because they are quick and easy. All soldiers have to do is to provide their leave and earnings statement and they get the money. Example: Most of the participants say they know people that have used a payday lender and those soldiers use them because they have bad credit and can get quick cash. E. Predatory lender targeting—close proximity and clustering around bases (N = 9) Example: It is almost impossible to be unaware of lenders and dealerships because many are clustered in close proximity to the installation. They also distribute flyers and use pervasive advertising in local and installation papers. 
Example: The stores and car lots near the installation have signs that say “E1 and up approved” or “all military approved” to get the attention of the military servicemembers.

F. Command role when contacted by creditors (N = 8)

Example: The noncommissioned officers sometimes offer to go with the junior enlisted to places like car dealers, but the young soldiers do not take them up on these offers.

Example: One participant said that debt collectors do call his house and the command. He noted that one lender called him nine times in one day and his chief petty officer eventually asked the lender to stop harassing his sailor.

G. Predatory lender targeting—advertising in installation/local newspaper (N = 7)

Example: Soldiers are being targeted by predatory lenders in a variety of ways; for example, flyers are left on parked cars at the barracks, advertising is present at installation functions, and words such as “military” are used on every piece of advertising to make the servicemember believe that the company is part of or supported by the military. The servicemember would normally trust lenders associated with the military.

Example: Most predatory lenders have signs that say “Military Approved” or commercials that say the same thing or “E1 and above approved.”

H. Reason for using predatory lender—urgent need (N = 6)

Example: Many soldiers use payday lenders because they are in a bind for money and they know these lenders can provide quick cash.

Example: Soldiers will use a payday lender because they need money for a child, the kids, the house payment, etc. In many cases, it does not matter why they need it; they just need it. So, they go where they can get cash the fastest and the easiest way possible.

I. Predatory lender used—furniture/rent-to-own (N = 6)

Example: One of the participants stated that he had obtained a loan to purchase a new washer and dryer. The loan had a 55 percent interest rate and the appliances cost a lot more than they should have.

Example: Rent-to-own businesses are widely used by soldiers. One soldier ended up paying $3,000 for an $800 washer and dryer set.

J. No problem with predatory lenders (N = 5)

Example: There have not been any problems with predatory lenders lately. The state of Florida has been using legislation to shut them down.

Example: The participants said that they had never encountered an officer who had to use payday lenders or predatory lenders. According to the participants, most of the officers’ problems come when they have a bitter divorce.

K. Reason for using predatory lender—other reasons (N = 5)

Example: One soldier stated that his credit was so bad that he had no other option but to use high interest rate lenders. He stated, “I have bad credit and I will always get bad credit.”

Example: One participant said he has several friends who use payday lenders because they are E1s or E2s and don’t make much money.

L. Predatory lender targeting—employing former military members (N = 4)

Example: The people running and working for the predatory businesses are usually former military servicemembers who use their knowledge of the system to take advantage of Marines.

Example: Many times the predatory lenders are veterans, former Marines, or retirees. Using these types of people gives the younger Marines a false sense of trust, and then the lenders will take advantage of the servicemember or stab the servicemember in the back.
M. Reason for using predatory lender—command will not know financial conditions (N = 3)

Example: When a soldier needs money, a payday loan can be used without notifying the chain of command. Any form of assistance from the Army requires a soldier to obtain approval from a dozen people before getting any money.

Example: The most significant reason that people use payday lenders is privacy. The spouses stated that to obtain assistance through the Air Force, you must use the chain of command to obtain approval. By doing so, everyone in the unit will know your business.

In addition to the individual named above, Leslie Bharadwaja, Alissa Czyz, Marion A. Gatling, Gregg Justice, III, David Mayfield, Brian Pegram, Minette Richardson, Terry Richardson, and Allen Westheimer made key contributions to this report.

Military Personnel: More DOD Actions Needed to Address Servicemembers’ Personal Financial Management Issues. GAO-05-348. Washington, D.C.: April 26, 2005.

Credit Reporting Literacy: Consumers Understood the Basics but Could Benefit from Targeted Educational Efforts. GAO-05-223. Washington, D.C.: March 16, 2005.

DOD Systems Modernization: Management of Integrated Military Human Capital Program Needs Additional Improvements. GAO-05-189. Washington, D.C.: February 11, 2005.

Highlights of a GAO Forum: The Federal Government’s Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004.

Military Personnel: DOD Needs More Data Before It Can Determine if Costly Changes to the Reserve Retirement System Are Warranted. GAO-04-1005. Washington, D.C.: September 15, 2004.

Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-911. Washington, D.C.: August 20, 2004.

Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-990T. Washington, D.C.: July 20, 2004.

Military Personnel: Survivor Benefits for Servicemembers and Federal, State, and City Government Employees. GAO-04-814. Washington, D.C.: July 15, 2004.

Military Personnel: DOD Has Not Implemented the High Deployment Allowance that Could Compensate Servicemembers Deployed Frequently for Short Periods. GAO-04-805. Washington, D.C.: June 25, 2004.

Military Personnel: Active Duty Compensation and Its Tax Treatment. GAO-04-721R. Washington, D.C.: May 7, 2004.

Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004.

Military Personnel: Bankruptcy Filings among Active Duty Service Members. GAO-04-465R. Washington, D.C.: February 27, 2004.

Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-413T. Washington, D.C.: January 28, 2004.

Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. Washington, D.C.: November 13, 2003.

Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-89. Washington, D.C.: November 13, 2003.

Military Personnel: DFAS Has Not Met All Information Technology Requirements for Its New Pay System. GAO-04-149R. Washington, D.C.: October 20, 2003.

Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003.
Military Personnel: DOD Needs to Assess Certain Factors in Determining Whether Hazardous Duty Pay Is Warranted for Duty in the Polar Regions. GAO-03-554. Washington, D.C.: April 29, 2003.

Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002.

Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002.
The Department of Defense (DOD) has expressed concerns about servicemembers' use of predatory consumer loans as well as their overall financial conditions. "Predatory lending" has no precise definition but describes cases where a lender takes unfair advantage of a borrower, sometimes through deception, fraud, or terms such as very high interest or fees. Serious financial problems can adversely affect unit morale and readiness as well as servicemembers' credit history and military career. DOD has tools such as off-limits lists to help curb the use and effects of predatory loans. GAO answered two questions: (1) To what extent do active duty servicemembers use consumer loans considered to be predatory in nature? and (2) Are DOD and active duty servicemembers fully utilizing the tools that DOD has to curb the use and effects of predatory lending practices?

The extent to which active duty servicemembers use consumer loans considered to be predatory, and the effects of that borrowing, are unknown. The only DOD-wide data come from surveys. In a 2004 survey, 12 percent of servicemembers said they or their spouse had used, during the last 12 months, at least one of four types of loans that DOD says can often be associated with predatory lending practices: payday, rent-to-own, automobile title pawn, or tax refund loans. DOD is unable to quantify the extent to which the loans have associated predatory practices, the frequency of such borrowing, the amounts borrowed, or the effects of the loans. Although their comments are not generalizable, participants in GAO's 60 focus groups at 13 bases in the United States and Germany identified problems resulting from the use of short-term consumer loans, but other participants described the loans as quick, easy, and obtainable by servicemembers with bad credit. Privacy concerns and the reluctance of servicemembers to reveal financial problems make it difficult to quantify the use and effects of predatory lending.

DOD and active duty servicemembers are not fully utilizing DOD's tools for curbing the use and effects of predatory lending practices. At some of the installations that we visited, the Armed Forces Disciplinary Control Board—a panel that can recommend to an installation commander that a business be placed off-limits to servicemembers—had not met in over a year. Fort Drum's board, for example, had not met in about 4 years, even though the New York Attorney General had filed two lending-related lawsuits against businesses on behalf of servicemembers and some of their family members at Fort Drum. DOD officials told us the reasons for boards not meeting or making recommendations include high deployment levels and the effort required to place a business on an off-limits list. Other commanders effectively changed businesses' predatory practices by using their board's recommendations to place, or threaten to place, the businesses off-limits. In addition, DOD is not always providing a clear message regarding advertising in installation publications. Participants in GAO's focus groups said they were confused because DOD-provided financial management training (described in our 2005 report, Military Personnel: More DOD Actions Needed to Address Servicemembers' Personal Financial Management Issues) warned them against using payday lenders, but some installation newspapers carried advertisements for such businesses. These problems occur even though a DOD instruction requires (1) a disclaimer indicating that the advertisement does not constitute endorsement by the U.S. 
government and (2) a review by public affairs staff to determine if the advertisement might be detrimental to servicemembers. Our review of some installation newspapers showed possible reasons for the confusion: the disclaimers were often not prominently displayed or were located away from the advertisements. DOD also offers servicemembers free legal review of contracts and other financial transactions, but servicemembers often do not use the reviews until problems arise. Recently, DOD began exploring additional on-installation alternatives to payday loans.
While income in retirement varies widely by source, Social Security benefits are the foundation of income for nearly all retiree households. In aggregate, Social Security is the largest source of retirement income for households with someone aged 65 or older, but pension income from defined benefit (DB) and defined contribution (DC) plans, private savings, and other assets such as home equity are important sources of retirement income for many. (See fig. 1.) In 2008, the most recent year for which data were available, among households with someone aged 55 to 60, the median net wealth for the middle quintile of net wealth was $339,000. The median household income for the middle net wealth quintile was about $70,000 in the preceding year, according to the Health and Retirement Study (HRS). (See app. II.) Earnings from work can be an important source of income for some households with a member aged 65 or older because, for example, a spouse younger than 65 may be working. Yet many people aged 65 or older also work. In 2010, 29.1 percent of people aged 65 to 69 worked at least part-time, and 6.9 percent of people aged 75 or older were employed.

Social Security benefits provide annually inflation-adjusted income for life and, in 2008, were on average the source of 64.8 percent of total income for recipient households with someone aged 65 or older. Under changes legislated in 1983, the retirement age for an unreduced benefit (the full retirement age) is gradually increasing from age 65, beginning with retirees born in 1938, and will reach age 67 for those born in 1960 or later. Despite these changes, the cost of Social Security benefits is projected to exceed sources of funding, and the program is projected to be unable to pay a portion of scheduled benefits by 2036. In 2010, for the first time since 1983, the Social Security trust funds began paying out more in benefits than they received through payroll tax revenue, although trust fund interest income more than covers the difference, according to the 2011 report of the Social Security trust funds’ Board of Trustees. However, changes to Social Security could eliminate or reduce the size of this projected long-term shortfall.

At retirement, DB plan participants are eligible for a specified payment for life (either immediately or deferred, and with or without benefits for a surviving spouse), but some DB plans also give participants a choice, sometimes a difficult choice, to forego a lifetime annuity and instead take a lump sum cash settlement (distribution) or roll over funds to an IRA. DC participants face a number of difficult choices regarding their account balances, such as leaving money in the plan, purchasing an annuity, or transferring or rolling over their balance into an IRA. Employers who sponsor qualified plans and enable departing participants to receive lump sum distributions must also give participants the option to have these amounts directly rolled over into an IRA or another employer’s tax-qualified plan.

Workers entering retirement today typically face greater responsibilities for managing their retirement savings than those who retired in the past. Social Security continues to provide a foundation of inflation-adjusted income for life, but fewer retirees today have defined benefit plans providing lifetime income. DC plans have become much more common, and they generally do not offer annuities, so retirees are left with increasingly important decisions about managing their retirement savings. 
Participants in DB plans also face similar decisions when the plan offers a lump sum option, including not only whether to take the annuity or lump sum, but also decisions about managing these savings if a lump sum is elected. For households with someone aged 65 or older with income from assets, such as interest and dividends, the estimated median amount of asset income for households in the third (middle) income quintile was $1,022 in 2008. For those in the highest income quintile, the median was $8,050. Financial assets provide income, but can also provide flexibility to draw down funds as needed during retirement. For workers with a self-directed lump sum or other retirement savings, the money can be taken in periodic distributions, and there are strategies to help reduce the chance that a retiree outlives his or her money. For example, retirees could draw down a portion of their balance as a form of regular income to supplement Social Security and possibly DB pension income, investing the remainder of their savings in a diversified portfolio of mutual funds containing equities and fixed income securities.

An alternative to self-managing periodic distributions from savings is to use one’s savings to purchase an immediate annuity from an insurance company that guarantees income for life. An immediate annuity can help to protect a retiree against the risk of underperforming investments, the risk of outliving one’s assets (longevity risk) and, when an inflation-adjusted annuity is purchased, the risk of inflation diminishing one’s purchasing power. Researchers have concluded that annuities have important benefits. For example, according to one association of actuaries, it is more efficient to pool the risk of outliving one’s assets than to self-insure by accumulating enough assets to provide income in case one lives to a very old age. Annuities provide income at a rate that can help retirees avoid overspending their assets and provide a floor of guaranteed income that keeps retirees from unnecessarily spending too little for fear of outliving their assets, according to one association. Annuities can also relieve retirees of some of the burden of managing their investments at older ages, when their capacity to do so may diminish, which may also make them susceptible to fraudulent sales. On the other hand, annuities may be inappropriate or expensive for people who have predictably shorter-than-normal life expectancies. Likewise, funds used to purchase immediate annuities are no longer available to cover large unplanned expenses. Also, immediate annuities that provide for bequests have higher costs.

There is little consensus about how much income constitutes “enough” retirement income. Retirement income adequacy may be defined relative to a standard of minimum needs, such as the poverty line, or to the level of spending households experienced during working years. Some economists and financial advisors consider retirement income adequate if the ratio of retirement income to preretirement income—called the replacement rate—is from 65 to 85 percent, although some retirees may need considerably less or more than this. Typically, however, retirees do not need to replace 100 percent of preretirement income to maintain living standards, for several reasons. For example, retirees will no longer need to save for retirement, and their payroll and income tax liability will likely fall. 
However, some researchers cite uncertainties about health and long-term care costs as reasons a higher replacement rate may be necessary. Table 1 shows replacement rates from Social Security benefits for low and high earners retiring in 2011, as well as the remaining amount of preretirement income from other sources necessary to achieve a 75 percent replacement rate.

The Employee Retirement Income Security Act of 1974 (ERISA) is the primary statute governing private pension plans, including DB and DC plans. It seeks to protect the interests of employee benefit plan participants and their beneficiaries. Title I of ERISA, enforced by the Department of Labor (Labor), sets standards of conduct and requires accountability for the people who run or provide investment advice to plans, known as plan fiduciaries, and requires administrators to provide participants with certain disclosures, including periodic benefit statements as well as a summary plan description. Title IV of ERISA created the Pension Benefit Guaranty Corporation (PBGC) as a U.S. government corporation to provide plan termination insurance for certain DB pension plans that are unable to pay promised benefits. Under Title II of ERISA and subsequent amendments to the Internal Revenue Code (the Code), the Internal Revenue Service (IRS) generally is responsible for ensuring that plans meet certain requirements for tax qualification and for interpreting rules in Title I of ERISA regarding participation, vesting, benefit accrual, and minimum funding. Tax qualification enables employers to make tax-deductible contributions and the plan to earn interest on a tax-deferred basis. The tax advantages are intended to encourage employers to establish and maintain pension plans for their employees and advance other public policy objectives. For example, certain provisions of the Code set required minimum distributions from tax-deferred accounts, such as traditional IRAs and qualified plans, generally by April 1 in the year following the year in which the account holder reaches age 70½. These required minimum distributions help to ensure that account holders withdraw tax-deferred savings in retirement rather than accumulate savings for their estate.

Once an individual withdraws his or her funds from either a DB or DC plan, a myriad of laws and regulations typically applies, depending on the investment decisions that the individual makes with those funds. In this instance, the individual is no longer a plan participant governed by ERISA, but is now essentially a retail investor governed by the laws and regulations that are pertinent to the particular product or asset in which he or she chooses to invest and to whether or not the funds are held in an IRA. The different laws, regulations, and agencies that may come into play vary depending on the type of assets held. Various other federal and state agencies may regulate the investment or insurance products offered in pension plans or outside of plans on the retail market. For example, the Securities and Exchange Commission (SEC) regulates mutual funds, which are pooled investments in a portfolio of securities. In addition, certain types of annuities may be regulated by states, while other types may also be subject to federal securities laws and thus regulation by the SEC. For example, the SEC, among others, regulates variable annuities, including regulation of disclosure and sales practices. (See app. V on selected retirement income arrangements and products.) 
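To make the required minimum distribution calculation described above concrete, the following is a minimal Python sketch, not an official formula: it divides a hypothetical prior year-end balance by an illustrative IRS life-expectancy factor. Both numbers are assumptions for illustration (27.4 was the Uniform Lifetime Table factor for a 70-year-old under the tables in effect at the time); actual distributions depend on the current IRS tables and the account holder's circumstances.

```python
def required_minimum_distribution(prior_year_end_balance, life_expectancy_factor):
    """A year's required minimum distribution: the prior year-end account
    balance divided by the IRS life-expectancy factor for the holder's age."""
    return prior_year_end_balance / life_expectancy_factor

# Hypothetical example: a $300,000 balance and the factor of 27.4 that
# applied to a 70-year-old under the IRS Uniform Lifetime Table of the era.
print(f"${required_minimum_distribution(300_000, 27.4):,.0f}")  # about $10,949
```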
Insurance company annuities are generally regulated by state insurance departments, which set reserve requirements for the insurance companies offering annuities. More recently, states have also begun regulating sales and marketing practices and policy terms and conditions to ensure that consumers are treated fairly when they purchase insurance products and file claims. Although each state has its own insurance regulator and laws, the National Association of Insurance Commissioners (NAIC) provides a national forum for addressing and resolving major insurance issues and for allowing regulators to develop consistent policies on the regulation of insurance when consistency is deemed appropriate. State guaranty associations protect individuals with annuities up to specified limits in the event of insurer insolvency. If an insurance company becomes insolvent, guaranty associations assess solvent insurers to pay covered claims to affected policyholders. However, the associations are not state agencies, and their specified limits and the extent of coverage vary across states.

Experts we interviewed tended to recommend that retirees draw down their savings strategically and systematically and that they convert a portion of their savings into an income annuity to cover necessary expenses, or opt for the annuity provided by an employer-sponsored DB pension rather than take a lump sum. The experts also frequently recommended that retirees delay receipt of Social Security benefits until they reach at least full retirement age. However, according to the experts, the combination of these strategies depends on an individual’s household circumstances, such as the standard of living the household seeks, its financial resources, and its tolerance for risks such as investment, inflation, and longevity risk. To learn what these experts recommend, we presented them with the financial profiles of five actual near-retirement households whose data we drew from the HRS as of 2008. We randomly selected households from the lowest, middle, and highest net wealth quintiles and households with varying types of pensions. See table 2 for a summary of the experts’ recommendations for each of these households and appendix III for a more detailed description of each household’s financial characteristics.

Experts we interviewed recommend that when retirees use their savings or other assets to supplement other sources of retirement income, they draw down a portion of these reserves at a systematic rate. The drawdown rate should preserve some liquidity—immediately available funds—in case of unexpected events such as high medical costs. Such a drawdown should be part of a larger strategy that includes a certain amount of lifetime retirement income (such as Social Security, defined benefit, and annuity income). Drawdowns should be taken from assets invested in a broadly diversified portfolio composed of a moderate exposure to stocks, with the balance in bonds and cash. However, drawing down assets invested in stocks and bonds was recommended with the caveat that holding stocks and bonds leaves households exposed to the uncertainty in financial markets over an unknown number of retirement years.

The systematic drawdown of financial assets can be based on a “smooth” and sustainable level of income throughout retirement or on a retiree’s remaining life expectancy. The smooth drawdown approach takes annual withdrawals based on assumptions about one’s life expectancy and future investment return. 
According to the Congressional Research Service (CRS), an approach based on a retiree’s remaining life expectancy could involve withdrawing amounts each year based on the retiree’s remaining life expectancy in the year the withdrawal occurs. One example, under the Code, would be required minimum distributions, which help to ensure that account holders withdraw tax-deferred retirement savings in retirement rather than preserve them for estate planning. The minimum distributions are calculated based partly on life expectancy.

The experts we spoke to recommended a smooth systematic drawdown from retiree investments, but their recommendations varied on the rate of drawdown, depending on retirees’ acceptance of the risk of running out of money and the experts’ own assumptions about future investment returns. For example, those we spoke to recommended annual withdrawals of 3 to 6 percent of the value of the investments in the first year of retirement, with adjustments for inflation in subsequent years. These rates generally comport with CRS estimates for assuring a lifelong source of income. Using historical rates of investment return on a limited selection of stocks and bonds, CRS estimated that a drawdown rate of 4 percent on an investment portfolio with 35 percent in U.S. stocks and 65 percent in corporate bonds would have an 89.4 percent likelihood of lasting 35 years or more. (See additional probabilities from the CRS estimates in table 3.) Importantly, drawdown rates identified by CRS are based on historical rates of return, and there is no assurance that future investment returns will match historical returns.

According to the experts we spoke to and literature we reviewed, another factor that can affect the success of drawdown strategies is the sequence of investment returns: if the drawdowns begin after the value of the investments has declined, the income drawn would deplete a greater proportion of the investments than if growth had occurred before the income were drawn. If, for example, annual investment returns on retirement savings are up 7 percent in the first year, then down 13 percent in the following year, and then up 27 percent, with subsequent returns throughout retirement a repetition of the first 3 years, the average return would be 7 percent. If the sequence of returns in the second and third year were reversed, holding all else constant, the average annual return would be the same; yet if withdrawals are made each year, savings would be depleted sooner with the first sequence of returns (see fig. 2).

Experts we spoke to generally recommended lifetime retirement income from DB plans, when DB plans are available to workers, and income annuities, in conjunction with systematic drawdown of other savings, to provide a greater level of retirement income security. Furthermore, they frequently recommended that retirees delay Social Security to boost inflation-adjusted lifetime retirement income. When DB plans offer the choice of a lump sum in place of lifetime retirement income, the experts we spoke with generally recommended that retirees take the lifetime retirement income because it would reduce their exposure to investment and longevity risks. However, private sector DB plans do not typically provide inflation protection. Without inflation protection, the value of the income may be greatly diminished over a long retirement. For example, income of $1,000 per month in 1980 would have purchasing power closer to $385 a month about 30 years later, in 2009. 
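The sequence-of-returns effect illustrated in figure 2 and the inflation erosion in the example just above can both be checked with a minimal Python sketch. The return sequences and the $1,000-per-month figure come from the text; the $500,000 starting balance, $40,000 annual withdrawal, and 3.23 percent average inflation rate are hypothetical values chosen only for illustration.

```python
def years_savings_last(balance, returns_cycle, withdrawal, horizon=50):
    """Withdraw a fixed amount each year, then apply that year's return;
    report how many years the savings last."""
    for year in range(1, horizon + 1):
        balance -= withdrawal
        if balance <= 0:
            return year
        balance *= 1.0 + returns_cycle[(year - 1) % len(returns_cycle)]
    return horizon

# Both sequences average 7 percent per year; only the position of the
# down year differs, yet the first sequence exhausts the savings sooner.
print(years_savings_last(500_000, [0.07, -0.13, 0.27], withdrawal=40_000))  # 19 years
print(years_savings_last(500_000, [0.07, 0.27, -0.13], withdrawal=40_000))  # 23 years

# Inflation erosion: at roughly 3.23 percent average annual inflation,
# $1,000 of monthly income buys only about $385 worth of goods 30 years on.
print(f"${1_000 / 1.0323 ** 30:,.0f}")  # about $385
```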
When a DB income stream does not adjust with inflation, many experts recommended investing other savings in stocks and bonds, which have, on average, returned more than the rate of inflation. Nevertheless, for retirees who want guaranteed income, experts we spoke to considered lifetime retirement income from DB plans preferable to purchasing an annuity with a lump sum distribution, since DB plans may be able to provide payments at a higher rate than is available through an insurance annuity outside of the plan.

The experts we spoke with also recommended that retirees enhance their guaranteed income by purchasing an annuity with some limited portion of their savings. The income needed from an annuity depends, in part, on the amount of living expenses not covered by other sources of guaranteed income such as Social Security or a DB pension. For those who want a higher level of predictable income, an annuity can reduce the uncertainty that comes with managing a portfolio of investments and systematically drawing down income. The experts noted that retirees may have more difficulty managing a portfolio of investments as they age. With regard to our sample of near-retirement households, the experts we spoke to recommended that the middle quintile households purchase annuities with a portion of their savings, but that the lowest quintile household accumulate some precautionary cash savings before purchasing an annuity or investing in securities. Furthermore, they suggested that the two households in the highest quintile had sufficient resources to go without annuities, unless the individuals were very risk averse and felt the need for additional protection against longevity risk.

With regard to the middle quintile household without a DB plan, experts suggested that the household consider using a portion, such as half, of its $191,000 in financial assets to purchase an inflation-adjusted annuity. Based on current annuity rates, a premium valued at half of $191,000 would provide an additional $355 per month ($4,262 in the first year) until the death of the last surviving spouse, and include annual increases tied to the Consumer Price Index. Payments in the first year at this rate would total slightly more than the annual income provided by a 4 percent drawdown. By purchasing an annuity, this household would reduce its exposure to the risks inherent in a drawdown strategy—namely, the risks of longevity, inflation, and market volatility. This household would also have some liquidity by having kept half of its initial savings available to cover unexpected expenses or to leave for a bequest.

For all the advantages of annuities, however, some of the experts we spoke to noted that there is commonly a psychological hurdle involved in the difficult decision to exchange a large principal payment for an unknown number of small monthly payments. In addition, some planners tempered their recommendations for annuities, given what they viewed as the credit risk of annuity insurance companies, that is, the risk that an insurer defaults on its obligation to make annuity payments. On the other hand, an economist and an actuary we spoke to—who do not work for insurance companies—maintain that the credit risk is small relative to the risks inherent in holding stocks and bonds. Annuities also carry some disadvantages with regard to estate and tax planning. 
Regarding a retiree’s estate, annuities are typically not refundable upon death, whereas any funds remaining from a deceased retiree’s systematic drawdown strategy could be left to beneficiaries. With regard to taxes, the income from annuities purchased with nonqualified funds is taxed as ordinary income, whereas part of the investment return from a systematic drawdown strategy of nonqualified savings is often taxed at lower capital gains or dividend tax rates.

Financial experts we spoke to recommended that retirees delay their receipt of Social Security benefits in order to increase the amount they receive from this guaranteed inflation-adjusted retirement income, particularly since Social Security benefits are the foundation of income for nearly all retiree households. However, the experts cited factors to consider before choosing to delay Social Security benefits, such as one’s health and personal life expectancy and the availability of other sources of income. Under market conditions at the time of the drafting of this report, we found that by delaying Social Security benefits an individual can gain additional retirement income at a lower cost than from an immediate annuity. While individuals may choose reduced Social Security benefits at the early eligibility age of 62, the payments they will receive if they wait until full retirement age (age 66 for those born from 1943 to 1954) will be higher and will continue to increase incrementally the longer they wait, up to age 70. The total estimated amount of benefits collected is intended to be approximately actuarially equivalent whether receipt begins at age 62 or as late as age 70, but determinations of actuarial equivalence at any particular time depend on assumptions about current and projected interest and mortality rates. The amount of money that a retiree would forego by waiting to start benefits until age 66 is less than the amount needed to purchase an annuity that would provide the additional monthly income available by waiting until full retirement age. If, for example, a person could collect $12,000 per year beginning at age 62 (with yearly adjustments for inflation), that person could instead wait until age 66 and collect $16,000 per year (33 percent more, with additional adjustments for inflation from age 62 to 66) every year thereafter. By beginning to collect benefits at age 62, the person would have collected a total of $48,000 by age 66 and could then purchase an inflation-adjusted annuity to make up the difference in income. However, the cost of such an annuity for a single male would be 47.4 percent more than the $48,000 collected from age 62 through 65. (See fig. 3.)

Most of today’s retirees have taken early (and therefore reduced) Social Security benefits, though increasing numbers of people of retirement age are also working. While most with DB pensions are receiving lifetime retirement income, few have purchased annuities with DC or other assets. Retirement age investors generally have limited allocations to stocks. Though most retirees tap their financial assets gradually, some exhaust their resources, and many, particularly those in the oldest age group, live in poverty. The experts we talked with frequently recommend that retirees delay taking Social Security to increase their lifetime retirement income, but most of today’s retirees took Social Security before their full retirement age, which has committed many to substantially lower monthly benefits than if they had waited. 
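The delayed-claiming arithmetic behind figure 3 can be restated in a minimal Python sketch. The benefit amounts and the 47.4 percent cost premium come from the report's example; the derived payout rates are our own illustrative computation.

```python
benefit_at_62 = 12_000   # annual benefit if claimed at age 62 (from the example)
benefit_at_66 = 16_000   # annual benefit if claiming is delayed to age 66
years_delayed = 4

foregone = benefit_at_62 * years_delayed        # $48,000 given up by waiting
extra_income = benefit_at_66 - benefit_at_62    # $4,000 more per year, for life

# The report states a comparable inflation-adjusted annuity for a single
# male would cost 47.4 percent more than the foregone benefits.
annuity_cost = foregone * 1.474

print(f"Benefits foregone by waiting:      ${foregone:,}")
print(f"Cost of annuity with same income:  ${annuity_cost:,.0f}")
print(f"Annuity payout rate:               {extra_income / annuity_cost:.2%}")
print(f"'Payout rate' of delaying:         {extra_income / foregone:.2%}")
```

Because each dollar of benefits foregone by waiting "buys" more annual income (about 8.3 cents) than a dollar spent on the comparable annuity (about 5.7 cents), delaying is the cheaper way to obtain the same additional lifetime income.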
Among those who were eligible to take benefits within 1 month after their 62nd birthday from 1997 through 2005, 43.1 percent did so, according to Social Security administrative data compiled by the Office of the Chief Actuary. An estimated 72.8 percent took benefits before age 65, and only 14.1 percent took benefits the month they reached their full retirement age, which varied from age 65 to age 66 depending on birth year. In addition, only about 2.8 percent took benefits after their 66th birthday. By taking the benefits on or before their 63rd birthday, 49.5 percent of beneficiaries born in 1943 passed up increases of at least 25 to 33 percent in monthly inflation-adjusted benefits that would have been available had they waited until their full retirement age. (See fig. 4.) This early retirement pattern changed little over the 1997 to 2009 period, even as, under the law enacted in 1983, the Social Security full retirement age shifted by birth year from 65 to 66 for those born from 1938 to 1943. The proportion of those who took benefits the first month they were eligible declined from 47.2 percent to 39.4 percent, but the percentage of those who waited until the month they reached their respective full retirement age also decreased—from 17.4 to 13.9 percent.

While most people who are collecting Social Security retirement benefits do not work, many do continue working at an older age. As shown in figure 5, the proportion of older adults in the workforce has increased over the last several years. These increases in labor force participation may, in part, have arisen in response to changes in the Social Security law effective in 2000 that eliminated penalties for earning wages while collecting Social Security benefits after full retirement age. With these changes, more people who are eligible for or receiving benefits are working.

Experts we spoke to generally recommend taking lifetime retirement income, and most workers leaving employment with a DB pension and retiring received lifetime retirement income from their DB annuity. An estimated 67.8 percent of workers who left employment and retired with a DB pension from 2000 through 2006 commenced the DB annuity; fewer deferred benefits. (See fig. 6.) Limited data suggest that among retiring workers who indicated they had an option to take a cash settlement, IRA rollover, or an annuity, an estimated 8.6 percent took a cash settlement, and 10.3 percent rolled over funds to an IRA. (See app. IV, table 14.) As most workers leaving employment with a DB pension and retiring receive an annuity benefit, many households with retirees have some pension or annuity income (apart from Social Security). In 2008, an estimated 40.7 percent of households with a member aged 65 or older received pension or other annuity income.

The experts we spoke with recommended that retirees enhance their guaranteed income by purchasing an annuity with some limited portion of their savings, yet few workers leaving employment with DC pensions and retiring (6.1 percent) converted their funds or a portion of the money to an annuity. (See fig. 7.) An estimated 38.8 percent who reported leaving employment with a DC pension and retiring during the 2000 to 2006 period left funds in the account, and 30.3 percent rolled them over to an IRA. Fewer chose to take a withdrawal (15.8 percent). This analysis, however, only reveals the decisions that retirees made immediately or soon after leaving employment. In some cases, retirees may have purchased annuities at a later time. 
Although traditional insured life annuities provide predictable lifetime retirement income, the amounts of income they have provided retirees have been modest. The vast majority of annuity sales are sales of deferred annuities—annuities that provide purchasers investment opportunities to increase savings while deferring federal income taxes, with an option to draw a guaranteed lifetime retirement income stream at a later time. However, purchasers of these annuities typically do not convert them to an income stream. In 2009, 94.4 percent of annuity sales were deferred annuities ($225 billion of the $239 billion). In contrast, sales of traditional fixed immediate annuities purchased to provide lifetime retirement income totaled about $7.5 billion (3.1 percent of total sales). This represents a small portion of retirees’ assets (an estimated 1.5 percent of the IRA and nonpension financial assets held by those aged 66 in 2008, for example). If this amount had been used to purchase 100 percent joint and survivor immediate annuities for all those aged 66, these annuities would provide only an estimated 0.26 percent of this group’s aggregate total household income. Annuities can be purchased with either pension assets on which income taxes have been deferred (tax qualified) or with other assets. In 2009, more than half (57.9 percent) of the amount of annuities purchased came from tax-qualified sources.

Although experts we spoke to recommended a moderate exposure to stocks to support a retirement income drawdown strategy, households near retirement had a wide range of allocations to stocks (equities), according to analysis by the Employee Benefit Research Institute (EBRI). In the volatile stock market from 2005 to 2009, allocations to equities declined among older 401(k) investors (those in their 60s). While some of the decrease in allocations to equities may have resulted from the decline in stock prices relative to bond prices, some reflects investors’ decisions to reduce allocations to stocks. During 2008, for example, investors withdrew a net total of $234 billion from stock funds and added a net $28 billion to their bond fund holdings, according to the Investment Company Institute. The proportion of 401(k) investors with no allocations to equities changed little, but the proportion with allocations of 80 percent or more of their assets to equities fell from 32.6 percent to 22.3 percent. (See fig. 8.) By the end of 2009, smaller proportions of 401(k) investors in their 60s held high proportions of their balances in equities than younger investors. Although certain experts we spoke with recommended that some retirees hold between 40 and 60 percent of financial assets in stocks, about one-fifth (20.3 percent) of 401(k) investors aged 60 to 69 had such allocations, according to EBRI’s analysis. (See fig. 9.)

Although many retirees lack substantial savings, most have some savings and have typically drawn on those savings gradually, as the experts we spoke to recommend. According to Urban Institute researchers’ analysis of associations between household assets, age, and income data from HRS survey responses gathered over the 1998 to 2006 period, individuals in the highest income quintile typically accumulated wealth, at least until their eighties. Those in the middle income quintile typically started to spend down wealth at somewhat earlier ages, but, as the experts we spoke to recommended, gradually enough to likely have assets when they die. 
Those in the lowest income quintile typically have few nonannuitized assets and spend them fairly quickly. Economists’ analysis of U.S. Census survey data from 1997, 1998, 2001, 2002, 2004, and 2005 indicates a comparatively modest rate of withdrawals prior to the age at which the Code’s required minimum distribution provisions apply. Also, as a household gradually draws down and consumes the principal of its savings, its living expenses, rising with inflation, will represent an ever-larger portion of the declining principal.

Although many retirees draw on resources gradually, some older people are at risk of outliving their financial assets, particularly if a significant adverse health event occurs. Our analysis of HRS data indicates that among individuals born in 1930 or earlier who had net household financial assets of $15,000 or more in 1998, an estimated 7.3 percent of those alive in 2008 had net financial assets of $2,000 or less. Entering a nursing home is associated with substantial declines in household wealth for households with a person aged 70 or older. Although several experts we spoke to recommended it, few retirees purchase long-term care insurance to protect themselves from some of the risk that they will be impoverished by having to pay for nursing home services and certain assisted living services, as premiums can be expensive.

Apart from whether individuals outlive their assets, millions of retirees live in poverty late in life. Even with the widespread availability of Social Security, Medicare, and Medicaid benefits, in 2009 an estimated 3.4 million people aged 65 or older lived in poverty. The poverty rate for this age group (8.9 percent), however, was lower than for all U.S. residents (14.3 percent). On the other hand, poverty among women aged 75 and older is much more common than among men. During the 2005 to 2009 period, the Census Bureau estimated that 13.5 percent of women in this age group had incomes below the poverty line in the previous year, compared with 7.7 percent of men.

In the future, it is unclear to what extent similar patterns will hold for retirees. For example, investment returns may differ from historical rates of return. Also, DB plans, and the lifetime retirement income they frequently provide, were more common for current retirees than they are likely to be for future ones. The shift away from DB plans toward DC plans may mean that increased retirement savings and other options for generating retirement income from savings, such as annuities, might become more important for retirees in the future.

Multiple experts told us that retirees could increase lifetime retirement income by purchasing an annuity, but DC plans typically do not offer access to annuities, and their participants infrequently use annuities when leaving employment and retiring. The February 2010 request for information (RFI) issued by Labor and the Department of the Treasury (Treasury) asked about ways to facilitate access to lifetime retirement income products such as annuities in DC plans, and a number of policy options were proposed by respondents. (See table 4.) The policy options offered in response to the RFI came from industry, consumer, academic, and other groups. According to several respondents, revising the safe harbor provision, one of the options proposed in response to the RFI, would have the advantage of helping to ease the concerns of some sponsors of DC plans about offering an annuity as a payout choice. In turn, the availability of an annuity to plan participants could possibly increase the number of retirees who consider it as a way to withdraw pension benefits for predictable lifetime retirement income. 
Additionally, this could help participants who would otherwise purchase an annuity in the retail market on terms that might not be as favorable. For example, annuities, especially in larger plans, might be available at institutional prices and thus at lower prices than on the retail market. Annuities at group rates typically have lower prices than individual annuities. Participants might also benefit from the fact that plan fiduciaries are required to fulfill fiduciary responsibilities for the annuity selection, including the prudent selection and monitoring of products and providers offered in the plan. Individuals on their own might be less likely to be in a position, or to have the experience, to conduct as thorough and analytical a selection as the plan fiduciary, who is required to conduct a diligent analysis.

However, revising the safe harbor provision could expose participants to additional risks, including the risk that the insurance company providing annuities becomes insolvent and unable to make promised payments. Depending on the specific features of a policy change in this area, it could have the effect of lessening protections and recourse for participants, as compared to the current regulation. For example, some industry respondents proposed eliminating, modifying, or providing specific criteria for the condition in the safe harbor that requires sponsors to assess the ability of an insurance company to make all future payments under an annuity contract. Labor officials said that protecting participants against the risk of insurer insolvency is a key issue as they consider revisions to the safe harbor regulation, given that retirees may depend on annuities for decades. The insolvency of Executive Life Insurance Company in the early 1990s is a case in point. While states are generally responsible for insurance regulation, including the solvency of insurers, the degree of regulation can vary in some aspects. The protections that state guaranty associations provide policyholders also vary. For example, all state guaranty associations generally protect an annuity’s value up to at least $100,000. According to an official from the National Organization of Life and Health Insurance Guaranty Associations, as of May 2011, roughly two-thirds of the associations provide coverage of $250,000 or more, and roughly one-third have limits of at least $100,000 for annuities. Given such variation, some respondents raised the possibility of providing a federal guarantee to help states protect policyholders in cases of insurer insolvency.

Some consumer and other groups recommended requiring DC plan sponsors to offer annuities as a choice to plan participants, which would require legislative efforts to amend ERISA or the Code. This would make the availability of lifetime retirement income more widespread, although the effect such amendments might have on the rate of participants’ adoption of annuities is uncertain. Since its passage in 1974, ERISA has required DB plans to offer such a choice. Similarly, DC plans could be required to offer the choice of an annuity for income in retirement. However, even with greater access to annuities in their plans, participants frequently have foregone this opportunity, and many may continue not to use this choice for lifetime retirement income. 
From the sponsors’ perspective, such a requirement could impose greater costs and administrative burdens, and possibly increase their exposure to fiduciary liability. For example, this might involve the selection and monitoring of an annuity provider, including costs to hire any experts to assist with these decisions. As we have previously reported, sponsors may be concerned about being held liable for these decisions and paying any losses to participants in the event the annuity provider cannot meet its financial obligations. Also, the requirements for qualified joint and survivor annuities, including spousal consent to waive the qualified joint and survivor annuity, present administrative burdens and costs, according to several industry groups. A few industry or other groups noted that the administrative burdens or risk of lawsuits could even lead some employers, such as small employers, not to offer DC plans at all.

A default arrangement could increase the use of annuities without requiring an affirmative decision from participants. Certain respondents noted that, to the extent that participants are unlikely to opt out of the default annuity, use of annuities would increase. Accordingly, automatic enrollment and default investments have been adopted in some DC plans when workers save for retirement, partly to overcome such tendencies as procrastination or indecision. With the declining availability of DB plans and the lifetime retirement income they frequently provide, a default annuity in DC plans could help to promote lifetime retirement income for more participants. Other respondents or experts have noted disadvantages with default annuities, such as irreversibility or financial penalties. Unlike automatic enrollment or default investments to save for retirement, annuitization by default may not allow for a subsequent change. For some participants, default immediate life annuities may not be appropriate given their health and other circumstances. Other types of annuities, such as deferred variable annuities, provide more flexibility to reallocate investments or make withdrawals, yet surrender and other charges and fees may apply. Another disadvantage of a default annuity would be the need to set a standard level for how much of a participant’s balance to annuitize. The appropriate portion may vary among participants, given particular circumstances such as other sources of income.

Deeply deferred annuities, or “longevity insurance,” which initiate payments at an advanced age, could provide protection against longevity risk and could do so at a substantially lower price than a traditional immediate annuity. For example, according to one association, the cost of a deeply deferred annuity purchased at age 65 with payments beginning at age 85 is approximately 10 to 15 percent of the cost of an annuity providing the same amount of income that begins payments immediately. Also, longevity insurance provides income at advanced ages, when risks of poverty or outliving assets among the elderly may rise, and it sets a finite period for systematic or other withdrawals to last. While longevity insurance is available on the retail market, current provisions for required minimum distributions make it challenging to offer this product in DC plans or IRAs, according to certain industry groups. 
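The pricing relationship the association describes can be reproduced in rough terms with a minimal Python sketch. The Gompertz-style mortality rates and the 4 percent discount rate below are hypothetical assumptions chosen for illustration, not actual annuity pricing; with them, the deferred contract prices at a small fraction of the immediate one, in the neighborhood of the 10 to 15 percent cited above.

```python
def alive_probabilities(start_age=65, max_age=120):
    """Hypothetical mortality: a 1 percent death rate at age 65,
    rising 9 percent per year of age thereafter."""
    p, probs = 1.0, []
    for age in range(start_age, max_age):
        probs.append(p)  # probability of being alive at this age
        p *= 1.0 - min(1.0, 0.01 * 1.09 ** (age - 65))
    return probs

def annuity_price(first_payment_age, rate=0.04, start_age=65):
    """Present value at 65 of $1 per year for life, paid from first_payment_age."""
    v = 1.0 / (1.0 + rate)
    return sum(p * v ** t
               for t, p in enumerate(alive_probabilities(start_age))
               if start_age + t >= first_payment_age)

immediate = annuity_price(first_payment_age=65)
deferred = annuity_price(first_payment_age=85)  # "longevity insurance"
print(f"Deferred annuity costs {deferred / immediate:.0%} of the immediate annuity")
```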
Longevity insurance purchased with tax-deferred funds can pose problems for taxpayers if the insurance does not permit annuity payments to be made until a date that is substantially after minimum distributions must begin—for example, if the contract provides for no payments to be made until age 85. On the other hand, questions exist about this newer product, according to Treasury officials and certain academic experts. For example, it is unclear to what extent older people might understand and be willing to purchase deeply deferred annuities whose payments may not begin for decades, if at all. Further, a proposed exemption from minimum distributions for deeply deferred annuities would result in some foregone federal revenue, although the extent of that foregone revenue is unclear. However, the purpose of the minimum distribution provisions is to ensure that tax-deferred retirement saving is used for retirement rather than estate planning purposes. Depending on how tax expenditures are structured, they also may raise questions about fairness, such as the extent to which low- or high-income individuals would benefit from a proposed exemption.

According to several industry groups, changes in requirements for qualified joint and survivor annuities (QJSA), including the procedures to document the spouse’s consent, could lower administrative burdens and costs so that sponsors might become more willing to make annuities available. A QJSA generally guarantees payments for the life of the participant and the participant’s surviving spouse. Some plans, including DB plans, are subject to requirements to offer a QJSA as a default and to obtain spousal consent when the joint and survivor annuity is not elected. For DC plans that are subject to the requirements for some or all participants, the procedures to elect a distribution other than the QJSA include notarized or in-person consent by the spouse, which some industry groups described as burdensome. However, these procedures have helped to protect spouses of participants in decisions about lifetime retirement income. For example, in DB plans, QJSA requirements under the Retirement Equity Act of 1984 and its implementing regulations sought to ensure that spouses are aware of and consent to a pension distribution other than a joint annuity that would provide payments throughout their retirement. The QJSA procedures for DB plans do not apply uniformly to DC plans, and we have previously reported that spousal protections in DC plans already have limitations. For example, a plan participant may withdraw from or roll over an account balance without the consent of his or her spouse. Women on average continue to live longer than men and to be more vulnerable to poverty at older ages, and reducing QJSA requirements might further lessen spousal protections in DC plans as compared with DB plans.

Improving individuals’ financial literacy can be one important component in helping them manage retirement income appropriately. Financial literacy can be described as the ability to make informed judgments and to take effective actions regarding the current and future use and management of money. One way of improving consumer financial literacy is through financial education—that is, the processes whereby individuals improve their knowledge and understanding of financial products, services, and concepts. 
A wide variety of delivery mechanisms exist to provide financial education, including classroom curricula, print materials, Web sites, broadcast media, and individual counseling. As we recently testified, at the federal level, more than 20 federal agencies have programs or initiatives related to financial literacy, and these efforts are coordinated by the Financial Literacy and Education Commission (FLEC). Ensuring the financial literacy of older people has become particularly important given the transition to a financial account-based retirement system and the increasing responsibility of individuals to manage their assets in retirement.

According to many respondents as well as experts we interviewed, education aimed at helping individuals manage retirement income should cover, in particular, the financial risks faced in retirement, such as longevity risk, inflation risk, and investment risk, among others. Appropriate financial education can help prevent individuals from overestimating their expected investment returns or sustainable withdrawal rates, which might make it more difficult to maintain their lifestyle in retirement. It can also help individuals understand the various difficult choices available to mitigate these risks, as well as how to evaluate or compare those choices, such as what factors to consider. Such education can be particularly important given the complexity of annuities and other retirement investment vehicles. Besides annuities, managing a lump sum distribution and approaches that combine annuities and more liquid assets are other choices for individuals. Individuals or plan sponsors might not be aware that they can pursue combinations of income in retirement, such as annuitizing part of the pension benefit, rather than just all or none of it. Having adequate information on the variety of options available—and their corresponding advantages and disadvantages—allows individuals to tailor their decisions to their particular circumstances.

Various entities proposed policy options that seek to better inform individuals about income in retirement, and these options use different approaches, such as financial education or notices involving pensions. Multiple policy options, such as those offered in response to the RFI or in reports we reviewed, could work together to improve financial literacy on income throughout retirement. (See table 5.) Some industry groups and academic experts stated that financial education alone has its limitations and is not the only approach for improving consumers’ financial behavior. Financial education may sometimes be more useful as a complement to other tools, such as personalized investment advice or policy options like the use of defaults.

Currently, federal agencies provide some educational resources for the general public about income in retirement as part of their efforts on financial education. Certain agencies, such as the Social Security Administration (SSA) and Labor, have taken various steps, as shown in table 6. We found that few other resources on how to ensure income throughout retirement were available from the federal government. With federal financial education, much of the retirement focus has typically been on saving for retirement. Although many sources of information are available from the private sector, the federal government may be in a position to contribute to financial education on managing pension and other financial assets in retirement. 
The federal government can produce objective information and partner with organizations outside of government to deliver its materials, as we have previously reported. By leveraging partnerships with public and private sector stakeholders, the federal government may be able to reach many target audiences. These could include those without plan sponsors, such as the roughly half of the private sector workforce not participating in a pension, or those who have rolled over pension assets to an IRA. Meanwhile, certain research suggests that information from various financial service companies may raise some concerns about possible limitations or conflicts of interest. Regarding conflicts of interest, we recently reported that participants in 401(k) plans may be unaware that service providers, when furnishing education, may have undisclosed financial interests, including in investment funds in their plan or in products outside the plan purchased with rolled-over balances. Older people without pension plans, or who have withdrawn funds from their plans, may receive information on products that are not in their best interest or that are even fraudulent. On the other hand, certain educational materials from the federal government on income throughout retirement may have some limitations. For example, Labor officials told us that their educational materials on this topic may be fairly general, whereas plan sponsors may be more aware of participants' circumstances and could better tailor retirement education accordingly.

In 2003, we recommended that Congress consider amending ERISA to specifically require plan sponsors to provide participants with a notice on the risks that individuals face when managing their income and expenditures at and during retirement. The notice could be provided at certain key milestones, such as when a participant separates from service or retires. Although this policy option has not been enacted, ERISA requires sponsors of DC plans to provide participants a notice, as part of their quarterly benefit statements, about the benefits of a well-balanced and diversified portfolio as they save for retirement, including a link to a Labor Web site for further information. According to Labor and Treasury officials, plan sponsors are not required to provide a notice to participants on managing pension assets in retirement, such as the general financial risks and choices they face. Once retired or outside their plan, individuals might be more susceptible to sales of products that are not in their best interest or that even constitute fraud. Without additional information reinforced over time while participating in the plan, participants could later make decisions that fail to sustain their incomes and, as a result, potentially place a heavier burden on public need-based assistance or other resources.

Labor has provided an interpretive bulletin on participant investment education, as distinguished from investment advice, in plans, but many respondents observed that this bulletin and industry efforts generally focus on saving for retirement rather than on income throughout retirement. According to a few industry groups, greater clarity on education as distinguished from investment advice, as related to income in retirement, may allay sponsors' and service providers' fears of fiduciary liability by explaining the types of general information on income in retirement that would not be considered investment advice.
With such clarity, more sponsors and service providers may pursue voluntary efforts to educate plan participants in general on income and expenses in retirement. Sponsors, with assistance from providers, could tailor such education to their plan participants. Some plans already offer such education. However, any future guidance from Labor on investment education about income in retirement, if poorly implemented, could have potential disadvantages. For example, we recently recommended that Labor evaluate and revise its interpretive bulletin on investment education, including the extent to which it allows providers to highlight proprietary funds, which may result in greater revenue to the service provider. As Labor officials consider possible guidance on income in retirement, they said that an inappropriate balance between education and advice could result in plan participants receiving so-called "education" from service providers with conflicts of interest and not having recourse against fiduciaries. According to Labor officials, education on income throughout retirement may also involve spending plan assets, to varying extents, on choices not available in the plan, which could potentially be challenged as unreasonable expenses from plan assets under certain circumstances. Further, while guidance could encourage sponsors to voluntarily provide education, it would not require it. Some sponsors might not provide education on income throughout retirement for reasons other than fiduciary concerns, such as costs or not viewing it as their role.

Given the rise of DC plans, which provide pension benefits as an account balance, many industry, consumer, and academic groups noted that an estimate on the participant benefit statement could present, or "frame," the pension benefit as a stream of income in retirement rather than just an account balance, which could help to change how participants in DC plans perceive or ultimately withdraw their benefit at retirement. For example, the Thrift Savings Plan, a DC plan for federal workers, recently began to include such an estimate on annual statements for participants, and representatives of a service provider for other plans told us it does so on quarterly statements. In addition, including an estimate of annuity income, as the Lifetime Income Disclosure Act would require if passed, could improve retirement planning by indicating the estimated income stream available based on a worker's account balance, a calculation that certain experts we interviewed said may be difficult for participants to make on their own. As workers save for retirement, seeing an estimated monthly or annual income stream as well as an account balance could possibly help them to increase saving and understand how much they actually need to save to last throughout retirement. However, this proposed option is subject to many assumptions and complexities, and certain industry or consumer groups expressed concerns that an estimate could potentially confuse or discourage participants. Although the current account balance may be simpler to convert to an annuity estimate, a few industry groups cautioned that such an estimate of annuity income could be quite low in some cases and might even discourage saving by those with smaller balances, such as younger participants. An estimate based on a projection of the worker's future balance at retirement, however, would entail additional assumptions, such as future rates of return, and raise questions about how to account for investment risk, if at all.
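To illustrate how such an estimate depends on its assumptions, the sketch below converts an account balance into a level monthly payment using the standard annuity formula. It is a minimal illustration only, not a method prescribed by Labor, Treasury, or the proposed legislation; the interest rates and the 240-month (20-year) payout period are hypothetical values chosen for demonstration.

def monthly_income_estimate(balance, annual_rate=0.05, months=240):
    """Level monthly payment that exhausts the balance over the given
    number of months at the given annual rate, compounded monthly."""
    i = annual_rate / 12.0                          # monthly interest rate
    annuity_factor = (1 - (1 + i) ** -months) / i   # present value of $1 per month
    return balance / annuity_factor

# The same balance yields very different estimates under different assumed
# rates, which is the sensitivity discussed above.
for rate in (0.03, 0.05, 0.07):
    print(f"rate {rate:.0%}: ${monthly_income_estimate(100_000, rate):,.0f} per month")

Under these assumptions, a $100,000 balance translates into roughly $550 to $780 per month depending on the assumed rate, which is why prescribing uniform assumptions, as some industry groups suggested, would matter for comparability across statements.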
Another area of complexity is the level of uniformity or flexibility in the assumptions used. While some industry groups noted that the federal government could provide uniformity and consistency across plan sponsors by prescribing assumptions for sponsors to use, other industry groups preferred flexibility, such as tailoring estimates to a plan's actual annuity products.

Given the long-term trends of rising life expectancy and the shift from DB to DC plans, aging workers must increasingly focus not just on accumulating assets for retirement but also on how to manage those assets to have an adequate income throughout their retirement. Workers increasingly find themselves depending on retirement savings vehicles that they must self-manage: they must not only save consistently and invest prudently over their working years but also continue to make comparable decisions throughout their retirement years. Even for the minority of workers with significant retirement savings, making their savings last may prove challenging. However, for the majority of workers who approach retirement with small account balances—workers with balances of $100,000 or less—the stakes are far greater. For those with little or no pension or other financial assets, ensuring income in retirement may involve difficult choices, including how long to wait before claiming Social Security benefits in order to receive higher benefits, how long to work, and how to adjust consumption and lifestyle to lower levels of income in retirement. Social Security benefits serve as the foundation of income in retirement and a key source of lifetime retirement income, but many older people claim benefits at the earliest age and pass up the opportunity for a higher monthly benefit beginning at full retirement age or later. By claiming benefits early, whether for health or other important reasons, individuals take a smaller benefit when they could potentially work longer and receive a higher monthly benefit. Although retirement savings may be larger in the future as more workers have opportunities to save over longer periods through strategies such as automatic enrollment in DC plans, many will likely continue to face little margin for error. Poor or imprudent investment decisions may mean the difference between a secure retirement and poverty.

Even for the half of the workforce participating in pension plans, employers as plan sponsors are currently not required to provide notices on the financial risks and choices that participants face in retirement. In our 2003 report, we included a Matter for Congressional Consideration to require sponsors to provide a notice to plan participants on risks in retirement. With the ongoing shift in pension plans and the transition from lifetime retirement income toward account balances, we believe that this continues to be important. Absent such a requirement, many more workers may face key retirement decisions without sufficient knowledge to decide which choices are in their best interest. Without objective information from employers and the federal government, even those retirees who have adequate savings may be at risk of not having sufficient retirement income. For those in the already large segment of the population depending on limited retirement savings, making prudent choices is especially important and difficult.
We provided officials from the Department of the Treasury, IRS, Department of Labor, SEC, and the National Association of Insurance Commissioners with a draft of this report. The Department of the Treasury provided comments indicating that the report is a helpful addition to the dialogue and analysis regarding the topic. See appendix VI. Officials from the Department of the Treasury, IRS, Department of Labor, SEC, and the National Association of Insurance Commissioners provided technical comments that we incorporated in the report, where appropriate. We also provided a copy of the draft to officials from SSA for a technical review, and they also provided technical comments that we incorporated where appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of the Treasury, Commissioner of Internal Revenue, Secretary of Labor, Chairman of the Securities and Exchange Commission, Chief Executive Officer of the National Association of Insurance Commissioners, Commissioner of the Social Security Administration, and other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

To identify the strategies experts recommend retirees employ to ensure income throughout retirement, we interviewed a judgmental sample of financial planners and other financial experts from different academic and industry organizations and a retiree interest group, drawn from different geographic areas of the country. As part of these interviews, to ensure we identified strategies that apply to households across the net wealth spectrum and with both defined benefit (DB) and defined contribution (DC) pension plans, we randomly selected five households from the Health and Retirement Study (HRS), conducted by the University of Michigan, in the lowest, middle, and highest net wealth quintiles, with different combinations of pension plans in the middle and highest quintiles. See appendix III for selected characteristics of these five households. See appendix II for selected financial and demographic data about these net wealth groups.

The HRS is a nationally representative longitudinal survey of older adults sponsored by the National Institute on Aging and the Social Security Administration. The survey is administered in waves (generally every 2 years) and includes information on respondent demographics, health status, service receipt, and household characteristics, among other things. An additional HRS dataset, produced by the RAND Corporation, includes recoded variables and more detailed information on household finances. Using RAND's March 2010 compilation of HRS data for waves 1992 through 2008 and HRS data compiled by Gustman et al., we identified these net wealth groups using 2008 total net wealth data from RAND (including second homes) as well as the present value of households' DB and DC pensions in 2006.
We limited our sample to households with a member nearing typical retirement age (aged 55 to 60) in 2008 and adjusted income and asset values for inflation to 2008 dollars. These net wealth estimates did not include the present value of expected Social Security benefits. We assessed the reliability of the data we used by reviewing pertinent system and process documentation, interviewing knowledgeable officials, and conducting electronic testing on data fields necessary for our analysis. We found the data we reviewed reliable for the purposes of our analysis.

We drew a random selection of five typical households from the first (lowest), third (middle), and fifth (highest) net wealth quintiles. To do so, we further restricted our analysis to households with net wealth within 10 percent of the median for each of these three quintile groups. For example, for the lowest quintile, median net wealth was $2,000, so we selected households with net wealth in the $1,800 to $2,200 range. Based on data for the first (lowest) quintile (see app. III), we selected a single-person household with neither a DB nor a DC pension, with two or three living children (not necessarily living in the household), who reported being in "fair" or "good" health, and who did not own a house. Based on data for the third (middle) quintile, we selected two households consisting of married couples that owned their home, with either the respondent or spouse in "good" or "very good" health, and with two living children. From this quintile we selected one couple with only a DB pension and another with only a DC pension. Based on data for the fifth (highest) quintile, we selected two households consisting of married couples that owned their home. We selected one with either the respondent or spouse in "good" or "very good" health, with two living children, and with both a DB and a DC pension. We selected another couple from this quintile with only a DB pension, with members in "fair," "good," or "very good" health, and with no restriction concerning the number of their living children. This procedure provided five households with characteristics approximately equal to the median values for their net wealth quintile in these respects, although not necessarily in others. We shared data on these households with the experts we interviewed and discussed the strategies the experts would recommend these households utilize and the associated trade-offs. See the households' summary financial data in appendix III. We also reviewed company-specific financial product documentation and studies of retirement income strategies, such as those describing systematic withdrawals from retirement savings, including the results of Monte Carlo simulations.

To review the choices retirees have made for managing their pension and financial assets for generating income, we analyzed data from the HRS, reviewed others' analyses of the HRS, and analyzed data from the Social Security Administration compiled by the Office of the Chief Actuary. We reviewed other data sources, including data on retirement account holdings from the Employee Benefit Research Institute, labor force participation data from the Bureau of Labor Statistics, and poverty estimates from the Census Bureau's Current Population Survey. We analyzed data concerning the disposition of pensions using HRS data, including data compiled by RAND and Gustman et al. We restricted this analysis to workers who reported leaving employment with a DB or DC pension plan and retiring between 2000 and 2006.
We also included only respondents who were in the HRS data set during each wave from 2000 through 2006. Furthermore, we assembled and analyzed data for a subset of these respondents who provided information concerning the availability of a lump sum option for their DB pension in the same HRS wave in which they reported a pension disposition.

To identify policy options that are available to ensure income throughout retirement, as well as their advantages and disadvantages, we collected and reviewed information representing a variety of academic, consumer, industry, and government sources. We analyzed over 40 public comments from diverse groups submitted in response to the Department of Labor's (Labor) and the Department of the Treasury's (Treasury) 2010 request for information (RFI) on lifetime income, as well as statements made at relevant congressional and Treasury-Labor hearings. In addition to the RFI submissions, we also reviewed other publications from a variety of academic, consumer, and industry sources. We reviewed reports from Labor's Employee Retirement Income Security Act (ERISA) Advisory Council and financial literacy materials on retirement income available from federal agencies, including the online version of Labor's Taking the Mystery Out of Retirement Planning and the Financial Literacy and Education Commission's Web site, www.MyMoney.gov. We conducted interviews with a variety of academic, consumer, and industry sources. Interviews with officials of federal government agencies included Labor, the Securities and Exchange Commission (SEC), Treasury, the Internal Revenue Service (IRS), and Treasury staff of the Financial Literacy and Education Commission. Lastly, we reviewed applicable federal laws and regulations.

These demographic and financial characteristics are for households in the HRS in which either the respondent or spouse was in the 55 to 60 age range in 2008. Except as noted, the income figures apply to income in 2007, and asset figures apply to assets at the time of the 2008 HRS interview, typically mid-2008. Estimates are expressed in 2008 dollars. See table 8 for confidence intervals for these household characteristics. Table 8 presents the confidence intervals for the data in table 7, based on a 95 percent confidence level.

Below are selected demographic and financial characteristics of five households whose retirement prospects we discussed with financial planners and retirement income experts. We randomly selected these households from a sample of near-retirement households in the HRS in which the respondent and spouse were in the 55 to 60 age range in 2008. We selected one household from the lowest of five net wealth groups, two households from the middle net wealth group, and two households from the highest net wealth group.

Table 14 provides estimates and confidence intervals for estimates of the percentage of workers who reported the disposition of their pension upon leaving work with a DB pension and retiring. Based on analysis of our sample of HRS respondents, we are 95 percent confident that the actual proportion of workers is between the low and high percentages indicated in each cell. See appendix I for details concerning our methodology for developing these estimates. Table 15 addresses the dispositions of DC pensions by workers who left employment with a pension and retired. Table 16 describes selected types of tax-advantaged arrangements and products that may provide retirement income.
They include tax-advantaged retirement arrangements, annuity products, and investment products. This list is not meant to be exhaustive, but rather to provide a sense of certain types of financial arrangements and products that may provide income throughout retirement.

In addition to the contact named above, Michael J. Collins, Assistant Director; Joseph A. Applebaum; Carl S. Barden; Susan C. Bernstein; Jason A. Bromberg; Michael Brostek; Tara E. Carter; Patrick S. Dynes; Sharon L. Hermes; Mitchell B. Karpman; Gene G. Kuehneman Jr.; Mimi Nguyen; Benjamin P. Pfeiffer; Bryan G. Rogowski; Matthew J. Saradjian; Roger J. Thomas; Frank Todisco; Karen C. Tremba; and Walter K. Vance made key contributions to this report.

Defined Contribution Plans: Key Information on Target Date Funds as Default Investments Should Be Provided to Plan Sponsors and Participants. GAO-11-118. Washington, D.C.: January 31, 2011.
401(K) Plans: Improved Regulation Could Better Protect Participants from Conflicts of Interest. GAO-11-119. Washington, D.C.: January 28, 2011.
Consumer Finance: Regulatory Coverage Generally Exists for Financial Planners, but Consumer Protection Issues Remain. GAO-11-235. Washington, D.C.: January 18, 2011.
Social Security Reform: Raising the Retirement Ages Would Have Implications for Older Workers and SSA Disability Rolls. GAO-11-125. Washington, D.C.: November 18, 2010.
State and Local Government Pension Plans: Governance Practices and Long-term Investment Strategies Have Evolved Gradually as Plans Take On Increased Investment Risk. GAO-10-754. Washington, D.C.: August 24, 2010.
Retirement Income: Challenges for Ensuring Income throughout Retirement. GAO-10-632R. Washington, D.C.: April 28, 2010.
Social Security: Options to Protect Benefits for Vulnerable Groups When Addressing Program Solvency. GAO-10-101R. Washington, D.C.: December 7, 2009.
Retirement Savings: Automatic Enrollment Shows Promise for Some Workers, but Proposals to Broaden Retirement Savings for Other Workers Could Face Challenges. GAO-10-31. Washington, D.C.: October 23, 2009.
Retirement Savings: Better Information and Sponsor Guidance Could Improve Oversight and Reduce Fees for Participants. GAO-09-641. Washington, D.C.: September 4, 2009.
Private Pensions: Alternative Approaches Could Address Retirement Risks Faced by Workers but Pose Trade-offs. GAO-09-642. Washington, D.C.: July 24, 2009.
Financial Literacy and Education Commission: Progress Made in Fostering Partnerships, but National Strategy Remains Largely Descriptive Rather Than Strategic. GAO-09-638T. Washington, D.C.: April 29, 2009.
Private Pensions: Conflicts of Interest Can Affect Defined Benefit and Defined Contribution Plans. GAO-09-503T. Washington, D.C.: March 24, 2009.
Individual Retirement Accounts: Additional IRS Actions Could Help Taxpayers Facing Challenges in Complying with Key Tax Rules. GAO-08-654. Washington, D.C.: August 14, 2008.
Defined Benefit Pensions: Plan Freezes Affect Millions of Participants and May Pose Retirement Income Challenges. GAO-08-817. Washington, D.C.: July 21, 2008.
Private Pensions: Fulfilling Fiduciary Obligations Can Present Challenges for 401(k) Plan Sponsors. GAO-08-774. Washington, D.C.: July 16, 2008.
Individual Retirement Accounts: Government Actions Could Encourage More Employers to Offer IRAs to Employees. GAO-08-590. Washington, D.C.: June 4, 2008.
Private Pensions: Low Defined Contribution Plan Savings May Pose Challenges to Retirement Security, Especially for Many Low-Income Workers. GAO-08-8. Washington, D.C.: November 29, 2007.
Retirement Security: Women Face Challenges in Ensuring Financial Security in Retirement. GAO-08-105. Washington, D.C.: October 11, 2007.
State and Local Government Retiree Benefits: Current Status of Benefit Structures, Protections, and Fiscal Outlook for Funding Future Costs. GAO-07-1156. Washington, D.C.: September 24, 2007.
Retirement Decisions: Federal Policies Offer Mixed Signals about When to Retire. GAO-07-753. Washington, D.C.: July 11, 2007.
Defined Benefit Pensions: Conflicts of Interest Involving High Risk or Terminated Plans Pose Enforcement Challenges. GAO-07-703. Washington, D.C.: June 28, 2007.
Employer-Sponsored Health and Retirement Benefits: Efforts to Control Employer Costs and the Implications for Workers. GAO-07-355. Washington, D.C.: March 30, 2007.
Private Pensions: Changes Needed to Provide 401(k) Plan Participants and the Department of Labor Better Information on Fees. GAO-07-21. Washington, D.C.: November 16, 2006.
Baby Boom Generation: Retirement of Baby Boomers Is Unlikely to Precipitate Dramatic Decline in Market Returns, but Broader Risks Threaten Retirement Security. GAO-06-718. Washington, D.C.: July 28, 2006.
Older Workers: Labor Can Help Employers and Employees Plan Better for the Future. GAO-06-80. Washington, D.C.: December 5, 2005.
Social Security Reform: Answers to Key Questions. GAO-05-193SP. Washington, D.C.: May 2005.
Redefining Retirement: Options for Older Americans. GAO-05-620T. Washington, D.C.: April 27, 2005.
Highlights of a GAO Forum: The Federal Government's Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004.
Consumer Protection: Federal and State Agencies Face Challenges in Combating Predatory Lending. GAO-04-280. Washington, D.C.: January 30, 2004.
Private Pensions: Participants Need Information on Risks They Face in Managing Pension Assets at and during Retirement. GAO-03-810. Washington, D.C.: July 29, 2003.
Retiree Health Insurance: Gaps in Coverage and Availability. GAO-02-178T. Washington, D.C.: November 1, 2001.
Pension Plans: Characteristics of Persons in the Labor Force Without Pension Coverage. GAO/HEHS-00-131. Washington, D.C.: August 22, 2000.
Social Security Reform: Implications of Raising the Retirement Age. GAO/HEHS-99-112. Washington, D.C.: August 27, 1999.
Social Security Reform: Raising Retirement Ages Improves Program Solvency but May Cause Hardship for Some. GAO/T-HEHS-98-207. Washington, D.C.: July 15, 1998.
As life expectancy increases, the risk that retirees will outlive their assets is a growing challenge. The shift from defined benefit (DB) pension plans to defined contribution (DC) plans also increases the responsibility of workers and retirees to make difficult decisions and manage their pension and other financial assets so that they have income throughout retirement. GAO was asked to review (1) strategies that experts recommend retirees employ to ensure income throughout retirement, (2) choices retirees have made for managing their pension and financial assets for generating income, and (3) policy options available to ensure income throughout retirement and their advantages and disadvantages. GAO interviewed experts about strategies retirees should take, including strategies for five households from different quintiles of net wealth (assets less debt); analyzed nationally representative data and studies about retirees' decisions; and interviewed experts and reviewed documents about related policy options.

Financial experts GAO interviewed typically recommended that retirees systematically draw down their savings and convert a portion of their savings into an income annuity to cover necessary expenses, or opt for the annuity provided by an employer-sponsored DB pension instead of a lump sum withdrawal. Experts also recommended that individuals delay receipt of Social Security benefits until reaching at least full retirement age and, in some cases, continue to work and save, if possible. For example, for the two middle net-wealth households GAO profiled, with about $350,000 to $375,000 in net wealth, experts recommended purchasing annuities with a portion of savings; drawing down savings at an annual rate, such as 4 percent of the initial balance; using lifetime income from the DB plan, if applicable; and delaying Social Security. In navigating the difficult choices on income throughout retirement, experts noted that strategies depend on an individual's circumstances, such as anticipated expenses, income level, health, and each household's tolerance for risks, such as investment and longevity risk.

Regarding the choices retirees have made, GAO found that most retirees rely primarily on Social Security and pass up opportunities for additional lifetime retirement income. For example, by taking Social Security benefits when they turned 62, many retirees born in 1943 passed up increases of at least 33 percent in their monthly inflation-adjusted Social Security benefit levels available at the full retirement age of 66 (for this cohort, a benefit claimed at 62 is 75 percent of the age-66 benefit, so waiting raises the monthly amount by a third). Most retirees who left jobs with a DB pension received or deferred lifetime benefits, but only 6 percent of those with a DC plan chose or purchased an annuity at retirement. Those in the middle income group who had savings typically drew down those savings gradually. Nonetheless, an estimated 3.4 million people (9 percent) aged 65 or older in 2009 had incomes (excluding any noncash assistance) below the poverty level; among people of all ages, the poverty rate was 14.3 percent.

To help people make these often difficult choices, policy options proposed by various groups concerning income throughout retirement include encouraging the availability of annuities in DC plans and promoting financial literacy. Certain proposed policies seek to increase access to annuities in DC plans, which may be able to provide them at lower cost for some individuals.
However, some pension plan sponsors are reluctant to offer annuities for fear that their choice of annuity provider could make them vulnerable to litigation should problems occur. Other proposed options aim to improve individuals' financial literacy, especially to better understand risks and available choices for managing income throughout retirement in addition to the current emphasis on saving for retirement. Proposed options include additional federal publications and interactive tools, sponsor notices to plan participants on financial risks and choices they face during retirement, and estimates on lifetime annuity income on participants' benefit statements.
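The drawdown strategy experts described, withdrawing an annual amount such as 4 percent of the initial balance, is commonly evaluated with Monte Carlo simulations of the kind cited in this report's methodology. The sketch below is a simplified illustration under assumed market parameters (a 6 percent mean real return with 12 percent volatility); it does not reproduce any study GAO reviewed.

import random

def run_trial(balance, withdrawal, years=30, mean=0.06, stdev=0.12):
    """Return True if the portfolio funds every annual withdrawal for the full period."""
    for _ in range(years):
        balance -= withdrawal                      # inflation-adjusted annual withdrawal
        if balance < 0:
            return False
        balance *= 1 + random.gauss(mean, stdev)   # one year's simulated real return
    return True

trials = 10_000
initial = 375_000                 # e.g., savings of a middle net-wealth household
withdrawal = 0.04 * initial       # 4 percent of the initial balance, held constant
successes = sum(run_trial(initial, withdrawal) for _ in range(trials))
print(f"Portfolio lasted 30 years in {successes / trials:.0%} of trials")

Simulations of this kind underlie the sustainable withdrawal rates discussed in this report: raising the withdrawal rate or lowering the assumed return sharply reduces the share of trials in which the savings last.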
The collapse of the Soviet Union in 1991 heightened U.S. policymakers' concerns about the dangers posed by the Soviet Union's arsenal of nuclear, chemical, and biological weapons. The U.S. government is concerned that unemployed former Soviet Union weapons scientists pose a significant risk to nonproliferation goals because they may provide their weapons-related expertise to countries that are trying to develop weapons of mass destruction (known as countries of proliferation concern), criminal elements, or terrorist groups. It has been estimated that about 1 million scientists and engineers were employed in Russia's 4,000 scientific institutes.

Public Law 103-87, "The Foreign Operations, Export Financing and Related Programs Appropriations Act, 1994," made funds available for a cooperative program between scientific and engineering institutes in the former Soviet Union and the Department of Energy's (DOE) national laboratories and other qualified institutions in the United States. In response to the act, DOE undertook a program to curb the potential for proliferation posed by weapons scientists in the Newly Independent States (NIS) of the former Soviet Union through the Industrial Partnering Program. The name of this program was changed to the Initiatives for Proliferation Prevention (IPP) in 1996. The purpose of the program is to stabilize the technology base in these countries as they attempt to convert defense industries to civilian applications. Immediate attention was to be focused on institutes and supporting activities that would engage NIS weapons scientists and engineers in productive nonmilitary work. The program was expected to be commercially beneficial to the United States and the NIS. IPP was also expected to promote long-term nonproliferation goals through the commercialization of NIS technologies. While commercial benefit is a major emphasis of the program, the nonproliferation goals of the IPP program are the foundation for all program activities.

In 1998, DOE initiated another program that has complementary goals and focuses on creating jobs in 10 cities (commonly referred to as the nuclear cities) that formed Russia's nuclear weapons complex. This program, known as the Nuclear Cities Initiative, is discussed in more detail in chapter 4. It has been estimated that Russia's 10 closed nuclear cities contain about 1 million inhabitants. This total includes the families of the closed cities' weapons scientists and support personnel, such as teachers and technicians. The cities are called "closed" because access to them is restricted and they are geographically isolated. These cities have performed the most sensitive aspects of nuclear weapons production. Two of the cities, Arzamas-16 (now Sarov) and Chelyabinsk-70 (now Snezhinsk), are primarily research institutes, responsible for weapons design. The remaining eight were originally production facilities and are now involved in dismantling weapons and in securing and disposing of nuclear materials.

The director of DOE's Office of Nonproliferation and National Security stated that the IPP program's main objectives are to (1) identify and develop nonmilitary applications for NIS defense technologies and (2) create long-term jobs for NIS weapons scientists and engineers in the high-technology commercial marketplace.
DOE defines a weapons of mass destruction scientist or engineer as an individual with direct experience in designing, developing, producing, or testing weapons of mass destruction or the missile systems used to deliver these weapons. While not all workers on a project are required to satisfy the weapons of mass destruction requirement, the majority of the scientific personnel should have experience related to such weapons. The national laboratories, which supervise IPP projects, are responsible for ensuring that NIS facilities and personnel are directly linked to weapons of mass destruction work. The program focuses on preventing the proliferation of nuclear weapons but also addresses certain aspects of NIS chemical and biological warfare systems. The program aims to use about 70 percent of its funding for nuclear-related projects and 30 percent for chemical and biological projects.

An underlying principle of IPP is that the program is expected to have an "exit strategy" to limit U.S. government involvement. By serving as a catalyst to forge industrial partnerships between U.S. industry and NIS institutes, the program anticipated "handing off" commercial activities to the marketplace as they evolved and matured. In this sense, IPP was expected to provide the seed money that would lead to self-sustaining business ventures and help create a climate that would foster long-term nonproliferation benefits.

The IPP program is one of a number of U.S. nuclear nonproliferation programs focusing on the NIS. According to DOE officials, the program is limited in scope and is not designed to address the total problem posed by unemployed weapons scientists. Table 1.1 provides information on the various U.S. nonproliferation programs focusing on the NIS. According to DOE officials, IPP complements these other programs. Department of State officials, who oversee the U.S. portion of the International Science and Technology Center (ISTC) program, which also provides funds to NIS weapons scientists, said the two programs share similar objectives and can have a mutually beneficial effect. The programs do have some important differences. For example, ISTC is a multilateral program, funded by several countries and organizations, while IPP is a bilateral program, funded solely by the United States. Unlike ISTC, which is implemented by an intergovernmental agreement, IPP is implemented through a series of national laboratory contracts with NIS scientific institutes and laboratories.

IPP is implemented by DOE headquarters, DOE's national laboratories, and U.S. industry partners. The program is managed at DOE headquarters by an office director and is part of DOE's Office of Arms Control and Nonproliferation. The director has a staff of seven technical and support personnel. In addition, the office has five technical and support personnel who work on the recently established Nuclear Cities Initiative. The IPP program office is responsible for the program's overall direction, DOE and interagency coordination, final project approval, and budgetary matters. DOE's multiprogram national laboratories, plus the Kansas City Plant, play a major role in the day-to-day operations of IPP. IPP projects are assigned to national laboratory scientists, known as principal investigators, who (1) develop the projects with Russian scientists, (2) provide technical oversight for the projects, and (3) provide testing and technical confirmation of projects' results when required by U.S. industry.
Each laboratory also has an IPP program manager who monitors the laboratory’s IPP projects. An interlaboratory board was established in 1994 to coordinate, review, and facilitate the activities of the national laboratories and provide recommendations to DOE headquarters on the execution of the IPP program. Program managers from each national laboratory make up the interlaboratory board. An interlaboratory chairman is appointed for a 1-year period. The current chairman is from the National Renewable Energy Laboratory. Table 1.2 shows the distribution of IPP projects and associated funding among the national laboratories as of December 1998. A consortium of U.S. industry participants, called the United States Industry Coalition (USIC), was established in 1994 to promote commercialization with the NIS. USIC is a private nonprofit entity headed by a president and board of directors and includes U.S. companies and universities. (See app. I for a list of the USIC members as of Sept. 30, 1998). In order to participate in the IPP program, a company is required to become a member of USIC and pay dues based on its size. The dues structure is as follows: Small companies pay $1,000 for a 2-year period; consortiums and universities pay $2,000 for a 1-year period; and large companies pay $5,000 for a 1-year period. The IPP program comprises over 400 funded projects. These projects represent collaborative activities among DOE’s national laboratories, U.S. industry partners, and NIS institutes. The purpose of the activities is to convert NIS defense industries to commercial civilian applications. NIS nuclear, biological, and chemical weapons facilities are supposed to be the recipients of IPP funding. Also eligible are facilities that were associated with the development and production of strategic delivery systems or strategic defense systems. IPP projects are categorized in three phases—Thrust 1, Thrust 2, and Thrust 3. The first phase is geared toward technology identification and verification. Thrust 1 projects are funded by the U.S. government and focus on “lab to lab” collaboration, or direct contact between DOE’s national laboratories and NIS institutes. The second phase involves a U.S. industry partner that agrees to share in the costs of the project with the U.S. government to further develop potential technologies. The principal instrument used by DOE to promote partnerships is the cooperative research and development agreement. The U.S. industry partner is expected to match funds provided by DOE. Industry costs can include in-kind support, such as employee time and equipment. Projects that do not receive any financial support from the U.S. government, known as Thrust 3, are expected to be self-sustaining business ventures. According to DOE, 413 IPP projects had received funding as of December 1998. About 170 NIS institutes and organizations have been involved in the IPP program. The distribution of the projects among the three phases—and the associated funding levels—is shown in table 1.3. The IPP program is focused on four NIS countries—Russia, Ukraine, Belarus, and Kazakhstan. The bulk of the program’s effort is concentrated on Russia. About 84 percent of the funded projects are related to Russia, as shown in figure 1.1. IPP projects evolve from various sources. According to DOE and national laboratory officials, projects are proposed primarily by NIS scientists, laboratory officials, and U.S. industry. 
DOE, national laboratory, and State Department officials noted that many early IPP projects were "off the shelf" ideas of the national laboratories that heavily favored basic science with limited commercial potential. IPP's former program director told us the program's first priority was to initiate immediate projects at key NIS institutes to stabilize personnel who were facing the threat of economic dislocation. The idea was to get as many projects as possible under way in as short a time as possible. He noted that a key element in selecting early projects was to learn as much about the facilities and personnel as possible and thereby promote transparency at the NIS weapons institutes. In mid-1995, less than a year after IPP received its first year's appropriation of $35 million, 175 Thrust 1 projects and 29 Thrust 2 projects had received almost $20 million.

Before they are approved for funding, all proposed IPP projects are reviewed by DOE's national laboratories, DOE headquarters, and a U.S. government interagency group comprising representatives of the departments of State and Defense and other agencies. A project is initially reviewed by the DOE national laboratory that proposed the project. After passing the initial review, the project is further analyzed by the interlaboratory board and its technical committees. The project is then forwarded to DOE headquarters for review. DOE, in turn, consults with the Department of State and other U.S. government agencies for policy, nonproliferation, and coordination considerations. DOE headquarters is responsible for making the final decision on all projects.

According to its former director, the IPP program (1) faced continuous funding shortfalls, (2) was not adequately supported by DOE management, (3) faced confusion about the appropriate relationship between the national laboratories and U.S. industry over the commercialization of NIS technology, and (4) had poor relations with the State Department. Furthermore, the former program director noted that DOE management did not provide adequate support services, failed to recognize the program's successes, and was unwilling to support budget levels consistent with DOE's original commitments. He also noted that DOE management failed to address a series of problems with the State Department until irreparable damage had been done. These alleged problems ranged from broader policy-level issues to administrative matters, such as lack of support in processing country clearances for DOE visits to the NIS. The Department of State's Senior Coordinator for Nonproliferation Science Programs told us that constructive engagement between the two agencies ceased and employees of both became embroiled in personality conflicts. In the former IPP program director's view, DOE's failure to adequately address these impediments indicated that it did not consider the IPP program to be a high-priority nonproliferation activity. DOE and State Department officials acknowledged that the IPP program had difficulties in the early years but maintained that the situation has improved markedly with the appointment of a new IPP program director in September 1997. The new program director told us that he has the full support of DOE management and that the IPP program has improved relations with the Department of State. In the midst of these problems, DOE commissioned two reviews of the program by private contractors.
The first study, which cost $10,000, was completed in August 1997; the second, which began in October 1997, shortly after the first was completed, cost $99,985. The studies identified many similar programmatic weaknesses, including flaws in program management and oversight and a failure to commercialize projects. Recommendations to improve the program included obtaining the support of DOE management for the IPP program, establishing commercialization priorities and developing a commercialization model, incorporating commercialization criteria in project approvals, repairing relationships with other U.S. government entities, reaching out aggressively to industrial and financial firms, and restructuring the USIC model to enhance commercialization potential. According to the program director, since his appointment, he has implemented almost all of the recommendations. He further noted that program staff have been upgraded so that headquarters can assume control of financial and program management responsibilities from DOE's national laboratories and Albuquerque field office.

The Chairman of the Senate Committee on Foreign Relations asked us to review (1) the costs to implement the IPP program for fiscal years 1994-98, including the amount of funds actually received by NIS scientists and institutes; (2) the extent to which IPP projects are meeting their nonproliferation and commercialization objectives; and (3) DOE's Nuclear Cities Initiative.

To determine the purpose and scope of the IPP program, we reviewed DOE and State Department program files, discussed the program with various DOE officials, and met with U.S. industry officials. We met with the former director of the IPP program to obtain information about its history and also had numerous discussions with the current IPP director and members of his staff. We also met with the directors of DOE's Office of Nonproliferation and National Security and Office of Arms Control and Nonproliferation. We obtained information on the IPP program from Sandia National Laboratory, Los Alamos National Laboratory, and Argonne National Laboratory. At the Department of State, we met with the Special Adviser to the President and the Secretary of State on Assistance to the Newly Independent States and his staff. We also met with State's Senior Coordinator for Nonproliferation Science Programs and with various officials from the U.S. Embassy, Moscow. In addition, we interviewed several U.S. industry representatives who have been associated with the IPP program, including the former presidents of the U.S. Industry Coalition and officials from the University of New Mexico who provided administrative support to the coalition.

To identify the IPP program's costs for fiscal years 1994-98, we obtained data from DOE's IPP program office and national laboratories. We discussed these data with budget and program analysts from DOE's Office of Nonproliferation and National Security.

To assess the extent to which the IPP program was meeting its nonproliferation and commercialization objectives, we judgmentally selected 79 IPP projects valued at $23 million. Of the 79 projects, 70 were with Russia, 7 were with Ukraine, and 2 were with Belarus. Of the projects reviewed, 46 were Thrust 1, 30 were Thrust 2, and 2 were Thrust 3. One project was described as program directed and did not have an associated thrust level.
The projects were managed by five DOE laboratories—Argonne National Laboratory, Los Alamos National Laboratory, the National Renewable Energy Laboratory, Oak Ridge National Laboratory, and Sandia National Laboratory. (See app. II for a list of the projects.) We based our selection of projects on a number of factors. For example, we chose our projects from five DOE national laboratories that accounted for 57 percent of all funded IPP projects. The dollar size of projects was also a consideration; we chose projects whose allocations ranged from $30,000 to $1.4 million. In addition, we included the number of NIS scientists employed on the projects among our selection criteria. Furthermore, we asked DOE to provide us with a list of IPP projects that would be useful to review. DOE queried several national laboratories and provided that list to us. Whenever possible, we included these projects in our sample. We also provided DOE with a list of proposed projects that identified the Russian institutes we planned to visit. DOE officials said that the projects we chose represented a fair sample of IPP projects.

We used the IPP information system to identify IPP projects. The database was developed and maintained by Los Alamos National Laboratory. The system holds data on all funded IPP projects as well as draft proposals. Members from the national laboratories and the Kansas City Plant, DOE headquarters, the Department of State, and many U.S. companies that are members of USIC have access to the system. For the projects we selected for our sample, we found some inconsistencies, inaccuracies, and incomplete data. Whenever possible, however, we obtained corrected data through follow-up discussions with the principal investigators at each U.S. laboratory and with Russian officials.

To assess the IPP program's impact on U.S. nonproliferation goals, we met or spoke with the principal investigator for each IPP project. We used information contained in DOE's IPP information system to determine the extent to which each project focused on critical nonproliferation objectives, such as the number of weapons scientists engaged in the project and its potential commercialization benefits. We discussed with the principal investigator how the project was meeting these objectives and what role the investigator played in monitoring the project. We met or spoke with principal investigators from Los Alamos National Laboratory, Sandia National Laboratory, Argonne National Laboratory, Oak Ridge National Laboratory, the National Renewable Energy Laboratory, and the Kansas City Plant. In several instances, we contacted U.S. industry officials to follow up on the status of commercialization activities. For example, we discussed selected projects and related commercial activities with U.S. industry officials from RUSTEC, Inc. (Camden, New Jersey); Energy Conversion Devices, Inc. (Troy, Michigan); Bio-Nucleonics (Miami, Florida); TCI, Inc. (Albuquerque, New Mexico); and Raton Technology Research, Inc. (Raton, New Mexico).

We visited Moscow and St. Petersburg, Russia, in September 1998 to meet with government and institute officials about the program and selected IPP projects. We focused our visit on Russia because over 80 percent of all funded IPP projects are there. We met or communicated with representatives from the Russian Ministry of Atomic Energy and 18 institutes and organizations that receive IPP funds.
We met with the following organizations in the Moscow area: Entek (Research and Development Institute of Power Engineering), the Kurchatov Institute, the Research Institute of Pulse Technique, KVANT/Sovlux, the All-Russian Scientific Research Institute of Natural Gases and Gas Technologies (VNIIGAZ), the Gamaleya Institute of Epidemiology and Microbiology, the Institute of Nuclear Research, the All-Russian Scientific Research Institute of Inorganic Materials (VNIINM), the Engelhardt Institute of Molecular Biology, and the Institute of Biochemistry and Physiology of Microorganisms. In St. Petersburg, we met with the following organizations: the St. Petersburg State Electro Technical Institute, the V.G. Khlopin Radium Institute, the Ioffe Physico-Technical Institute, and the Association of Centers for Engineering and Automation (St. Petersburg State Technical University). We also met with officials from the All-Russian Scientific Research Institute of Experimental Physics (Sarov). In addition, we met in the United States with officials visiting from two other Russian institutes—the N.N. Andreyev Acoustics Institute and the Landau Institute of Theoretical Physics. We also had discussions with the director general of the State Research Center of Virology and Biotechnology (VECTOR). See appendix III for more information about each institute we visited.

One problem we encountered in doing our work was that we were denied access to Sarov, a closed nuclear city in Russia. We had planned to visit the city to learn more about its economic conditions and review several IPP projects. We had been granted access to visit the city, including obtaining the required entry and visa documents. Furthermore, IPP contracts with NIS institutes have a provision that allows for audits by GAO. After we had arrived in Russia, however, we were informed that the visit had not been cleared by Russia's Federal Security Service (formerly the KGB) and we would not be permitted to enter Sarov. Representatives from Sarov, however, traveled to Moscow to meet with us. They told us that they wanted us to visit their city but did not have the final approval authority. We performed our work from February 1998 through February 1999 in accordance with generally accepted government auditing standards.

As of June 1998, institutes in the Newly Independent States (NIS) had received about 37 percent of all IPP funding. About 51 percent of the program's funds have gone to DOE's national laboratories, and 12 percent have supported U.S. industry's participation in the program. The portion allocated to DOE's laboratories goes for the salaries of scientists engaged in the IPP projects, as well as for laboratory overhead charges. In Russia, scientists and others working on IPP projects received less than 37 percent of IPP funds because of various Russian taxes and administrative overhead charges on IPP funds at their institutes. DOE officials told us that they view the Russian taxes as costs over which they have no control and consider administrative charges an acceptable program cost.

DOE officials told us that, for the IPP program to achieve its goals, it should be funded at about $50 million per year. At that level, they believe the program could be phased out by 2007. However, the program has never received that much funding in any one year. For example, in fiscal year 1994, the IPP program received its largest amount—$35 million.
DOE is developing a strategic plan to establish goals for the IPP program and a means of measuring its accomplishments.

Most IPP funds have gone to DOE's national laboratories to cover (1) the costs of scientific research related to IPP projects, (2) the costs of developing or monitoring the projects, and (3) various kinds of administrative and overhead charges. As indicated in figure 2.1, an analysis of the program's expenditures from fiscal year 1994 through June 1998 shows that 51 percent, or $32.2 million, of the $63.5 million spent on the IPP program has gone to reimburse DOE laboratories. (Figure 2.1 breaks down the $63.5 million as follows: $23.7 million for NIS expenditures, $10.8 million for DOE laboratories' direct project costs, $21.4 million for DOE laboratories' administrative and overhead costs, and $7.6 million for the U.S. Industry Coalition's administrative costs.)

The direct costs of DOE laboratories for projects ($10.8 million, or 17 percent of all program expenditures) include funds used for the salaries and travel costs of DOE laboratory researchers during the time they worked on specific IPP projects. Principal investigators at the DOE laboratories told us they and their staff spent time conducting research related to the projects or monitoring the NIS contracts. IPP projects were usually not the main responsibility of the principal investigators. In several cases, they told us they spent about 5 to 10 percent of their time monitoring an IPP project. Furthermore, they said they spent most of this time during the early stages of the project, developing the paperwork necessary to get the project started.

Besides the funds attributable to the principal investigators and their research staff at DOE laboratories, a small portion of IPP funds was allocated for equipment and materials. However, the bulk of the expenditures for DOE laboratories went for administrative support fees. Totaling $21.4 million, these expenditures represented 33.7 percent of total program expenditures. The support fees include a portion of laboratory overhead, including the salaries and travel expenses of the IPP program managers, who coordinate the program among scientists at each laboratory; various standard administrative and support costs, paid to the contractor that operates the laboratory; another administrative charge, specifically for this program, taken from the funds earmarked for institutes in the Newly Independent States; and materials and subcontracts purchased in the United States and valued at $2 million.

The director of the IPP program told us he was concerned about the laboratories' costs for operating the program and the length of time it took to receive financial information from some of the laboratories. The director of the Office of Nonproliferation and National Security and other DOE officials told us that they believe laboratory overhead should be reduced to maximize the amount of money received by NIS weapons institutes. That director also told us that although her office supported funding the principal investigators, IPP should not be a jobs program for DOE's national laboratories. The Department of State's special adviser on assistance to the NIS told us that while he supported the goal of IPP, he questioned how valuable the laboratories are in promoting the goals and objectives of the program and said that questions should be raised about the extent and duration of the laboratories' involvement.

Until the end of fiscal year 1998, the University of New Mexico provided administrative services to the U.S.
Industry Coalition (USIC), the consortium of industry partners interested in cooperating with DOE on IPP projects with the Newly Independent States. DOE's costs for the University of New Mexico's participation totaled about $7.6 million through June 1998. DOE anticipated that the consortium would become self-sustaining after 5 years, following strategic investments in successful IPP projects. According to DOE officials, the university never fulfilled the role envisioned for it, and its staff generally did not possess the required expertise. DOE decided to terminate funding for the university as of September 30, 1998. DOE and the University of New Mexico agreed that the university's resources were not well suited to support IPP's increased emphasis on commercializing projects. The university may, however, provide some support services to IPP in the future. IPP program officials and industry members of USIC, the chartered corporation, told us that USIC should still play a role in promoting the commercialization of NIS technologies. On October 1, 1998, DOE entered into an agreement with USIC to pursue commercial efforts with the NIS. USIC is currently organizing an office in Washington, D.C., to carry out its responsibilities. DOE has agreed to support USIC's operations through September 30, 1999, at a cost of $1.6 million. As of June 1998, about 37 percent, or $23.7 million, of the program's expenditures had been used to pay for work at NIS institutes; however, not all of these funds are reaching weapons scientists, engineers, and technicians who work on IPP projects. After a DOE laboratory wires a payment of funds to a bank designated by a Russian institute—a step DOE takes when a principal investigator is satisfied that a segment of work on a project is complete—the bank may charge a fee, some taxes may be paid, and the institute may take some of the funds for general overhead expenses. When a Russian scientist finally receives a payment, the individual may have to pay additional taxes on that income. Although DOE has sometimes tried to help the institutes avoid or postpone tax payments, it is unclear how successful such efforts have been. During our review, we found that principal investigators at DOE laboratories often did not know how much IPP funding their Russian counterparts received. Neither DOE nor its laboratories require any receipts or other explanation from the Russian institutes to show how the funds sent to Russia are allocated. Financial officials and others at the DOE laboratories are satisfied if they have documentation that the funds went to the designated bank account for the NIS institute. Principal investigators told us that their role was mainly to establish the contracts and to monitor the technical work products of the NIS researchers. DOE does not have detailed records of the amounts of IPP funding received by individual scientists, engineers, and technicians in the NIS, and therefore it is uncertain how much of the funding supplements their salaries. However, at Russian institutes, according to a March 1998 DOE report to the Congress, the average IPP recipient receives about 47 percent of the funds provided to the institute. The remainder typically goes for various payroll taxes—pensions, medical insurance, and the equivalent of Social Security—along with 7 to 18 percent for the institute's overhead costs. In addition, the IPP recipient's salary may be subject to an income tax of 12 to 35 percent.
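To make this chain of deductions concrete, the sketch below traces a hypothetical $10,000 payment through the steps just described. The rates are illustrative assumptions chosen from the ranges in the March 1998 DOE report (they are not figures for any actual institute), and the model simplifies by treating payroll taxes as a deduction from the funds remaining after overhead:

```python
# Illustrative trace of how an IPP payment shrinks on its way to a
# scientist's salary. All rates are assumptions drawn from the ranges
# in DOE's March 1998 report, not figures for any actual institute.
PAYMENT = 10_000          # dollars wired to the institute's bank
OVERHEAD_RATE = 0.15      # institute overhead (reported range: 7-18%)
PAYROLL_TAX_RATE = 0.40   # pensions, medical insurance, social security analog
INCOME_TAX_RATE = 0.20    # recipient income tax (reported range: 12-35%)

after_overhead = PAYMENT * (1 - OVERHEAD_RATE)     # left after overhead
gross_salary = after_overhead * (1 - PAYROLL_TAX_RATE)  # salary pool
net_salary = gross_salary * (1 - INCOME_TAX_RATE)  # what the scientist keeps

print(f"Gross salary share of payment: {gross_salary / PAYMENT:.0%}")  # ~51%
print(f"Net share after income tax:    {net_salary / PAYMENT:.0%}")    # ~41%
```

With these assumed rates, the gross salary share comes to about half of the payment, in the neighborhood of the 47 percent average DOE reported; bank fees, which the sketch omits, would reduce the net amount further.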
The director of the IPP program said that overhead payments to the institutes were justified as long as they were reasonable because they helped to stabilize the institutes. Even though not all of the funds destined for the Newly Independent States are allotted for salaries, DOE officials said the funds are being used mostly to achieve the goal of stabilizing the institutes. At several of the 15 institutes we visited in Russia, we attempted to determine how much IPP funding each institute received and how the funding was allocated at each institute. Although we were not usually provided with documentation to review, in general, Russian officials told us that the funds received by the institutes went for taxes, administrative and overhead costs, and salaries. An analysis of the information provided to us indicated that the amount of IPP funding reaching weapons scientists and technicians at the institutes varied. For example, we were told at one institute that none of the IPP funds went for salaries; instead, the funds were used for overhead, travel, computers, and Internet access. (See app. IV for additional information on how funding was allocated at Russian scientific institutes.) We also met with the director of a Russian institute who was visiting the United States and participated in the IPP program. He told us that he did not receive the amount of funding that DOE's information showed going to his institute. Our review of the project found that (1) DOE's information was inaccurate, (2) laboratory officials responsible for the project did not know how much went to the institute, and (3) half of the funds allocated to the Russian institute went to a U.S. company instead. We discussed this project with DOE officials. They told us that they investigated the case, with the assistance of their General Counsel, because of the concerns we raised. DOE found that a number of actions occurred during the course of the project that were contrary to IPP policies and practices and said that they will not be allowed to recur. A discussion of this IPP project follows: DOE's IPP database showed that the N.N. Andreyev Acoustics Institute, in Moscow, received $68,200 of the $99,700 spent for the demonstration of an acoustic nozzle developed at the institute. However, the director of the institute told us that the institute actually received $27,000. According to the director, about 40 percent of the $27,000 was allocated for the salaries of scientists and others participating in the project. For example, the Russian inventor of the nozzle received $5,000 (equal to about 50 months' salary), or about 5 percent of all IPP funds spent on the project. The remainder of the $27,000 went for taxes in Russia and the institute's overhead. Records supplied by Argonne National Laboratory show that it paid out $60,000 rather than $68,200 in February 1998. The IPP program director at Argonne said that the IPP database showed $68,200 was spent for the NIS institute, but $8,200 of that amount was part of a $39,700 payment to Argonne, not to the Russian institute. According to the DOE laboratory's records, about $60,000 went to a bank account designated by the Russian institute. However, the manager of Argonne's IPP program said he suspected that the Russians received less than half of the $60,000. This is because Argonne transferred the $60,000 to a U.S. company that represented the Russian institute.
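The competing dollar figures in this account can be reconciled with simple arithmetic. The sketch below restates the amounts reported above (the variable names are ours); the two assertions encode the reconciliations: total spending splits into Argonne's payment plus the wire transfer, and DOE's database figure mistakenly combined the wire transfer with part of Argonne's own payment:

```python
# Reconciliation of the figures reported for the acoustic nozzle project.
# All dollar amounts come from the report text; variable names are ours.
total_spent = 99_700          # total IPP spending on the project
argonne_payment = 39_700      # payment to Argonne itself
wired_to_us_company = 60_000  # wired in February 1998 toward the institute
misattributed = 8_200         # part of Argonne's payment, counted as NIS money

assert argonne_payment + wired_to_us_company == total_spent
# DOE's database figure for the institute ($68,200) combined the wire
# transfer with the misattributed portion of Argonne's payment.
assert wired_to_us_company + misattributed == 68_200

institute_reported = 27_000   # what the institute director said it received
inventor_share = 5_000        # the nozzle inventor's payment
print(f"Reported as reaching the institute: {institute_reported / total_spent:.0%}")  # ~27%
print(f"Inventor's share of all spending:   {inventor_share / total_spent:.0%}")      # ~5%
```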
Argonne officials, including the internal audit manager who reviewed the laboratory's records on our behalf, told us it was unclear how much of the $60,000 went to the Russian institute or its personnel. The U.S. company became the institute's exclusive agent for acoustic activities in North America the same week in February that the agreement with the DOE lab was finalized. The company provided us with documents stating that the Russian institute would receive $30,000 and the U.S. company would receive the remaining $30,000. According to a letter the company sent the Russian institute on April 20, 1998, the Russian share included (1) $4,368 for equipment and travel costs for two institute officials visiting the United States, (2) $2,500 for the institute's share of program and demonstration set-up costs, and (3) $23,131 for the Russian institute's costs. In general, representatives of the Russian institutes we visited said it was typical for a portion of the IPP funds to be used for taxes. The March 1998 DOE report to the Congress on Russian taxation of the IPP program described the tax situation for IPP as a problem, but not as debilitating. According to the report, there was no comprehensive mechanism that guaranteed tax exemption for U.S. nonproliferation programs, but a temporary agreement between the United States and Russia, known as the Panskov-Pickering Agreement, provided for deferring taxes. In many instances, however, Russians involved with the IPP program were not aware of the temporary agreement on income tax deferment and therefore did not contact the U.S. embassy to obtain it. In other cases, local authorities ignored the agreement, according to the DOE report. By July 1998, according to a DOE official, the Russian State Tax Service said that the agreement was no longer valid and all postponed taxes were due; however, the agreement was reinstituted in November 1998. A DOE official said that if the Russian Duma ratifies and the Russian President approves a bilateral agreement, signed by the United States and Russia in 1992 and providing exemptions from some Russian taxes for U.S. aid, then the tax deferments under the Panskov-Pickering Agreement may become permanent. Unlike the IPP program, some aid programs to Russia, such as the ISTC program, provide assistance that is exempt from Russian taxes because of an intergovernmental agreement. DOE officials said that while the ISTC program does not pay taxes because of an intergovernmental agreement, all projects, including those of the ISTC, may still involve some customs duties, bank fees, and taxes at the local if not at the national level. As shown in table 2.1, funding levels for the IPP program have varied. In fiscal year 1994, the program's initial year, IPP received its highest annual level of funding, $35 million. In the following year, it was not funded. DOE officials believe the program needs more consistent funding and say they see a need for a program plan with adequate performance measures. DOE officials hold a variety of views on when to end the IPP program. In part, their views depend on the program's receiving adequate funding and accomplishing its mission. The former director of the program told us he believed the program could have ended after 5 years if it had received adequate funding. Originally, he anticipated that it would receive $50 million per year and become self-sustaining after 5 years.
The current director of the program also told us in February 1998 that the program could end by 2006 if it was adequately funded at about $50 million per year. However, in June 1998 he said that funding the program and then terminating it after 5 years was artificial. He said the program should be continued as long as it is useful and meets a need. The director of DOE's Office of Nonproliferation and National Security said that she would like to see the IPP budget increased to $50 million per year. She believes that amount would be sufficient for DOE to make a significant impact on nonproliferation and commercialization and to end the program. She believes that adequate funding could lead to a phaseout by 2007. She noted that as DOE closes in on the 2000 time frame, it will be time to take a hard look at IPP, just as DOE will take a look at its other nonproliferation programs. The successful completion of the program depends on identifying the goals of the program and determining when they have been achieved. The director of the program is developing program goals and a strategic plan. In February 1998, the director said the program was changing how it planned to measure performance. He noted that the program has to be results oriented if it is to succeed. In the past, the most commonly used measures of the program's success included the number of projects, the amount of funds a project provided to the NIS, and the number of institutes engaged. These measures would continue to have some use, according to the director, but IPP must employ more meaningful measures that show results. Consequently, he was looking at measures such as the number of patents issued for projects or the number of companies created. The director said the strategic plan will include about a dozen ways to measure performance. As of January 1999, the IPP program had developed a draft strategic plan, which includes some performance measures. Possible program measures include, among other things, (1) the amount of funds spent, (2) the number of NIS employees engaged in the IPP program, and (3) the number of job opportunities created. Possible commercialization measures include (1) the number of Thrust 3 projects, (2) the amount of private-sector funding for Thrust 2 and Thrust 3 projects, and (3) the number of commercial patent applications. Russian officials participating in the IPP program told us that IPP program funds are helping to prevent some institutes from closing and are supplementing the salaries of some scientists. However, numerous obstacles, such as a lack of capital and markets, are preventing the program from achieving its long-term goal of successfully commercializing IPP projects. DOE's implementation and oversight of the IPP program raises concerns. For example, program officials are using inconsistent and imprecise methods to identify the number and background of NIS scientists and institutes receiving IPP funding. As a result, some institutes receive IPP funds, even though they are not associated with weapons research and development programs. In addition, IPP projects are not just directed to former weapons scientists. In some cases, scientists currently working on Russia's weapons of mass destruction program are receiving IPP program funds to supplement their salaries. Some of the projects we reviewed also had "dual-use" implications that could yield unintended, yet useful, defense-related information. Furthermore, some U.S.
officials responsible for reviewing proposed IPP projects related to chemical and biological research told us that they did not always receive enough information from DOE to adequately review the projects. In general, officials at the 15 Russian institutes we visited were supportive of the program. Officials from three institutes told us that the IPP program had prevented their laboratory or institute from shutting down and reduced the likelihood that scientists would be forced to seek other employment. A representative from Sarov told us that without the IPP program, the situation at the institute would be a disaster. An official from the Research Institute of Pulse Technique said the IPP funding added $200 per month in salary and benefits for each employee assigned to the project, a significant amount for a Russian scientist. Some institute officials told us that the benefits of the IPP program went beyond financial support. For example, the general director of the St. Petersburg State Technical University said the IPP project on metal recycling has helped teach the university how to do business with the United States. Given the dire financial and physical conditions at some of these locations, it is not surprising that institute officials were grateful for IPP funds. At several institutes we saw poorly lit, unheated work space and laboratories, aging equipment, crumbling floors, and peeling paint. Furthermore, some institute officials told us that their workers had not been paid in several months and salaries had been eroded by the recent devaluation of the ruble, the Russian currency. For example, officials from the city of Sarov, which contains a major Russian nuclear weapons design facility, told us that the average monthly salary was about $200. The recent devaluation of the ruble, however, has reduced the actual value of the salary by about half. To date, no IPP projects can be classified as long-term commercial successes, and only a few have met with limited success. Overall, of the over 400 funded projects, only two have achieved Thrust 3 status (as potential self-sustaining business ventures) and 79 are categorized as Thrust 2 (an intermediate step toward commercialization). Even the Thrust 3 projects that we reviewed have not achieved the type of commercial success envisioned by DOE. In fact, one of these projects, which is designed to help one of Russia's closed nuclear cities develop material used in the production of silicon chips, does not have a U.S. industrial partner and faces an uncertain future. DOE and national laboratory officials told us that when the program was started, there was a general expectation that most projects would not graduate from Thrust 1 to Thrust 2 to Thrust 3. According to DOE data, 31 Thrust 1 projects have evolved to Thrust 2, and one project has evolved from Thrust 2 to Thrust 3. Plans for the IPP program envisioned, however, that projects would move from Thrust 2 to Thrust 3 in 3 years. The IPP program director told us he was disappointed that more projects have not evolved more quickly. He indicated that there were too many ongoing Thrust 1 projects with little or no commercial potential. He said, however, that the limited commercial success of the IPP projects is not surprising in view of the difficulties involved in commercialization. According to the director, commercializing science and engineering projects is very difficult in the United States and much more difficult in Russia.
He noted that commercializing a new specialty chemical or polymer can take from 6 to 8 years in the United States. IPP projects do not have to start at the Thrust 1 phase. DOE officials are now stressing the commercialization of projects and told us that projects should have a U.S. industry partner identified at the conceptual stage. The director of DOE's Office of Arms Control and Nonproliferation told us that if a project does not have a clear commercial objective, he will not approve it unless there is an overriding national security consideration. We found that many factors affected commercialization, including a lack of capital, the lack of a clearly defined goal for achieving commercial success, the inadequate training of NIS scientists in business-related skills, limited markets, and concerns about intellectual property rights. The difficulties of commercializing IPP projects have increased with the recent economic crisis in Russia. We found some IPP projects with limited commercial success—that is, a product has been developed and appears marketable, but customer demand for the products has generally not been established. A few projects we reviewed showed commercial potential and had interested U.S. industry partners. These included (1) a metals recycling partnership between U.S. industry and a Russian entity, (2) a photovoltaic cell renewable energy production project, and (3) a technology to eliminate insects from Russian lumber. For the first two projects, the U.S. industry-NIS partnerships were established before the partners began to participate in the IPP program. (See app. V for more information on these and other IPP projects.) Several institute officials told us that current economic conditions in Russia discourage commercialization and investment. Some institute officials told us that Russian banks had frozen their assets and they were unable to be paid for work being done under IPP projects. Worsening economic conditions compound the difficulties associated with investing in Russia. According to the director general of the Khlopin Institute, it is unrealistic to expect that nuclear scientists trained under the Soviet system can easily make the transition to a market-based economy. He also believed that DOE's national laboratories were not well equipped to promote commercialization in Russia. Some DOE national laboratory officials told us that they did not have the background and skills needed to fully implement commercialization programs in the NIS. The IPP program director at Sandia National Laboratory told us that the laboratories have done a good job of identifying potential projects and U.S. industrial partners. However, a laboratory is not the place to raise venture capital and develop markets for products because a laboratory does not have that kind of expertise. The actual commercial development must come from U.S. industry. According to the general director of the St. Petersburg State Technical University, Russia needs an infrastructure in place before it can undertake significant commercialization activities. He said that, in the long term, Russia needs to develop a cadre of managers who know how to deal in a market economy. Without such managers, commercialization will not take place on a broad scale in Russia.
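The Thrust-stage counts reported earlier in this chapter imply how much of the portfolio remains at the earliest stage. The sketch below recomputes the shares; treating "over 400" as exactly 400 projects is a simplifying assumption, so the results are approximations:

```python
# Approximate distribution of IPP projects across Thrust stages, using
# the counts cited earlier: over 400 funded projects, 79 at Thrust 2,
# and two at Thrust 3. The total of 400 is a simplifying assumption.
total_projects = 400
thrust3 = 2    # potential self-sustaining business ventures
thrust2 = 79   # intermediate step toward commercialization
thrust1 = total_projects - thrust2 - thrust3

print(f"Thrust 1: {thrust1 / total_projects:.0%}")   # ~80 percent
print(f"Thrust 2: {thrust2 / total_projects:.0%}")   # ~20 percent
print(f"Thrust 3: {thrust3 / total_projects:.1%}")   # ~0.5 percent
```

These rough shares are consistent with the report's later observation that over 80 percent of IPP projects remain in the Thrust 1 stage.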
Despite the limited success in commercializing IPP projects, DOE officials told us that the program has been successful because it has at least temporarily employed thousands of weapons scientists at about 170 institutes and organizations throughout Russia and other Newly Independent States. Our review raised several concerns about DOE's implementation and oversight of the IPP program, including the adequacy of DOE's efforts to obtain information on the background and number of NIS scientists and institutes engaged in IPP projects; the appropriateness of DOE's supplementing the salaries of scientists currently working in Russia's weapons of mass destruction program; the advisability of DOE's funding projects that could unintentionally provide defense-related information to Russian and other NIS scientists; and the adequacy of DOE's reviews of IPP projects dealing with chemical and biological research. DOE's program guidance specifies that each project proposal should include a discussion of the background and experience of the key NIS scientists and institutes to determine that they possess the appropriate weapons of mass destruction background. The guidance also specifies that the principal investigator at the DOE laboratory is responsible for providing this information for each project. Some principal investigators told us that information on the backgrounds of the NIS scientists and engineers was not relevant to the project's success. In two instances, they said it was "none of their business" to ask for such information, claiming that doing so would have been too intrusive or would have resulted in a breach of Russia's national security laws. One principal investigator told us that he does not want to know the roles of the scientists because this information could jeopardize relationships and put the NIS scientists at risk for revealing such information. At one national laboratory, the IPP program director said the laboratory does not generally ask about scientists' backgrounds because of concerns about undermining the potential success of a project. During our visit to Russia, we asked for and received background information on scientists from officials at some institutes. Representatives from Sarov told us that it was not a violation of Russia's laws to provide background information, provided that a request was limited to general information about the scientists' nuclear weapons-related activities. DOE's IPP program director told us that the principal investigators monitor the projects very closely, helping to ensure accountability. However, we found that the degree of oversight varied among the U.S. laboratories. In general, the principal investigators told us that they monitor the projects through contract deliverables (end products) received from the institutes, such as technical reports. A principal investigator is satisfied that an institute has complied with the terms of the contract between the national laboratory and the NIS institute upon (1) receiving the required deliverable(s) and (2) ensuring that the institute has met other technical expectations. Generally, the principal investigators did not believe their role included verifying the number of scientists working on a project or trying to determine if the scientists were performing weapons-related work while receiving IPP funding.
A Sandia National Laboratory principal investigator told us that he was not concerned about the number of NIS scientists who were involved in the project as long as the institute met the technical requirements of the contract. From the projects we reviewed, it was not always clear how NIS institutes and scientists were selected for IPP funding. DOE and laboratory officials told us that at the beginning of the program, it was important to get as many projects as possible under way in as short a time as possible. They noted that part of the initial phase of the program was focused on learning about the NIS institutes. A State Department official told us that IPP has not focused consistently on the most critical weapons institutes. This official told us she is uncertain that IPP program officials always ask the right questions about reaching the highest-priority NIS scientists when screening projects for funding. The president of the Kurchatov Institute, in Moscow, told us that, in general, IPP projects have not targeted the most critical nuclear scientists. He noted that two IPP projects that DOE identified as being highly successful have not focused on important weapons scientists and that nonproliferation efforts to date have been ad hoc, with no real strategy in mind. The IPP program director initially told us that there is no comprehensive, consolidated U.S. government-wide list of critical institutes and scientists that the program seeks to engage. According to the director, a list of institutes of nonproliferation interest for Kazakhstan, Ukraine, and Belarus has been developed. An interim list of Russian institutes has also been issued and continues to be refined. The director said that DOE works primarily with the national laboratories, the State Department, and other agencies to try to ensure that it is focusing on the most important nuclear institutes. However, in some cases the principal investigators were uncertain about the institutes' roles in weapons activities. The Los Alamos National Laboratory's IPP program director told us that sometimes the definition of a weapons of mass destruction scientist is stretched to maximize the participation of NIS scientists and institutes in the IPP program. For more than half of the projects we reviewed, we were able to determine that the institutes that performed the work had a clear affiliation with weapons of mass destruction or other defense-related activities. These institutes either had a direct connection to weapons research, design, or production or were affiliated with materials production or uranium enrichment. However, we found that in about 20 cases, the institutes that received IPP funding did not appear to have a direct association with weapons of mass destruction or defense-related activities. We were unable to determine the institutes' backgrounds for the remaining projects we reviewed. Some projects that were not focused on weapons-related institutes included the following: At the Institute of Nuclear Research, which has participated in three IPP projects, the work has always been academic in nature, according to institute officials. They said the institute never directly performed military work. According to DOE, although the institute is not a primary weapons institute, it has conducted considerable work on the effects of radiation on electrical systems. Currently, the institute has no significant military role and has probably not had one since the early 1990s.
Russia’s natural gas enterprise, VNIIGAZ, which participated in one IPP project, has performed no defense-related activities, according to officials. A national laboratory principal investigator told us that a project that focused on studying the effects of radiation contamination in Ukraine was not related to weapons of mass destruction. In the course of our review, we also tried to determine if the 15 institutes we visited, plus the key biological warfare institute in Russia, are training or have had contacts with representatives from countries of proliferation concern. We received responses from 12 of the institutes and found some evidence that contacts with countries of proliferation concern had occurred at four institutes. In one case, a researcher from an NIS biological institute, which had received IPP funds, told us that he had gone to Iran on a teaching contract. He said he did not provide any sensitive information to Iran. Another institute told us that it had provided training to Libya in 1994 on light water reactors but said that the training had taken place before the IPP project was awarded in 1996. On January 12, 1999, the Clinton administration imposed economic penalties on this institute after determining that it had provided sensitive missile or nuclear assistance to Iran. According to DOE officials, the IPP program had been withholding approval on additional projects for this institute for several months in anticipation of this recent U.S. government action. We were also told that one institute trained students from India, Pakistan, and Iran about 10 years ago. Also in 1994, the institute provided a special training course in radiochemistry for a group of about 20 students from China. An institute official said that no sensitive information had ever been included in the training courses. Finally, officials from a technical university that received IPP funds told us they are currently training students from China, India, Libya, Pakistan, Sudan, and Syria. Officials from several institutes we visited told us that they were not aware of any scientists emigrating to countries of concern to provide weapons-related services. Some institute officials told us that their employees are patriotic and would not jeopardize their own country’s national security by providing information to a rogue state. Nevertheless, Russian institute officials did note that “brain drain” is a problem. For example, Russian scientists are leaving the institutes but are emigrating to countries like the United States, Israel, and Germany for better opportunities. In addition, scientists and technicians are seeking employment in Russia’s banking and technology industries. One institute official said he is most concerned about scientists who leave the scientific field because their skills are lost forever. He said that when a scientist emigrates to another country, however, these skills are maintained. IPP program guidance specifies that the number of people employed in the NIS on IPP projects is a primary measure of the program’s success. According to program officials, the guidance clearly requires that accurate figures on the number of scientists and engineers be maintained. The national laboratories we visited—Los Alamos, Sandia and Argonne—had different methods for determining the number of NIS scientists and engineers working on IPP projects. 
One of the laboratories relied primarily on estimating the number of scientists by applying a formula under which the total value of the contract was divided by the scientists' average monthly salary to arrive at the number of full-time equivalents. The other laboratories used a combination of formulas plus some form of verification, but no approach was applied systematically. In many cases, however, laboratory principal investigators knew the names of some key NIS participants as a result of prior meetings, correspondence, or reports submitted to the laboratories. According to a Sandia official, accurately tracking the number of scientists employed on projects was not considered very important at the start of the program. As a result, efforts to develop these figures were not a priority. A former Sandia principal investigator who helped implement the IPP program told us that it was never the intent of the program to identify exactly how many NIS scientists were working on a project. In some instances, principal investigators provided us with resumes and/or lists of NIS scientists engaged in the projects. Argonne officials said that they tried to get this type of information for many earlier projects because the former Argonne administrator of the program viewed it as necessary to qualify an institute for IPP funding. In one case we reviewed, national laboratory information indicated that no scientists were employed on a project. However, according to officials from the Russian institute, about 50 people were involved in the project. In several instances, information provided by the U.S. national laboratories did not indicate how many scientists were employed on a project. According to program officials, as a result of our review, principal investigators at the national laboratories are becoming reacquainted with program guidance on the need to maintain accurate information on the number of scientists receiving IPP funds. The September 1993 Report of the Senate Committee on Appropriations provides guidance on the types of NIS institutes the Congress expected would be included in the IPP program. The Committee recognized that the Russian institutes were "principally devoted to military activities" and that a loss of employment had affected "weapons scientists and engineers previously involved in the design and production of weapons of mass destruction." DOE's program guidance is unclear on whether funds should be going exclusively to former, or previously employed, weapons scientists or whether scientists currently working on weapons of mass destruction programs are eligible to receive funding. The director of the IPP program told us that although program guidance is unclear on this point, he believes that both current and previously employed weapons scientists are eligible for program funding. We found that IPP projects are not directed solely to former weapons scientists. For example, scientists from Sarov who were participating in the IPP program and receiving salaries supplemented by IPP funds told us that they are working on weapons of mass destruction projects. Sarov's deputy director for international relations told us that about half of the institute's scientists and engineers who are involved in international collaboration, including the IPP program, spend part of their time working on nuclear weapons research activities.
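The head-count uncertainty described in this section starts with the estimating formula itself. The sketch below (a minimal illustration; the function name and the example figures are ours) implements the approach one laboratory used, dividing total contract value by the scientists' average monthly salary, and shows that the quotient is person-months of effort, which must be spread over an assumed contract duration to yield full-time equivalents and which verifies nothing about who actually worked:

```python
def estimate_staffing(contract_value, avg_monthly_salary, contract_months):
    """Estimate NIS project staffing the way one laboratory did: divide
    the total contract value by the scientists' average monthly salary.
    The quotient is person-months of effort; dividing by the contract's
    duration (our added step) converts it to full-time equivalents.
    Nothing in this calculation verifies how many people actually worked."""
    person_months = contract_value / avg_monthly_salary
    return person_months / contract_months

# Hypothetical example: a $60,000 contract, the $200 average monthly
# salary cited for Sarov, and an assumed 24-month period.
print(estimate_staffing(60_000, 200, 24))  # 12.5 "full-time equivalents"
```

The same contract value would imply twice as many scientists if the assumed salary were halved, which is one reason the laboratories' figures diverged and why, in one case, an institute reported about 50 participants on a project for which the laboratory's records showed none.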
For many of the projects we reviewed, the principal investigators did not know whether the NIS scientists and engineers were working on other projects while receiving IPP funds, but several speculated that they were quite possibly doing so. IPP program directors from Sandia, Los Alamos, and Argonne said their laboratories do not know how the NIS scientists are splitting their time among various institute activities. Laboratory officials speculated that it is very likely that the scientists could be working on various other projects, including their institute's weapons of mass destruction programs. Russian institute officials told us that in most cases, the scientists are working on the IPP projects part-time. They may also be involved in other collaborative projects with other countries and/or spending part of their time working on other projects at their institute. An official from Los Alamos National Laboratory told us that it would be unrealistic to think that Russian scientists receiving IPP funding were not also working on their own country's weapons program. According to DOE's program guidance, IPP projects must not, among other things, (1) include weapons and delivery system design activity and (2) provide assistance in the maintenance or improvement of military technology. Program officials said that since Russia's technology base has been developed in the weapons program and since the goal of the IPP program is the commercial development of these technologies, there is an inherently dual-use aspect of the program. Moreover, they said, many of the projects involve materials science, and any improvement in materials has inherent dual-use potential. According to program officials, no projects were undertaken that provided significant enhancements to the weapons of mass destruction capability of Russia or the other Newly Independent States. Discussions with principal investigators and other information indicated to us that nine of the nuclear-related projects we reviewed could have dual-use implications—that is, information learned during the course of the project could unintentionally provide useful defense-related benefits to Russian and other NIS scientists. These projects, all of which were approved from 1994 through 1996, include the following: One project involved ways to improve a protective coating material. The national laboratory principal investigator told us that Los Alamos is developing the coating and is paying a Russian institute to do some of the testing. The coating has both military and civilian applications and could be used to make aircraft bodies more resistant to corrosion. He noted that the Russians could obtain information to develop a similar material by analyzing the samples that Los Alamos has provided for testing. According to DOE headquarters officials, the Russian Federation already has aircraft utilizing this technology and therefore this project does not increase that country's defense capabilities. According to a DOE laboratory official, two IPP projects have focused on Russian electromagnetic absorbing materials technologies. According to DOE's information, this dual-use technology presents a proliferation risk. Among other things, this technology could reduce electromagnetic noise in airports, thereby improving flight safety. In addition to potential commercial applications, these projects were designed to assess the state of the technology to determine its validity for possible application to U.S. defense systems.
The projects have not gone beyond the Thrust 1 stage and were recently canceled for lack of commercial potential. IPP project funds have been used to enhance communications capabilities through high data rate electronic links among some of Russia's closed nuclear cities and DOE's national laboratories. While the project promotes better communications among the Russian nuclear institutions, it is possible that it could also indirectly support the collaboration of Russian weapons laboratories. Additional communications links are planned for other nuclear and biological facilities in Russia. DOE officials told us that the benefits of the project clearly outweigh any negative implications of dual-use. Los Alamos National Laboratory is funding two projects in Chelyabinsk, a closed nuclear city, to improve the durability and performance of metal. The principal investigator said the technology could be used, for example, to enhance the performance of both military and civilian aircraft engines. He noted that he had not given the possibility much thought but believed that the United States could benefit from the technological improvements as much as Russia. According to DOE headquarters officials, the development of aircraft engine components clearly has dual-use implications. They point out that this work is highly developmental and represents one of the true nonproliferation success stories. Furthermore, they added, any Newly Independent State wanting to obtain this state-of-the-art engine technology could easily buy it. The Los Alamos IPP program director told us that nothing in the IPP program threatens U.S. national security interests because the United States and Russia are basically equal in terms of nuclear weapons development. Therefore, there are no advantages that Russia could gain from the technology of U.S. origin used in the IPP program. DOE's director of Arms Control and Nonproliferation disagreed and told us the policy concerning U.S. technology related to the IPP program is clear. First and foremost, IPP projects are reviewed to ensure that they will "do no harm" to U.S. national security interests. He said that since he assumed his position in November 1997, all projects have been reviewed for any potential military applications. According to IPP program guidance, cooperative research in biological and chemical activities could be redirected to support a biological and/or chemical weapons program. The program's guidelines call for coordination with the departments of State and Defense to ensure that IPP projects will not support another nation's biological or chemical weapons knowledge base and that IPP funds are not provided to any NIS institute currently engaged in work on offensive biological or chemical weapons. Our review of 19 approved IPP chemical and biological projects (7 of which were part of our overall sample of projects) indicated that DOE's review process may be inadequate. According to DOE officials, all chemical and biological IPP projects are subject to reviews by several agencies, including the Department of State, the Department of Defense's Office of Cooperative Threat Reduction, the Department of the Army's Soldiers and Biological Chemical Command (Aberdeen, Maryland), and the U.S. Army Medical Research Institute of Infectious Diseases (Fort Detrick, Maryland).
However, for the 19 projects that had been approved as of July 31, 1998, there was not always sufficient evidence in IPP project files to determine whether the proposed projects had been reviewed by all of the agencies. Furthermore, the criteria for reviewing the projects are vague. We found no evidence in the IPP program files to indicate that 7 of the 19 projects had been reviewed by DOE program offices. External project reviews also appeared to be inconsistent and/or were not well documented. For example, we found that, of the 19 project files, 13 contained evidence of the State Department's review, none showed evidence of review by DOD's Office of Cooperative Threat Reduction, and 15 showed no evidence of review by other agencies. DOE does not provide specific criteria for reviewing the proposed chemical and biological projects. Rather, DOE forwards the projects with a cover letter asking reviewers to indicate whether the project (1) raises no concerns, (2) raises some concerns that can be dealt with through close oversight by the national laboratory's principal investigator, or (3) should not be done in its present form. Agency officials provided varying views on what criteria should be applied. Two officials said that projects should constitute "good science" but also noted that all proposed projects must be consistent with U.S. national security interests. The former special coordinator of DOD's Office of Cooperative Threat Reduction told us that her office reviews projects to identify areas of research that could be of interest to DOD. Officials from one or more of the agencies that provide or coordinate technical reviews of the chemical and biological projects told us that they (1) do not always have sufficient information about the projects, (2) are uncertain whether they receive all of the proposed projects, (3) do not always thoroughly review the projects they receive, and (4) do not know the overall outcomes of the project reviews. Reviewers from some agencies told us that many of the proposals they review contain limited information, making adequate evaluation difficult. The official from the U.S. Army's Medical Research Institute of Infectious Diseases, who is responsible for reviewing biological projects, said his review is informal and superficial. The review is intended primarily to (1) determine that the projects are not being duplicated by other U.S. government agencies and (2) identify promising projects that might be more appropriately funded by other agencies. He assumed that the proposals received a more rigorous review at the IPP program office. An official from the Army's Soldiers and Biological Chemical Command noted that IPP projects are also reviewed informally. The Command began reviewing IPP proposals in late 1997 and focuses on whether a project is based on good science. The official also said (1) it is uncertain whether the Command is seeing all of the projects, since it evaluates only project proposals forwarded by DOE, and (2) there is no well-established mechanism to find out which projects are approved or rejected. The Command expected, however, that DOE would reject any proposals to which serious objections were raised. Officials from DOD's Office of Cooperative Threat Reduction told us that the IPP review process is ad hoc and it is unclear how DOD's review fits in with other U.S. government reviews. These officials were uncertain how many projects they had reviewed but thought it was only a few. We found that some reviewers had raised objections to projects.
For example, the Soldiers and Biological Chemical Command raised concerns about two projects, one of which focused on the destruction of toxic material by means of ballistic missile rocket engines. DOD also objected to this project. Ultimately, the project was not approved, primarily because it lacked technical merit and commercial potential. National security considerations also entered into the disapproval. Additionally, the Command raised concerns about another project that dealt with cholesterol esterase activators. According to the Command's evaluation, the proposed work could be approved, but there were concerns because it had the potential to provide information that could be applied to enhance the effects of nerve agents on the nervous system. According to an IPP program official, the project was further scrutinized and found to have only peaceful applications. The Command researcher who raised objections to the project was never informed of its final disposition. IPP program officials told us that despite what the documentation in the project files showed, project proposals were routinely being sent to the relevant federal agencies for review. IPP officials responsible for coordinating the reviews of the chemical and biological projects said they give reviewers a chance to provide input before decisions are made, but not all agencies are involved on a consistent basis. For example, IPP program officials were uncertain about the process for distributing project proposals and obtaining comments from DOD's Office of Cooperative Threat Reduction. An IPP official told us that the State Department was responsible for disseminating the proposals to DOD through an interagency mechanism. A State Department official said this information was not correct. DOE does, however, rely on the State Department to facilitate other U.S. government agencies' reviews of proposed IPP chemical and biological projects through the interagency mechanism. A State Department official said that this process, which has been in place for about a year, works well and that the results of the reviews are provided to DOE. According to program officials, as a result of our review, project proposals are now being sent directly to the Cooperative Threat Reduction office for review. In September 1998, the United States and Russia embarked on an ambitious effort, known as the Nuclear Cities Initiative, to expand commercial cooperation in Russia's 10 nuclear cities. The two governments signed an agreement to facilitate the provision of new civilian jobs for workers in those locations. The Nuclear Cities Initiative will complement the IPP program in that its purpose is also to create jobs in the civilian sector for displaced weapons scientists. Whereas IPP is focused on four countries, the initiative will focus only on Russia's 10 nuclear cities. Some IPP projects will furnish the initial assistance under the initiative, but the initiative is envisioned as a more ambitious commercialization effort for such cities than the IPP program or any other assistance program. DOE estimates that the Nuclear Cities Initiative may cost $600 million during the next 5 years, with the initial funding set at $15 to $20 million for fiscal year 1999. On December 10, 1998, DOE submitted a report to the Congress describing the objectives of the Nuclear Cities Initiative. U.S. embassy officials in Moscow have questioned large funding commitments to the nuclear cities at this time.
According to these officials, promoting investment in nuclear cities has poor short-term prospects because of Russia's current economic situation and the difficulties it poses to achieving commercial success in these isolated locations. The former Soviet Union concentrated most of its nuclear weapons program at 10 cities, shown in figure 4.1, that were so secret they did not appear on any publicly available maps until 1992. The 10 nuclear cities were among the most secret facilities in the former Soviet Union. Behind their walls, thousands of scientists and engineers labored on the design, assembly, and production of the Soviet nuclear arsenal. Today, the cities remain high-security areas, and access to them is limited. The 10 cities and their roles in developing nuclear weapons are shown in table 4.1. The IPP program has provided funds to various kinds of institutes with nuclear and other disciplines throughout Russia, including many in Moscow, St. Petersburg, and the nuclear cities. However, the Nuclear Cities Initiative will provide assistance only to Russia's 10 nuclear cities. In addition, unlike the IPP program, the Nuclear Cities Initiative is based on a government-to-government agreement rather than on agreements between U.S. and Russian laboratories and institutes. The program is an outgrowth of a meeting between the Vice President of the United States and the Prime Minister of Russia at the Tenth Session of the United States-Russian Federation Commission for Economic and Technical Cooperation in March 1998. After additional meetings between high-ranking officials, the U.S. Secretary of Energy and Russia's Minister of Atomic Energy signed an agreement on September 22, 1998. The purpose of the agreement is to facilitate the provision of new civilian jobs for Russian workers in the nuclear complex, which is controlled by the Ministry of the Russian Federation for Atomic Energy (MINATOM). Russian officials have identified a need to create 30,000 to 50,000 new jobs in these cities. According to DOE, the Nuclear Cities Initiative will create jobs faster than the IPP program. It will include the redirection of skills not only in the high-technology arena, as is being done in the IPP program, but also in the service, information, education, and small business sectors. Unlike the IPP program, the Nuclear Cities Initiative has a social component involving other federal agencies, such as the Agency for International Development and the Department of Commerce, to build good will in the scientific and general communities within these cities. The initiative will provide, among other things, support systems for depression, women's rights, language training, and job retraining. Furthermore, unlike the IPP program, which is driven by DOE's national laboratories, DOE expects that the initiative will have working groups comprising not only scientists but also business and community leaders. DOE expects that the role of DOE's national laboratories will be reduced as the initiative evolves. According to DOE, the Nuclear Cities Initiative will draw on the experience of the United States in restructuring the former nuclear weapons laboratories and production complexes. DOE will share the experience in restructuring that has occurred at U.S. nuclear sites such as Hanford, Washington, and Oak Ridge, Tennessee, and will provide business training and support for development at nuclear cities and institutes in Russia affected by downsizing. The U.S.
technical assistance will include training in business planning, methods to attract business to the area, and ways to get new businesses started. According to DOE's report to the Congress on the program, the goals of the initiative are to assist the Russian Federation in reducing the size of its nuclear weapons establishment to correspond with its post-Cold War budget realities and smaller nuclear arsenal and to promote nonproliferation goals by redirecting the work of nuclear weapons scientists, engineers, and technicians in the 10 Russian nuclear cities to alternative scientific or commercial activities. In its report to the Congress, DOE said the program serves U.S. national security objectives by assisting the Russian Federation in reducing its nuclear weapons establishment, which is still significantly larger than that of the United States; facilitating the transition of Russian scientists, engineers, technicians, and other specialists from weapons development or production to civilian work, thereby deterring the transmission of weapons knowledge to criminal elements, rogue states, or other undesirable customers; extending into the 10 nuclear cities U.S. efforts to assist Russian science in moving from weapons development to civilian uses; and helping to promote stability in Russia at a time when that country is undergoing extreme financial and political crisis. The program has other benefits, too, according to the DOE report, such as making the benefits of Russian science available to U.S. commercial enterprises, leveraging and developing existing success in bilateral and multilateral "brain drain" programs to advance Russia's new goal of downsizing its nuclear weapons complex, and providing new understanding of the conditions in the nuclear cities. The agreement lists several cooperative activities. One such activity is developing entrepreneurial skills in employees displaced from enterprises of the nuclear complex, training them to write business plans, and facilitating the development of such plans. Other possible activities include facilitating the creation of conditions necessary for attracting investment in the nuclear cities to implement the projects within the framework of the agreement; the search for investors for production diversification projects, market analysis, and the marketing of products and services resulting from the implementation of those projects; and access to existing investment mechanisms, including investment funds. As a first step, DOE sent two working group missions, including members of the scientific, business, and financial communities, to Russia. DOE plans to send a third mission later this year. The initiative will start in three cities—(1) Sarov, formerly Arzamas-16, (2) Snezhinsk, formerly Chelyabinsk-70, and (3) Zheleznogorsk, formerly Krasnoyarsk-26—and expand later. DOE's report to the Congress said it is critical that projects be selected, reviewed, and launched expeditiously because of the financial crisis in Russia. The report also outlines the objectives of the Nuclear Cities Initiative and provides milestones or goals for fiscal years 1999 and 2000. Program milestones for fiscal year 1999 include developing a strategic program plan, budgetary needs, methods to track program implementation, program guidance and management policies and procedures, program success measurements, workshops based on lessons learned from U.S.
nuclear weapons downsizing and military base closure experiences, briefings for industry and nongovernmental organizations interested in the commercialization centers or high technology incubators to develop new businesses, and a first year's progress report on the program. In the second year, according to DOE's report, DOE expects that the program will expand to additional cities. The director of the IPP program, who is also the director of the Nuclear Cities Initiative, said that the new program will not replace the IPP program's efforts for several reasons. First, the IPP program will provide the initial projects for the Nuclear Cities Initiative. (See app. VI for a list of IPP projects scheduled to become part of the initiative.) Second, the IPP program will continue at other locations throughout the NIS, as well as the nuclear cities. Third, IPP projects will continue to give DOE lab personnel access to scientific institutes in the nuclear cities. By contrast, the Nuclear Cities Initiative is limited to a certain geographic region of each city and does not include the weapons institutes. According to the Director of the Nuclear Cities Initiative, the new initiative will provide access only to the municipal area, or civilian core, of the city, which may be surrounded by a fence. Beyond the perimeter of the municipal area are various secret nuclear institutes or technical areas that will remain off limits to U.S. personnel involved with the Nuclear Cities Initiative. According to the director, DOE is hoping that the initiative will provide new commercial opportunities in the city that will not necessarily have a scientific and research focus, as IPP projects do. The intent is that this new source of employment will serve individuals who are working or have worked in the weapons laboratories. Examples of projects proposed for the Nuclear Cities Initiative include a business copy center, a nonalcoholic brewery, a confectionery, an automobile or pharmaceutical plant, a software development company, and a telecommunications project. DOE officials suggested that if commercial efforts are successful, not only those employed in weapons manufacturing but also their relatives and friends will remain in the city, and there will be less reason for weapons scientists, technicians, and engineers to leave the area. Also, according to the director, individuals working in the more secret technical areas may become involved with commercial enterprises in a municipal area by working in the municipal area part-time or eventually full-time. According to the director, the State Department is also considering including some ISTC projects in the Nuclear Cities Initiative. Other federal agencies, such as the Department of Defense or the Department of Commerce, may also provide assistance because the Nuclear Cities Initiative is considered more of an interagency effort than the IPP program. DOE will also coordinate with nongovernmental and commercial organizations. Since the initiative draws on the experience of the United States in restructuring its former nuclear weapons laboratories and production complexes, most of the federal funding will be appropriated to DOE. The DOE laboratories are expected to play a role in facilitating relationships, identifying projects, and helping bring projects to commercial fruition. While DOE expects to receive $15 million to $20 million for the initiative for fiscal year 1999, the director said that the total funding could reach up to $600 million in 5 years.
In addition, DOE would like to receive funds from other sources, including U.S. industry and venture capitalists, but the program director said that the initiative may be a U.S. assistance program in the first years because of current economic conditions in Russia and its vast needs. Unlike the IPP program, the initiative is intended to be a shared program, as the Russian Federation has maintained from the outset. According to the DOE director, the Russians said at one point that they would provide a total of about $30 million. DOE officials recognize that such funding from Russia is uncertain because of that country’s current economic conditions. According to DOE officials, any Russian government assistance may be in the form of buildings, equipment, and other in-kind services. Also, the DOE director said that the Russians may consider revenue from the sale of highly enriched uranium to the United States as a possible source of funds for the Nuclear Cities Initiative. In October 1998, U.S. embassy officials in Moscow raised concerns about the challenges facing the Nuclear Cities Initiative, particularly in the context of Russia’s economic deterioration. With the devaluation of the ruble in August 1998 and the partial government default, developing a U.S. program to assist in commercializing the nuclear cities will require adjustment. U.S. officials said that the outlook for foreign investment, whether from Western companies or international financial institutions, is not favorable in the short and medium term. According to embassy officials, the initial concept of the initiative was to increase investment opportunities and promote technological commercialization in the nuclear cities. Three major components of the initiative are (1) training, (2) refocusing the existing IPP program, and (3) facilitating access for multilateral lending institutions and private capital markets. The officials said the strategy was on target in mid-1998, but with the changes in the economic and political landscape, “the reality is that a program based primarily on promoting investment in Russia’s closed cities has very poor short-term prospects and needs a bridging strategy until the situation improves.” According to these officials, one important element in planning the initiative has been the assumption that Russian banks would support projects by providing small to medium-sized loans. However, the entire Russian banking system has collapsed, and there is no indication the situation will return to normal in the short term. The ability of Russian banks to support job creation in the nuclear cities by creating lending opportunities and investing has thus been severely curtailed. A number of banks are in financial difficulty and will likely not survive without a government bailout. U.S. officials have cautioned that “care should be taken in transferring funds to any project in Russia lest the money be swallowed up in a bankrupt financial institution.” U.S. officials also referred to problems with the Russian tax structure. “Tax and customs problems have been especially detrimental to U.S. assistance programs and could be another casualty of Russia’s dysfunctional tax structure” if the Russian government does not make improvements. Another concern is limited access to the nuclear cities. Without sufficient access, accountability, and transparency, there is a danger that the assistance will never go to the targeted areas. 
Access problems may continue because Russia’s Federal Security Bureau may view this program as an intelligence-gathering effort. Officials from Sarov’s All-Russian Scientific Research Institute of Experimental Physics told us that the Nuclear Cities Initiative can help, but it will be difficult to attract commercial partners to a city located behind a fence. The city has been isolated for over 40 years and it is not practical to think that conditions can be changed overnight; transition must occur on a step-by-step basis. Still another challenge to implementing the initiative is the limit on intellectual property rights accorded to Russian researchers, according to DOE officials. As the IPP program is structured, the United States has worldwide intellectual property rights except in the NIS; however, the Russian collaborators may find their intellectual property rights to be of dubious value in a country that does not have the entrepreneurial capital to commercialize their ideas. Therefore, if the Russian intellectual property rights under the Nuclear Cities Initiative are also limited to the NIS, they may not be considered very valuable. According to U.S. embassy officials, the banking issues, the poor prospects for foreign investment, the taxes on U.S. assistance, the potential restrictions on access to the nuclear cities, and concerns about intellectual property rights are some of the reasons that the program should be redirected in the short term from promoting investment to establishing the building blocks to attract financial resources when the Russian economy stabilizes. They recommended that more immediate aid include working with Russians on developing business plans, providing leadership training, and working with local and regional governments to improve the business environment. DOE’s effort to supplement the salaries of former weapons scientists so that they do not sell their services to terrorists, criminal organizations, or countries of proliferation concern is laudable and, we believe, in our national security interests. However, we have concerns about the implementation and oversight of the IPP program. The program appears to be at a crossroads, requiring DOE to determine whether it will simply provide short-term financial assistance or will serve the longer-term nonproliferation goal of directing former weapons scientists into sustainable commercial activities. The program’s long-term goal presents a much more difficult challenge than providing short-term assistance. Furthermore, given the economic situation in Russia, this goal may never be realized for the majority of IPP projects. As we noted earlier, over 80 percent of IPP projects are still in the Thrust 1 stage. While the program has needed—and benefited from—the support provided by DOE’s national laboratories, we believe that it is time to reassess the laboratories’ future role, particularly if the focus of the program is to commercialize projects and thereby provide for the long-term employment of NIS weapons scientists. While the national laboratories possess technical skills and have made great strides in helping to “open up” NIS institutes, they have, by their own admission, limited expertise in commercial market activities. In addition, the high proportion of funding—about 63 percent—going to the U.S. national laboratories and to support U.S. industry’s participation in the program does not seem consistent with the program’s goal of supplementing the salaries of NIS former weapons scientists. 
The IPP program has established hundreds of projects at many institutes throughout the NIS. It is uncertain, however, to what extent IPP funds have focused on the most critical scientific institutes and targeted the most important weapons scientists. Our review showed that the national laboratory officials who monitor the projects were frequently uncertain about the number of weapons scientists employed and their background. In fact, some of the institutes we visited did not work on weapons of mass destruction or have any clear defense orientation. We believe that program officials could conduct a more thorough review of these institutes to better ensure that program funds are being focused on the most important facilities and personnel. In addition, more careful monitoring of funds disbursed to Russian and other NIS institutes would ensure greater accountability for these funds. Furthermore, IPP’s program guidance is unclear as to whether assistance should focus on previously employed weapons scientists and/or scientists currently working on weapons programs. As a result, U.S. funds are supplementing the salaries of scientists working on Russia’s weapons of mass destruction programs. Ensuring that IPP projects are consistent with U.S. national security interests is essential to safeguarding sensitive technologies. Some of the projects related to weapons, particularly the chemical and biological projects, could have dual-use implications. Although the projects were reviewed by U.S. government officials, the emphasis of their reviews appeared to be to ensure that they were “good science.” Furthermore, some IPP chemical and biological projects were apparently given cursory reviews by some key reviewing officials. More rigorous and systematic reviews of all IPP projects would provide greater assurance that U.S. national security concerns are being carefully considered. The IPP program has not demonstrated significant progress toward its longer-term nonproliferation goal of directing NIS weapons scientists from defense work to self-sustaining commercial employment. This goal would be difficult to achieve under any circumstances but is made more difficult by the deteriorating economic conditions in Russia. The program has evolved into a longer-term effort than was initially envisioned, and it is unclear when the program is scheduled to end. While DOE has claimed from the outset that the program has an exit strategy, or end point, it is unclear how that strategy is being implemented. DOE officials provided differing time frames for phasing out the program, and measures of the program’s success are lacking. Given the unique nature of the program, a strategic plan is needed that, to the extent possible, links its goals, costs, performance measures, and time frames. Program officials told us that they are finalizing such a plan. Successfully implementing the Nuclear Cities Initiative, a major economic development effort, is a daunting challenge considering the dire economic conditions in Russia, including the all but complete collapse of its banking system. The 10 nuclear cities are in remote locations and access to them is restricted. Attracting investors to these locations and finding customers to purchase whatever products or services are produced will prove to be major challenges. 
Given these problems and the limited commercial success evidenced in the IPP program, we believe that the Nuclear Cities Initiative is likely to be a subsidy program for many years, rather than a stimulus for economic development. In addition, we question whether DOE possesses the expertise needed to develop market-based economies in a formerly closed society. At a minimum, DOE will have to work in partnership with other federal and international economic development agencies and private industry. Furthermore, DOE’s initial estimate of the program’s costs—$600 million over 5 years—may be just a down payment on a financially larger and longer-term program. To maximize the impact of the Initiatives for Proliferation Prevention program’s funding and improve DOE’s oversight of the program, we recommend that the Secretary of Energy
- reexamine the role and costs of the national laboratories’ involvement with a view toward maximizing the amount of program funds going to the NIS institutes;
- obtain information on how program funds are being spent by the NIS institutes;
- seek assurances from the Russian government, either through a government-to-government agreement or through other means, that program funds are exempt from Russian taxes;
- require that program officials, to the extent possible, obtain accurate data on the number and background of the scientists participating in program projects and eliminate funding for institutes that did not formerly work on weapons of mass destruction;
- clarify program guidance as to whether scientists currently employed in weapons of mass destruction programs are eligible for program funding;
- require that project reviewers consider all military applications of projects to ensure that useful defense-related information is not unintentionally transferred; and
- strengthen and formalize DOE’s process for reviewing proposed chemical and biological projects by (1) providing complete project information to all reviewing U.S. government agencies and organizations, (2) developing criteria to help frame the evaluation process, and (3) providing feedback to all of the reviewing agencies about the final disposition of the projects.
In addition, given that one of the purposes of the program is to sustain the employment of weapons scientists through projects that can be commercialized, we recommend that the Secretary
- reevaluate the large number of Thrust 1 projects, particularly those that have been funded for several years, and eliminate those that do not have commercial potential; and
- develop criteria and time frames for determining when Thrust 1 projects should be terminated if they do not meet the criteria for graduation to the program’s next phase.
Because DOE plans to implement the Nuclear Cities Initiative in a relatively short amount of time (5 years) at a cost of about $600 million during uncertain economic times in Russia, we believe it is critical that the program’s implementation be based on solid thinking and planning that considers the problems experienced under the IPP program. 
Therefore, we recommend that the Secretary
- develop a strategic plan for the initiative before large-scale funding begins and include goals, costs, time frames, performance measures, and expected outcomes, such as the number of jobs to be created for each city; and
- not expand the initiative beyond the three nuclear cities until DOE has demonstrated that its efforts are achieving the program’s objectives, that is, that jobs are being created in the civilian sector for displaced weapons scientists, engineers, and technicians.
The Department of Energy, in commenting on a draft of this report, concurred with the report’s findings and recommendations and said that our evaluation will assist the Department in significantly strengthening the program. The Department provided clarifying comments on three issues raised in the report, including (1) the dual-use potential of some projects, (2) the provision of program funding to Russian weapons scientists currently working on their own nuclear weapons programs, and (3) the lack of progress in commercializing program projects. The Department agreed with our recommendations on these issues, and its comments are presented in appendix VII. The Department also provided technical comments that were incorporated into the report as appropriate. Regarding the Initiatives for Proliferation Prevention program, the Department stated that, among other actions responding to our recommendations, it will (1) examine the role of the national laboratories, (2) work with the State Department to develop an agreement with Russia to exempt program funds from Russian taxes, (3) instruct program officials to obtain data on the number and background of Newly Independent State scientists in the program, and (4) reevaluate the large number of projects to eliminate those without commercial potential. Regarding our recommendations related to the Nuclear Cities Initiative, the Department said that it will publish a strategic plan within 90 days. The Department also concurred with our recommendation that it not expand the initiative beyond the first three nuclear cities until the initiative demonstrates that jobs are being created in the civilian sector for unemployed weapons scientists. However, the Department stated that it did not want to preclude the possibility of reducing weapons-related activities through the initiative in another nuclear city if the opportunity arises.
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) efforts to create jobs for displaced former Soviet Union scientists through its Initiatives for Proliferation Prevention program, focusing on: (1) the costs to implement the program for fiscal years 1994-98, including the amount of funds received by weapons scientists and institutes; (2) the extent to which the program's projects are meeting their nonproliferation and commercialization objectives; and (3) DOE's Nuclear Cities Initiative. GAO noted that: (1) the costs to implement the Initiatives for Proliferation Prevention program from fiscal year 1994 through June 1998 are as follows: (a) of the $63.5 million spent, $23.7 million, or 37 percent, went to scientific institutes in the Newly Independent States (NIS); (b) the amount of money that reached the scientists at the institutes is unknown because the institutes' overhead charges, taxes, and other fees reduced the amount of money available to pay the scientists; and (c) about 63 percent, or $39.8 million, of the program's funds was spent in the United States, mostly by DOE's national laboratories in implementing and providing oversight of the program; (2) regarding the extent to which the program is meeting its nonproliferation and commercialization goals, GAO found that: (a) the program has been successful in employing weapons scientists through research and development projects, but it has not achieved its broader nonproliferation goal of long-term employment through the commercialization of these projects; (b) program officials do not always know how many scientists are receiving program funding or whether the key scientists and institutes are being targeted; (c) some scientists currently working on Russia's weapons of mass destruction program are receiving program funds; (d) some dual-use projects may have unintentionally provided defense-related information--an outcome that could negatively affect U.S. national security interests; and (e) chemical and biological projects may not be adequately reviewed by U.S. officials prior to approval; and (3) the Nuclear Cities Initiative may cost $600 million over the next 5 years: (a) the initiative is still largely in a conceptual phase, and it is uncertain how jobs will be created in the 10 nuclear cities because of restricted access and the current financial crisis in Russia; and (b) the initiative is likely to be a subsidy program for Russia for many years, given the lack of commercial success in the Initiatives for Proliferation Prevention program.
The best method known to reduce breast cancer mortality is early detection. Detection of breast cancer is accomplished through a combination of self-examination, physical examination by a physician, and mammography. Of these methods, mammography is the single most effective tool for detection of early-stage breast cancer. The use of mammography as a tool for detecting early or potential breast cancer continues to increase. The proportion of women aged 50 and older who had received mammograms in the previous year increased from 26 percent in 1987 to 54 percent in 1993, according to the Centers for Disease Control and Prevention. Since 1992, at least 23 million mammograms have been performed in the United States annually. The consequences of substandard mammograms can be very serious. If the image shows an abnormality when none exists, a woman may go through unnecessary and costly follow-up procedures, such as ultrasound or biopsies. If the image is too poor to show an abnormality that is actually present, a woman may lose the chance to stop the cancer’s spread. To help ensure the quality of images and their interpretation, the Mammography Quality Standards Act of 1992 (MQSA) required the Food and Drug Administration (FDA) to implement both an accreditation and an inspection process for mammography facilities. For the accreditation process, FDA established standards that included requirements for personnel qualifications, equipment performance, and quality assurance recordkeeping. These standards were based on those used by the American College of Radiology (ACR), a private, nonprofit professional association of radiologists, and have been endorsed by industry and government experts. As of July 1996, almost 10,000 facilities had been accredited and had received an FDA certificate to that effect. MQSA inspection authority provides FDA with another means to ensure that facilities comply with standards on a day-to-day operating basis. While for the vast majority of facilities accreditation application and review are accomplished through the mail, all inspections are conducted on site. During an inspection, MQSA inspectors conduct various equipment tests and review the facility’s records on personnel qualifications, quality controls, and quality assurance as well as mammography reports. FDA, which has contracted with virtually all states and territories to conduct inspections, began its first annual inspections of the nation’s mammography facilities in January 1995. It established an extensive program for training inspectors, and as of April 1996, about 220 state and FDA personnel had become certified to perform MQSA inspections. The majority of the personnel chosen to become MQSA inspectors had 5 or more years of prior experience in radiological health. FDA uses its own inspectors to conduct follow-up inspections, monitor the performance of state inspectors, and conduct inspections in states that either did not contract with FDA or lacked enough FDA-certified inspectors to do all the inspections. FDA’s field offices are responsible for following up on inspection violations and enforcing facility compliance. For the most serious violations, FDA’s field offices issue a warning letter informing the facility of the seriousness of the violation. The facility must begin correcting its problem immediately and report the corrective action taken in writing to FDA within 15 work days of receipt of the letter. In some cases, FDA conducts a follow-up inspection of the facility to ensure that the problem is corrected. 
If the facility fails to correct a problem, FDA can take other enforcement actions, such as imposing a Directed Plan of Correction; assessing a civil penalty of up to $10,000 per day or per failure; or suspending or revoking a facility’s FDA certificate, which prevents a facility from operating lawfully. First-year inspections of mammography facilities showed that a significant number of facilities were not in full compliance with mammography standards. So far, second-year inspections have shown a considerable reduction in the proportion of facilities cited for violations—an indication that the inspection process is having positive results. However, inspection results vary considerably from state to state. It is not clear how much these differences reflect actual differences in the levels of quality in mammography facilities and how much they reflect varying approaches to conducting inspections and reporting the results. To gain a true picture of the full effect of the inspection process, more consistent reporting of violations is needed. FDA’s automated inspection database contained first-year inspection results for 9,186 facilities as of June 20, 1996. Of these, 6,177 showed one or more violations of the standards. As table 1 shows, 1,849 facilities (or 20 percent) had violations that were serious enough to require the facility to provide FDA with a formal response as to the corrective actions taken. Of these, 214 facilities had violations that ranked in the most serious (or “level 1”) category, requiring FDA to send the facility a warning letter. The most serious violations found in these inspections were mainly personnel related: 88 percent of the level 1 violations were for personnel who did not fully meet FDA’s qualification standards (see app. I for a further breakdown of the types of level 1 violations). Level 2 violations involved a greater mix of personnel-related and equipment-related problems, and the majority of level 3 violations involved missing or incomplete quality assurance records and test results as well as medical physicist survey problems. By June 20, 1996, FDA’s database contained the results of 1,503 second-year inspections. We compared the results of first-year and second-year inspections for these 1,503 facilities and found a substantial decrease in all three categories in the proportion of facilities cited for violations (see fig. 1). Another measure of facilities’ improvement in compliance is the extent of repeat violations, that is, violations identified in the first year’s inspection that are identified again when the facility is reinspected the following year. Facilities were less likely to repeat the more severe violations than the minor ones. More specifically, our analysis of the 1,503 facilities showed the following:
- None of the 50 facilities whose highest level of violation was at the level 1 category during the first-year inspection repeated one or more of the same violations in the second inspection.
- Six percent of the 345 facilities whose highest level of violation was at the level 2 category during the first-year inspection repeated one or more of the same violations in the second inspection.
- Twelve percent of the 669 facilities whose highest level of violation was at the level 3 category during the first-year inspection repeated one or more of the same violations in the second inspection. 
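To make the repeat-violation comparison concrete, here is a minimal sketch, in Python, of the kind of cross-year tally described above. The record layout, facility identifiers, and violation codes are hypothetical illustrations, not FDA's actual inspection database schema.

```python
# Minimal sketch of a cross-year repeat-violation tally. The record
# layout, facility IDs, and violation codes are hypothetical and do
# not reflect FDA's actual inspection database.

first_year = {
    "FAC-001": {"codes": {"P01", "Q07"}, "worst_level": 2},
    "FAC-002": {"codes": {"Q03"}, "worst_level": 3},
    "FAC-003": {"codes": {"Q03", "Q09"}, "worst_level": 3},
}
second_year = {
    "FAC-001": {"codes": set()},    # no violations on reinspection
    "FAC-002": {"codes": {"Q03"}},  # repeats a first-year violation
    "FAC-003": {"codes": {"P02"}},  # a new violation, not a repeat
}

def repeat_rates(year1, year2):
    """For facilities inspected in both years, return the share whose
    second-year inspection repeated at least one first-year violation,
    grouped by the level of the facility's worst first-year violation."""
    repeats, totals = {}, {}
    for facility, record in year1.items():
        if facility not in year2:
            continue  # facility not yet reinspected
        level = record["worst_level"]
        totals[level] = totals.get(level, 0) + 1
        if record["codes"] & year2[facility]["codes"]:
            repeats[level] = repeats.get(level, 0) + 1
    return {level: repeats.get(level, 0) / totals[level] for level in sorted(totals)}

print(repeat_rates(first_year, second_year))
# {2: 0.0, 3: 0.5}: the level 2 facility repeated nothing, and one of
# the two level 3 facilities repeated a first-year violation
```

Applied to the figures above, this kind of tally yields repeat rates of 0.00, 0.06, and 0.12 for facilities whose worst first-year violations were at levels 1, 2, and 3, respectively.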
Our analysis of inspection results showed considerable state-by-state variation in the degree to which facilities were cited for violations of MQSA standards. For example, 14 states cited no facilities for level 1 violations, while 6 states cited 5 to 12 percent of the facilities inspected for level 1 violations (see app. II for state-by-state results). We were unable to determine the reason for these differences. It may be, for example, that facilities in low-violation states really were much better at complying with standards than facilities in high-violation states. Alternatively, the differences may have been related to variations in the way inspectors conducted their inspections. In the eight states in which we observed inspections, we saw several differences in inspection practices that affected the number of violations reported. The two main differences follow. First, inspectors’ adherence to time limits for resolving problems of missing documents was inconsistent. FDA’s current procedures allow inspectors to delay submitting their inspection reports for 5 to 30 days in order to resolve problems of missing documents. This delay is intended to avoid citing facilities for not having certain records available on site. For example, when a facility claims that its personnel meet MQSA qualification requirements but does not have the required documentation at hand, FDA guidelines instruct inspectors to either delay the transmission of the inspection report or note the “claimed items” in the inspection record. These open items are to be resolved within 30 days, at which time the inspection report is to be finalized. However, we found hundreds of cases in the inspection report database that contained open items longer than 30 days—many for over 6 months. Several inspectors we interviewed said they were not aware of the 30-day limit for resolving pending items. On the other hand, inspectors in two states we visited said they would not wait more than 5 days under any circumstances before submitting a report that a facility was in violation. Thus, a facility in one state might be reported as being in violation, while a facility with the same problem in another state would not. These differences may have resulted in inconsistent reporting of violations; moreover, these inconsistencies make it difficult to determine the full effect of the inspection process. Second, while FDA’s policy is to cite facilities for all violations even if problems are corrected on the spot, we found that inspectors do not always adhere to this policy. For example, we observed that an inspector did not cite a facility that failed its darkroom fog test—normally a level 2 violation—because the facility immediately corrected the problem. Further, FDA’s procedures instruct inspectors to note on-the-spot corrections in the “remarks” section of the inspection software. We observed two inspections on site that involved on-the-spot corrections, but we did not see these inspectors documenting them in the remarks section. We do not question the merit of giving inspectors time to resolve such problems as missing documents or giving facilities opportunities to correct their problems immediately. However, not documenting violations consistently creates problems in forming an accurate picture of what the inspection process is accomplishing. FDA officials told us that they had begun a program in February 1996 to review inspector performance and that, as of October 31, 1996, 65 percent of all inspectors had been audited. 
FDA officials expect that, when fully implemented, the audit program will help ensure that policies are consistently applied. We agree that the audit program will help identify some inconsistent inspection practices; however, we believe the inspection results should also be monitored to ensure that open items are resolved in a timely manner and that on-the-spot corrections are identified. Although many factors can affect the quality of mammography images, one key factor is the condition of mammography equipment. We identified a need for FDA to clarify the procedures it requires for a major equipment test that evaluates image quality and to follow up when test results suggest problems with the quality of the images being produced. One of the most important aspects of the inspection process is testing mammography equipment by evaluating what is called a “phantom image.” In this procedure, the inspector uses the facility’s mammography equipment to take an X-ray image of a plastic block containing 16 test objects. This block is X-rayed as though it were a breast to determine how many of the test objects can be seen on the image. The inspector evaluates aspects of the performance of the facility’s imaging system by scoring the number of objects that can be seen. We found two questions that need to be answered with regard to evaluating phantom images. What is the impact of inconsistent phantom image scoring? FDA’s current inspection procedures instruct inspectors to score the phantom images under viewing conditions at the facilities. However, differences in inspectors’ experience and in facilities’ viewing conditions may influence the phantom image scores. For greater uniformity in scoring the images, two states we visited go beyond FDA’s standards by having their inspectors score phantom images using standardized viewing conditions (that is, away from the facility), having two or more persons read the images to ensure more consistent scoring, or both. FDA officials told us that the impact of these variations in procedure on the accuracy of image evaluation is unknown and that they are studying the problem. How should large image receptors be evaluated? FDA procedures currently require that phantom images be checked using the receptor that is more commonly used by the facility. Since facilities use small image receptors for most mammograms, these receptors are typically tested during an inspection. Although facilities may use large image receptors for some women, FDA does not require that the large image receptor be tested and does not have specific criteria for evaluating the phantom images of the large receptor. Inconsistent phantom image scoring and lack of standards for evaluating large image receptors can affect inspection results, as can be seen in the example of a 1995 inspection of a large mobile mammography facility headquartered in North Carolina and operating in five states. The facility is reported to perform over 20,000 mammograms a year. A state inspector cited the facility for multiple problems based on the viewing conditions at the facility and images from the small receptor. Although it was not required by FDA, the inspector also evaluated the phantom images from the large image receptor and noted in the remarks section of the inspection report that, for three of four mammography units, these images did not pass the review. 
An FDA inspector conducted a follow-up inspection, also using the viewing conditions at the facility and images from both the small and large image receptors. This inspector cited the facility for many violations related to both the small and large image receptors. Finally, four reviewers at FDA headquarters examined these same images away from the facility and together found fewer violations related to both the small and large image receptors than the state inspector and the FDA inspector had found. The reviewers, however, did confirm the serious violations related to the large image receptor that were found by the state inspector and the FDA inspector. Although this facility was cited for serious violations related to the large image receptor as a result of the follow-up inspection, FDA officials told us that, because of the lack of inspection criteria, imposing strong sanctions on the basis of phantom image failures from the large receptor could prove problematic. According to FDA, standards for testing the large receptor have not yet been developed because the technical issues relating to the receptor have not yet been resolved by the scientific and medical community. We discussed this case with senior FDA officials, who said that they plan to both provide additional training and guidance to minimize the variability in phantom image scoring and study the development of standards for evaluating images from the large image receptor. Another issue raised by the inspections of the facility discussed above is how to proceed if the phantom image test suggests serious problems with image quality. FDA views phantom image failures as early indications of potential problems deserving further investigation. FDA’s procedures allow facilities with serious phantom image failures to continue performing mammograms while FDA investigates and the facility corrects problems. During the course of our work, we heard varying opinions on the risk of allowing facilities with serious phantom image failures to continue doing mammograms. Some people we spoke with believe the risk of patients’ getting poor mammograms from facilities with serious phantom image failures is high enough that the facilities should not be allowed to do any mammograms until their problems are corrected and those corrections are verified by a reinspection. Several states, including California, Illinois, and Michigan, have rules empowering inspectors to immediately stop facilities with level 1 phantom image failures from doing additional mammograms. However, others (including FDA officials in charge of the MQSA program) do not believe that such drastic action should be taken on the basis of phantom image test results alone. They assert that phantom image failures are an indicator of possible image system problems but are not conclusive evidence that actual mammograms are faulty. At the time of our review, FDA did not have a follow-up system in place for reviewing the actual mammograms (called “clinical images”) of facilities with serious phantom image violations to ensure that they were not producing poor mammograms. However, in the case of the mobile facility discussed earlier, FDA did ask ACR to conduct two reviews of the clinical images produced by the facility because of image quality concerns. The more comprehensive review was conducted in July 1996, subsequent to our inquiry about FDA’s handling of the case. 
This review selected a total of 28 sets of images from five units operated by the facility for three different time frames over a 1-year period. In early September 1996, ACR completed the review and found most of these clinical images of unacceptable quality. On the basis of these results, FDA obtained the facility’s agreement to discontinue performing mammography until its radiologic technologists and its radiologist obtained additional training approved by FDA and ACR, which they did the following week. In addition, at FDA’s request, ACR is planning to review another sample of clinical images produced by the facility to determine to what extent patients should be notified of past quality problems at the facility. This case clearly demonstrates the need for a procedure to review clinical images when there is sufficient evidence to suggest problems with the quality of a facility’s mammograms. Without the criteria and process in place for determining when and how follow-up review of clinical images should be conducted and patient notification should be carried out, there is no assurance that patients are protected from the risk of receiving poor mammograms. FDA officials agreed that there is a need to incorporate a follow-up clinical image review process. In its proposed final regulation dated April 3, 1996, FDA has included a provision that specifically provides FDA with authority to require clinical image review and patient notification if FDA finds that image quality at a facility has been severely compromised. Although FDA has made progress in bringing facilities into compliance with mammography standards, it lacks procedures to enforce timely correction of all deficiencies found during inspections. One major problem involves the need to develop criteria for defining conditions constituting a serious risk to human health and determining when severe sanctions are warranted. Other problems that also merit attention relate to determining whether a stronger approach is needed to resolve repeated level 3 violations and establishing an effective information system for follow-up on inspection results. FDA is developing such an information system. MQSA provides FDA a broad range of sanctions to impose against noncomplying facilities, but it emphasizes bringing facilities into compliance through those sanctions that are less severe, such as imposing a Directed Plan of Correction. FDA has the authority to impose stronger sanctions, such as an immediate suspension of a facility’s FDA certificate, if it determines that the facility’s operation presents a serious risk to human health. Since the implementation of MQSA, FDA has never done so. We found evidence that FDA needs to define those circumstances in which such actions are warranted. In dealing with the continuing problems at the mobile facility discussed earlier, there was considerable internal debate at FDA about the level of action that should be taken. Inspections of the facility beginning in June 1995 had disclosed serious violations. (See app. III for a chronology of key events surrounding the resolution of quality assurance problems at the facility.) Several state and FDA field personnel involved in the case told us they thought the severity of violations warranted an immediate suspension of the facility’s certificate and had made such a recommendation. FDA officials decided against suspending the facility’s certificate because they thought the evidence of health risk was not clear and compelling enough to do so. 
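Before turning to enforcement, a brief aside on the phantom image test described earlier: the scoring judgment reduces to comparing the number of visible test objects in each category against minimum counts. The Python sketch below illustrates that logic only; the category names (fibers, speck groups, and masses) reflect a common grouping of the 16 phantom objects, and the minimum counts are assumptions chosen for illustration, not FDA's or ACR's published pass/fail criteria.

```python
# Illustrative sketch of phantom image scoring logic. The category
# names and minimum counts are assumptions for illustration only, not
# FDA's or ACR's official pass/fail criteria.

ASSUMED_MINIMUMS = {"fibers": 4, "speck_groups": 3, "masses": 3}

def score_phantom(visible_counts, minimums=ASSUMED_MINIMUMS):
    """Return (passed, failures): failures maps each category in which
    too few test objects were visible to a (seen, required) pair."""
    failures = {}
    for category, required in minimums.items():
        seen = visible_counts.get(category, 0)
        if seen < required:
            failures[category] = (seen, required)
    return (not failures), failures

# An image on which an inspector sees 4 fibers, 2 speck groups, and
# 3 masses fails on the speck-group count:
passed, failures = score_phantom({"fibers": 4, "speck_groups": 2, "masses": 3})
print(passed)    # False
print(failures)  # {'speck_groups': (2, 3)}
```

Framed this way, the variability discussed above amounts to different inspectors, under different viewing conditions, arriving at different visible counts for the same image, which is why standardized viewing conditions and multiple readers push scores toward consistency.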
In September 1996, when ACR’s review of clinical images eventually confirmed that the quality of the mammograms was unacceptable, FDA obtained the facility’s agreement to discontinue performing mammography until facility personnel received more training. Because of the agreement, FDA did not have to go through the process of imposing an immediate suspension of the facility’s certificate. Nevertheless, this incident points to the need for having criteria in place to impose such a sanction to protect patients, if necessary, from continuing to receive poor mammograms. We believe—and FDA officials agreed—that timely imposition of an appropriate sanction is in part dependent on (1) criteria for when conditions constitute a serious risk to human health, justifying immediate suspension of operations, and (2) a process for discontinuing mammography services until the problems are corrected. Another matter that also merits attention from FDA is whether more serious follow-up is needed for facilities with multiple or repeated level 3 violations. Current policy for facilities whose most serious violations are in the level 3 category requires no reporting on the facility’s part and no follow-up on the part of FDA until the next year’s inspection. However, of the facilities that had gone through two inspections, 18 percent of those whose most serious violation was in the level 3 category during the first year had five or more such violations, and 12 percent repeated one or more of the same violations in the next year. Several state inspectors we interviewed expressed concern that current procedures do not call for stronger action against such facilities. Inspectors from one state told us that their state regulations allow them to impose more serious penalties for recurring level 3 violations. Some inspectors also told us that even though level 3 violations were generally considered less serious, some level 3 violations—such as a facility’s failure to take corrective action when called for in the medical physicist’s survey report—are serious enough that they should be corrected as soon as possible to maintain quality assurance. We did not evaluate the appropriateness of FDA’s classification of the various levels of violations. Because of the concerns expressed by the inspectors and the extent of multiple and repeat violations noted above, however, we believe that FDA should evaluate its classification of level 3 violations and the enforcement actions taken on them. If FDA believes these violations are important and need to be corrected, it could raise the violation level for facilities with multiple or repeated violations, which would ensure formal follow-up. However, if FDA views some of these violations as insignificant or having little effect on mammography, it may choose not to classify them as violations. FDA generally delegates inspection responsibility through contracts with states but remains responsible for follow-up and enforcement when violations are reported. For level 1 violations, FDA’s field offices are responsible for validating inspection results and issuing a warning letter that requires the facility to respond within 15 work days. For level 2 violations, no warning letters are sent, but facilities are required to respond in writing within 30 work days of the receipt of an inspection report. 
Since June 1995, FDA has been working with contractors to develop an automated compliance system that would supply its field offices with computer-based information to manage this compliance effort. Development problems have delayed the system, which is now projected to be operational early in 1997. In the meantime, FDA has been relying on field offices to maintain their own tracking systems. Our reviews at three of FDA’s field offices showed that these interim systems were inadequate. Staff responsible for compliance follow-up had no direct access to inspection databases and were relying either on the state inspectors or on FDA headquarters to send them copies of inspection reports showing level 1 and level 2 violations that needed to be tracked. Staff said that sometimes they did not receive reports from headquarters until 2 to 3 months after the inspections and that state inspectors did not always send reports on level 2 cases. As a result, field office staff often received facility responses on corrective actions taken for level 2 violations before they even knew that violations had been cited. None of the three offices maintained case logs or prepared any status reports on their tracking efforts or the timeliness of facility responses. Problems in these makeshift systems have stymied our attempts to determine how quickly and completely violations were being corrected. To determine whether field offices were sending out warning letters in a timely manner and whether facilities were correcting their deficiencies within required time frames, at our request, FDA headquarters in April 1996 sent all of its field offices a list of all level 1 and level 2 violations cited in their jurisdictions and asked them to compile data on facility response times for corrective actions. Field offices had difficulty responding with complete information. FDA headquarters had initially told us that these data would be available in early June, but at the time that we completed our field work, discrepancies still remained unresolved. We conducted an on-site file review at one FDA field office in August and September 1996 and found that the office had incomplete documentation for 13 of the 40 cases with level 2 violations cited between July 1, 1995, and June 20, 1996. In one case, documentation was absent altogether. We also found problems with the timeliness of follow-up on level 1 violations. For example, while FDA guidelines require a field office to issue a warning letter for a level 1 violation within 15 to 30 business days after the inspection, the office we reviewed took up to 132 business days. Also, although FDA procedures require a facility to respond within 15 business days of receiving the warning letter, in two of the eight level 1 cases that we reviewed the facilities did not respond within the required time frame, and one case file contained no record of a facility response. These findings highlight the importance of completing and implementing the automated compliance system as soon as possible. Until field offices have ready access to up-to-date information, it will be difficult for them to conduct effective follow-up and enforcement for facilities that violate the standards. The results of the current inspection program of mammography facilities appear to be generally positive. 
Establishing this comprehensive inspection program has been a substantial effort on FDA’s part and, as mammography facilities move into their second year of inspections, violations of mammography standards are declining. Despite these encouraging results, at the time of our review, we found indications that certain aspects of the inspection program needed attention. First, to ensure an accurate picture of how many problems were found and how well the inspection program was working, violations would need to be more consistently recorded. In addition, even though serious violations do not occur often, when they do, they have the potential for posing a serious health risk to those women affected. To ensure high quality mammography, FDA must be vigilant in its efforts to confirm that facilities promptly and adequately correct violations. As a result, FDA would need to provide an expeditious means to follow up, including notifying patients, when serious problems affecting image quality were indicated. Finally, improvements would be needed in systems and procedures for monitoring facilities with violations and for ensuring that they corrected deficiencies. We recommend that FDA take action in the following areas:
- Strengthening the inspection reporting process. To better reflect the extent to which inspections detect compliance problems, FDA needs to monitor its inspection results more closely to ensure that its procedures for resolving open items and documenting on-the-spot corrections are consistently followed.
- Strengthening procedures for assessing image quality and protecting patients. To minimize the variability in how phantom images are scored, additional training and guidance should be provided, including guidance for evaluating phantom images using the large image receptor. Also, to minimize patients’ risk of poor quality mammograms, the final implementing regulations should include the criteria and process for requiring follow-up clinical image reviews and, when necessary, patient notification when inspections detect violations, such as serious phantom image failures, that could severely compromise image quality.
- Ensuring that violations are corrected in a timely manner. Several steps are needed here. First, to help ensure that appropriate action is taken when serious problems are discovered, procedures need to be developed for (1) determining when the health risk is serious enough to justify immediate suspension of certification and (2) implementing the suspension. Second, to help ensure better performance from facilities that exhibit lingering, though less serious, deficiencies, the classification and enforcement policy on level 3 violations needs reevaluation to determine if additional follow-up is needed on facilities with multiple and repeated level 3 violations. Third, so that compliance personnel can have access to complete, up-to-date information on violations reported, all necessary steps need to be taken to ensure that the compliance tracking system currently under development is completed as soon as possible.
In commenting on a draft of this report, FDA generally agreed with our recommendations and cited specific program enhancements and corrective actions it had recently undertaken. FDA was, however, critical of our draft on several accounts. FDA said that the scope of our work did not address some aspects of MQSA requirements and that the draft did not adequately reflect many of FDA’s accomplishments in implementing MQSA. 
Moreover, FDA believed the report did not recognize changes FDA had made to improve those aspects of the inspection program that we had found in need of attention. FDA cited recent actions it had taken, including (1) establishing procedures and guidance for clinical image reviews, sanctions for failure to comply with standards, and procedures for follow-up on repeated level 3 violations; (2) implementing an inspector audit program that had evaluated 65 percent of inspectors as of October 31, 1996; and (3) making a commitment to fully implement its automated compliance tracking system in January 1997. FDA expressed concern that not acknowledging these actions would create an inaccurate impression that the program was fraught with problems, which could undermine the public confidence in mammography. Concerning the scope of our work, this report is not intended as a vehicle for commenting on implementation of MQSA as a whole; it deals only with FDA’s inspection program. However, we think that the report speaks both to FDA’s accomplishments related to the inspection program and to those problems that we found—and that FDA has now moved to correct. The main reason that FDA’s recent actions were not reflected in the original draft was that they occurred about the same time or, in most cases, after we had provided FDA the draft for comment. We generally consider FDA’s subsequent actions and approaches to our recommendations to be responsive and believe that, if properly implemented, they should strengthen the inspection program. We recognize FDA’s concern about the importance of promoting public confidence in mammography, and, in fact, our recommendations to promote timely compliance with MQSA were made with that objective in mind. While we generally concur with FDA’s approaches for addressing our recommendations, we continue to believe that opportunity exists for FDA to improve its reporting process. We recognize that FDA has acted to implement the inspector audit program, but we believe that FDA still needs to monitor its inspection results to ensure timely follow-up on “open items” and accurate reporting of on-the-spot corrections. As a result, we have clarified our recommendation on strengthening the inspection reporting process accordingly. FDA also provided technical comments, which we considered and incorporated where appropriate, and cited several other areas of the report that it thought needed clarification. The full text of FDA’s comments, accompanied by our response, is contained in appendix IV. We also received comments from the North Carolina facility that we cited in the report. The facility stated that our report addressed many of its concerns with the MQSA program. It also commented that its case demonstrates the need for an organized approach to evaluation and for all involved agencies to agree upon an appropriate standard for clinical image evaluation. The facility asserted that FDA’s process lacks these critical elements and that the facility was being held to unreasonable standards. As a result, in October 1996, the facility appealed its Directed Plan of Correction to FDA. We have updated the chronology of FDA’s enforcement actions regarding the facility to reflect the facility’s appeal and the subsequent denial of the appeal by FDA (see app. III). We are sending copies of this report to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, the Director of the Office of Management and Budget, and other interested parties. 
We will also make copies available to others upon request. Please contact me at (202) 512-7119 if you or your staff have any questions. Major contributors to this report are listed in appendix V.
[Appendix I table: types of level 1 violations, including personnel who do not meet FDA’s qualification standards, processor quality control charts not available, no survey conducted by a medical physicist, mammography records improperly maintained or recorded, and a self-referral system that is inadequate or not in place.]
[Appendix II table: state-by-state percentage of facilities with no violations.]
[Appendix III chronology of key events at the North Carolina facility:]
- Initial inspection revealed 1 level 1 violation for phantom image failure, 2 level 2 violations for phantom image failure, 1 level 2 violation for processor quality control problems, and 31 level 3 violations for various other problems.
- ACR’s clinical image review for one unit found mammograms acceptable and resulted in ACR accreditation for that unit.
- FDA issued its warning letter to the facility for the violations found in June 1995.
- The facility responded by submitting new phantom images and a processor quality control chart for review.
- FDA notified the facility that its response was inadequate because it did not identify the machine on which the phantom images were taken and it did not include proper paperwork for the processor.
- ACR clinical image review for another unit found mammograms acceptable, and ACR accreditation was granted for that unit.
- ACR, at FDA’s request, performed a clinical image review of one set of mammograms for each of two units.
- The facility responded to FDA’s 11/13/95 letter by sending new phantom images and processor quality control charts.
- FDA notified the facility that the 11/30/95 response was adequate.
- FDA did a follow-up reinspection and found 5 level 1 and 7 level 2 phantom image failures using the large image receptor, 11 level 2 phantom image failures using the small image receptor, and numerous other level 2 and level 3 violations.
- ACR notified the facility and FDA that the clinical images reviewed on 11/20/95 were acceptable.
- FDA imposed a Directed Plan of Correction requiring the facility to (1) have a medical physicist complete a survey of all units within 30 days, (2) correct problems identified in the survey within 15 business days, (3) perform phantom image evaluation weekly and submit results to FDA monthly, and (4) perform other quality control tests.
- FDA and state officials met with the facility’s management to discuss the Directed Plan of Correction and to review progress.
- FDA reinspected the facility and found one level 2 violation involving darkroom fog and two level 3 violations in other areas, but no phantom image failures for either large or small image receptors.
- FDA directed the facility to select a total of 28 sets of clinical images from three time periods between July 1995 and June 1996 for ACR review.
- ACR review found most of the clinical images were unacceptable.
- FDA imposed an amended Directed Plan of Correction and obtained agreement from the facility to discontinue performing mammography with the resident radiologic technologists and interpreting physician until they were retrained.
- All but one of the facility’s radiologic technologists and the interpreting physician completed training, and a new FDA- and ACR-approved technologist was added to the facility’s staff.
- The facility reopened and reestablished mammography services.
- FDA notified the facility that ACR would conduct additional clinical image reviews of (1) a sample of clinical images after the personnel had resumed performing mammography for about 1 month and (2) all mammograms taken between June 6, 1996, and September 9, 1996.
- The facility appealed FDA’s amended Directed Plan of Correction.
- FDA denied the facility’s appeal.
The following are GAO’s additional comments on the letter received from the Food and Drug Administration dated November 18, 1996. FDA commented that the draft report did not discuss the inherent limitations of the phantom image test or the lack of scientific consensus on a test for the large image receptor. While our draft report correctly reflected FDA’s view that the phantom image test is only an indicator of image problems, we agreed to add clarifying information to recognize limitations suggested by FDA. Similarly, we have added clarification to recognize that, according to FDA, developing a standard for the large image receptor would require additional scientific testing. While we recognize that developing guidance for the large image receptor will take time, FDA is in a position to continue to provide leadership in conducting experiments and in building a scientific consensus on a particular test method. FDA commented that our method of presenting aggregate data on the extent of all violations detected during the first and second years of inspections tended to give too much weight to level 3 violations, which FDA characterized as minor. While our report points out that all level 3 violations are not universally regarded as minor, we agree with FDA that aggregating all levels of violations could potentially be misleading. As a result, we have eliminated the aggregate totals from our final report. While FDA acknowledged that there have been some start-up problems with the timely follow-up of violations, it asserted that it now has all necessary procedures in place to follow up on violations. We believe that the lack of an adequate compliance follow-up system has been an ongoing problem. Our contacts with FDA field offices, one as recent as late September 1996, showed the lack of a systematic approach to follow up on previous inspection violations. We agree with FDA, however, that the establishment of its automated Compliance Tracking System has significant potential to alleviate the problems with follow-up. FDA commented that variation among the states in the number of violations reported would be expected because some states had well-developed mammography programs before MQSA and, as a result, presumably would have had fewer violations than other states. In addition, FDA stated that some states may have imposed stricter standards than those provided by MQSA. We agree that there could be variation in frequency of violations among the states attributable to the states’ pre-MQSA experiences with mammography standards. However, the violation data, in our view, are not reported in a consistent enough fashion to sustain such analysis of variation. Moreover, whether states have higher standards than MQSA should not affect violation data if they are correctly reported by the states. States that establish and enforce higher standards than MQSA should, according to FDA’s own guidance, enforce these standards outside of the MQSA process. FDA also commented that our draft report did not accurately reflect the circumstances surrounding FDA’s enforcement in the case of the North Carolina facility. 
We believe our draft report provided an adequate summary of the key facts in the North Carolina case sufficient to justify our recommendations for additional enforcement procedures, guidance, and training. We note that, after reviewing our draft report, FDA took action to implement our recommendations. However, since FDA believes that additional facts are relevant to the discussion, we have added them to our final report. Specifically, we have (1) added a footnote to the body of the report to explain more fully how FDA reached its conclusion that it would not suspend the facility operation; (2) amended the appendix that contains the chronology of events related to the facility; and (3) as explained above, added information recognizing the limitations of phantom images and clarifying the lack of consensus on available tests for the large image receptor. In addition to those named above, the following individuals made important contributions to this report: Sarah F. Jaggar, Special Advisor for Health Issues; Susan Lawes, Senior Social Science Analyst; Donna Bulvin, Evaluator; Stan Stenersen, Senior Evaluator; Evan Stoll, Computer Specialist; Craig Winslow and Stefanie Weldon, Senior Attorneys; and Clair Hur, Intern.
Pursuant to a legislative requirement, GAO reviewed the Food and Drug Administration's (FDA) program for implementing the requirements of the Mammography Quality Standards Act of 1992, focusing on: (1) the extent to which facilities are complying with the new mammography standards; (2) whether FDA's procedures for evaluating image quality at mammography facilities are adequate; and (3) whether FDA's monitoring and enforcement process ensures timely correction of mammography deficiencies. GAO found that: (1) GAO's work points to growing compliance by facilities with FDA's mammography standards; (2) FDA's first annual inspection began in January 1995, and by mid-1996, over 9,000 facilities had been inspected, and approximately 1,500 of these had undergone two rounds of inspections; (3) the first time these 1,500 facilities were evaluated, 26 percent had significant violations, but the second-year inspection revealed that this figure had dropped to about 10 percent; (4) the percentage of facilities with less significant deviations from quality standards had decreased; (5) while these results are positive, GAO did note some differences in how inspectors are conducting inspections that, left unaddressed, could lead to inconsistent reporting of violations, thereby limiting FDA's ability to determine the full effect of the inspection process and to identify the extent of repeat violations; (6) GAO's review of FDA's actions during the first 18 months of its inspection program showed a need for management attention to two additional aspects of the inspection program; (7) FDA's inspection procedures for an important test of mammography equipment were inadequate; (8) the way this test, called the phantom image test, was conducted was open to variability, which could have resulted in differing assessments of how well the equipment functioned; (9) in those instances in which test results showed serious problems with the phantom image quality, FDA's procedures allowed facilities to continue taking mammograms without follow-up to evaluate whether their quality was actually acceptable; (10) without such follow-up review, women are not fully protected from getting poor mammograms from facilities with potentially severe quality problems; (11) at the time of GAO's review, FDA also lacked procedures to ensure that all violations of standards were both corrected and corrected in a timely manner; (12) FDA's program lacked criteria for defining conditions constituting a serious risk to human health, which could delay enforcement of compliance and notification to women who may have received substandard mammograms; (13) for facilities with less severe but persistent violations, FDA's follow-up efforts could not always ensure corrective action was taken; and (14) delays in completing a management information system have kept FDA's compliance staff from having complete, up-to-date information about the compliance status of all mammography facilities.
The mission of NWS—an agency within the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA)—is to provide weather, water, and climate forecasts and warnings for the United States, its territories, and its adjacent waters and oceans, in order to protect life and property and to enhance the national economy. NWS is the official source of aviation- and marine-related weather forecasts and warnings, as well as warnings about life-threatening weather situations. In the 1980s and 1990s, NWS undertook a nationwide modernization program to develop new systems and technologies and to consolidate its field office structure. The goals of the modernization program were to achieve more uniform weather services across the nation, improve forecasts, provide more reliable detection and prediction of severe weather and flooding, permit more cost-effective operations, and achieve higher productivity. The weather observing systems (including radars, satellites, and ground-based sensors) and data processing systems that currently support NWS operations were developed and deployed under the modernization program. During this period, NWS consolidated over 250 large and small weather service offices into the office structure currently in use. The coordinated activities of weather facilities throughout the United States allow NWS to deliver a broad spectrum of climate, weather, water, and space weather services. These facilities include weather forecast offices, river forecast centers, national centers, and aviation center weather service units. The functions of these facilities are described below. 122 weather forecast offices are responsible for providing a wide variety of weather, water, and climate services for their local county warning areas, including advisories, warnings, and forecasts (see fig. 1 for the current location of weather forecast offices). 13 river forecast centers provide river, stream, and reservoir information to a wide variety of government and commercial users as well as to local weather forecast offices for use in flood forecasts and warnings. 9 national centers constitute the National Centers for Environmental Prediction, which provide nationwide computer model output and manual forecast information to all NWS field offices and to a wide variety of government and commercial users. These centers include the Environmental Modeling Center, Storm Prediction Center, Tropical Prediction Center, Climate Prediction Center, Aviation Weather Center, and Space Environment Center, among others. 21 aviation center weather service units, which are co-located with key Federal Aviation Administration (FAA) air traffic control centers across the nation, provide meteorological support to air traffic controllers. To fulfill its mission, NWS relies on a national infrastructure of systems and technologies to gather and process data from the land, sea, and air. NWS collects data from many sources, including ground-based Automated Surface Observing Systems (ASOS), Next Generation Weather Radars (NEXRAD), and operational environmental satellites. These data are integrated by advanced data processing workstations—called Advanced Weather Interactive Processing Systems (AWIPS)—used by meteorologists to issue local forecasts and warnings. The data are also fed into sophisticated computer models running on high-speed supercomputers, which are then used to help develop forecasts and warnings. 
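To make the data flow just described concrete, the following sketch models, in simplified form, how observations from several sources might be merged into one time-ordered stream for a forecaster's display. It is purely illustrative: the record fields, station identifiers, and merge step are assumptions invented for the example, not NWS's actual data formats or the AWIPS software.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    source: str        # e.g., "ASOS", "NEXRAD", "GOES" (illustrative labels)
    station: str       # observing-site identifier (hypothetical)
    time: datetime
    variable: str      # e.g., "temperature", "reflectivity"
    value: float
    units: str

def merge_for_workstation(feeds):
    """Combine observations from several feeds into one time-ordered
    stream, loosely analogous to what an integrating workstation shows."""
    merged = [obs for feed in feeds for obs in feed]
    merged.sort(key=lambda obs: obs.time)
    return merged

asos_feed = [Observation("ASOS", "KDEN",
                         datetime(2006, 6, 1, 12, 0, tzinfo=timezone.utc),
                         "temperature", 22.5, "degC")]
nexrad_feed = [Observation("NEXRAD", "KFTG",
                           datetime(2006, 6, 1, 12, 5, tzinfo=timezone.utc),
                           "reflectivity", 47.0, "dBZ")]

for obs in merge_for_workstation([asos_feed, nexrad_feed]):
    print(f"{obs.time:%H:%M}Z {obs.source}/{obs.station}: "
          f"{obs.variable} = {obs.value} {obs.units}")
```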
Figure 2 depicts the integration of the various systems and technologies and is followed by a description of each. NEXRAD is a Doppler radar system that detects, tracks, and determines the intensity of storms and other areas of precipitation, determines wind velocities in and around detected storm events, and generates data and imagery to help forecasters distinguish hazards such as severe thunderstorms and tornadoes. It also provides information about heavy precipitation that leads to warnings about flash floods and heavy snow. The NEXRAD network provides data to other government and commercial users and to the general public via the Internet. The NEXRAD network is made up of 158 operational radars and 8 nonoperational radars that are used for training and testing. Of these, NWS operates 120 radars, the Air Force operates 26 radars, and the FAA operates 12 radars. These radars are located throughout the continental United States and in 17 locations outside the continental United States. Figure 3 shows a NEXRAD radar tower. ASOS is a system of sensors, computers, display units, and communications equipment that automates the ground-based observation and dissemination of weather information nationwide. This system collects data on temperature and dew point, visibility, wind direction and speed, pressure, cloud height and amount, and types and amounts of precipitation. ASOS supports weather forecast activities and aviation operations, as well as the needs of research communities that study weather, water, and climate. Figure 4 is a picture of the system, while figure 5 depicts a configuration of ASOS sensors and describes their functions. There are currently 1,002 ASOS units deployed across the United States, with NWS, FAA, and the Department of Defense (DOD) operating 313, 571, and 118 units, respectively. Although NWS does not own or operate satellites, geostationary and polar-orbiting environmental satellite programs are key sources of data for its operations. NOAA manages the Geostationary Operational Environmental Satellite (GOES) system and the Polar-orbiting Operational Environmental Satellite (POES) system. In addition, DOD operates a different polar satellite program called the Defense Meteorological Satellite Program (DMSP). These satellite systems continuously collect environmental data about the Earth's atmosphere, surface, cloud cover, and electromagnetic environment. These data are used by meteorologists to develop weather forecasts and other services, and are critical to the early and reliable prediction of severe storms, such as tornadoes and hurricanes. Geostationary satellites orbit above the Earth's surface at the same speed as the Earth rotates, so that each satellite remains over the same location on Earth. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 6). To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years. Three satellites, GOES-10, GOES-11, and GOES-12, are currently in orbit. Both GOES-10 and GOES-12 are operational satellites, while GOES-11 is in an on-orbit storage mode. It is a backup for the other two satellites should they experience any degradation in service. The first in the next series of satellites, GOES-13, was launched in May 2006, and the others in the series, GOES-O and GOES-P, are planned for launch over the next few years.
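The geostationary behavior described above follows directly from orbital mechanics: a satellite whose orbital period matches Earth's rotation remains over a fixed point on the equator. The short derivation below is standard physics added for context, not material from the report; the constants are approximate.

```latex
% Kepler's third law, T = 2*pi*sqrt(a^3/mu), solved for the semimajor axis a.
% mu = GM_Earth ~ 3.986e14 m^3/s^2; T = one sidereal day ~ 86,164 s.
\[
a = \left( \frac{\mu T^{2}}{4\pi^{2}} \right)^{1/3}
  = \left( \frac{(3.986 \times 10^{14})(86{,}164)^{2}}{4\pi^{2}} \right)^{1/3}
  \approx 4.216 \times 10^{7}\,\text{m},
\]
\[
h = a - R_{\oplus} \approx 42{,}164\,\text{km} - 6{,}378\,\text{km}
  \approx 35{,}786\,\text{km}.
\]
```

At roughly 35,800 km above the equator, the satellite's period equals one sidereal day, which is why each GOES spacecraft can stare continuously at the same portion of the United States.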
In addition, NOAA is planning a future generation of satellites, known as the GOES-R series, which are planned for launch beginning in 2014. Unlike the GOES satellites, which maintain a fixed position above the earth, polar satellites constantly circle the Earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the Earth rotates beneath it, each satellite views the entire Earth’s surface twice a day. Currently, there are four operational polar-orbiting satellites—two are POES satellites and two are DMSP satellites. These satellites are positioned so that they can observe the Earth in early morning, morning, and afternoon polar orbits. Together, they ensure that for any region of the Earth, the data are generally no more than 6 hours old. Figure 7 illustrates the current configuration of operational polar satellites. NOAA and DOD plan to continue to launch remaining satellites in the POES and DMSP programs, with final launches scheduled for 2007 and 2011, respectively. In addition, NOAA, DOD, and the National Aeronautics and Space Administration are planning to replace the POES and DMSP systems with a state-of-the-art environment monitoring satellite system called the National Polar-orbiting Operational Environmental Satellite System (NPOESS). In recent years, we reported on a variety of issues affecting this major system acquisition. AWIPS is a computer system that integrates and displays all hydrometeorological data at NWS field offices. This system integrates data from NEXRAD, ASOS, GOES, and other sources to produce rich graphical displays to aid forecaster analysis and decision making. AWIPS is used to disseminate weather information to the national centers, weather offices, the media, and other federal, state, and local government agencies. NWS deployed hardware and software for this system to weather forecast offices, river forecast centers, and national centers throughout the United States between 1996 and 1999. As a software-intensive system, AWIPS regularly receives software upgrades called “builds.” The most recent build, called Operational Build 6, is currently being deployed. NWS officials estimated that the nationwide deployment of this build should be completed by July 2006. Figure 8 shows a standard AWIPS workstation. Numerical models are advanced software programs that assimilate data from satellites and ground-based observing systems and provide short- and long-term weather pattern predictions. Meteorologists typically use a combination of models and their own experience to develop local forecasts and warnings. Numerical weather models are also a critical source for forecasting weather up to 7 days in advance and forecasting long-term climate changes. One of NWS’s National Centers for Environmental Prediction, the Environmental Modeling Center, is the primary developer of these models within NWS and is responsible for making new and improved models available to regional forecasters via the AWIPS system. Figure 9 depicts model output as shown on an AWIPS workstation. NWS leases high-performance supercomputers to execute numerical calculations supporting weather prediction and climate modeling. In 2002, NWS awarded a $227 million contract to lease high-performance supercomputers to run its environmental models from 2002 through September 2011. 
Included in this contract are an operational supercomputer used to run numerical weather models, an identical backup supercomputer located at a different site, and a research and development supercomputer on which researchers can test out new analyses and models. The supercomputer lease contract allows NWS to exercise options to upgrade the processing capabilities of the operational supercomputer. During the 1990s, we issued a series of reports on NWS modernization systems and made recommendations to improve them. For example, early in the AWIPS acquisition, we reported that the respective roles and responsibilities of the contractor and government were not clear and that a structured system development environment had not been established. We made recommendations to correct these shortfalls before the system design was approved. We also reported that the ASOS system was not meeting specifications or user needs, and recommended that NWS define and prioritize system corrections and enhancements. On NEXRAD, we reported that selected units were falling short of availability requirements and recommended that NWS analyze and monitor system availability on a site-specific basis and correct any shortfalls. Because of such concerns, we identified NWS modernization as a high-risk information technology initiative in 1995, 1997, and 1999. NWS took a number of actions to address our recommendations and to resolve system risks. For example, NWS enhanced its AWIPS system development processes, prioritized its ASOS enhancements, and improved the availability of its NEXRAD systems. In 2001, because of NWS's progress in addressing key concerns and in deploying and using the AWIPS system—the final component of its modernization program—we removed the modernization from our high-risk list. In accordance with federal legislation requiring federal managers to focus more directly on program results, NWS established short- and long-term performance goals and regularly tracks its actual performance in meeting these goals. Specifically, NWS established 14 different performance measures—such as lead time for flash floods and false-alarm rates for tornado warnings. It also established 5-year goals for improving its performance in each of the 14 performance measures through 2011. For example, the agency plans to increase its lead time on tornado warnings from 13 minutes in 2005 to 15 minutes in 2011. Table 1 identifies NWS's 14 performance measures, selected goals, and performance against those goals, when available. Appendix II provides additional information on NWS's performance goals. NWS periodically adjusts its performance goals as its assumptions change. After reviewing actual results from previous fiscal years and its assumptions about the future, in January 2006, NWS adjusted eight of its 5-year performance goals to make more realistic predictions for performance for the next several years. Specifically, NWS made six performance goals less stringent and two goals more stringent. The six goals that were made less stringent—and the reasons for the changes—are the following: Tornado warning lead time: NWS changed its 2011 goal from 17 to 15 minutes of warning because of delays in deploying new technologies on NEXRAD radars and a lack of access to FAA radar data. Tornado warning false-alarm rate: NWS changed its 2011 goal from a 70 to 74 percent false-alarm rate for the same reasons listed above.
Flash flood warning accuracy: NWS changed its 2011 goal from 91 to 90 percent accuracy after delays on two different systems in 2004, 2005, and 2006. Marine wind speed accuracy: NWS changed its 2011 goal from 67 to 59 percent accuracy after experiencing the delay of marine models and datasets, a deficiency of shallow water wave guidance, and a reduction in funds for training. Marine wave height accuracy: NWS changed its 2011 goal from 77 to 69 percent accuracy for the same reasons identified above for marine wind speed accuracy. Aviation instrument flight rule ceiling/visibility: NWS changed its goal from 48 to 47 percent accuracy in 2006 because of a system delay and a reduction in funds for training. Goals for 2007 through 2011 remained the same. Additionally, the following two goals were made more stringent: Aviation instrument flight rule ceiling/visibility false-alarm rate: NWS reduced its expected false-alarm rate from 68 percent to 65 percent for 2006 because of better than anticipated results from the AWIPS aviation forecast preparation system and an aviation learning training course. Goals for the remaining years in the 5-year plan, 2007 to 2011, remained the same. Hurricane track forecasts: NWS changed its 2011 hurricane track forecast goal from 123 to 106 nautical miles after trends in observed data from 1987 to 2004 showed that this measure was improving more quickly than expected. NWS is positioning itself to provide better service through system and technology upgrades. Over the next few years, the agency plans to upgrade and improve its systems, predictive weather models, and computational abilities, and it appropriately links these upgrades to its performance goals. For example, planned improvements in NEXRAD technology are expected to help improve the lead times for tornado warnings, while AWIPS software enhancements are expected to help improve the accuracy of marine weather forecasts. The agency anticipates continued steady improvement in its forecast accuracy as it obtains better observation data, as computational resources are increased, and as scientists are better able to implement advanced modeling and data assimilation techniques. Over the next few years, NWS has plans to spend over $315 million to upgrade its systems, models, and computational abilities. Some planned upgrades are to maintain the weather system infrastructure (either to replace obsolete and difficult-to-maintain parts or to refresh aging hardware and workstations), while others are to take advantage of new technologies. Often, the infrastructure upgrades allow NWS to take advantage of newer technologies. For example, the replacement of an aging and proprietary NEXRAD subsystem is expected to allow the agency to implement enhancements in image resolution. Key planned upgrades for each of NWS’s major systems and technologies are listed below. NWS has initiated two major NEXRAD improvements. It is currently replacing an outdated subsystem—the radar data acquisition subsystem— with current hardware that is compliant with open system standards. This new hardware is expected to enable important software upgrades. In addition, NWS plans to add a new technology called dual polarization to this subsystem, which will provide more accurate rainfall estimates and differentiate various forms of precipitation. Table 2 shows the details of these two projects. NWS has seven ongoing and planned improvements for its ASOS system (see table 3). 
Many of these improvements are to replace aging parts and are expected to make the system more reliable and maintainable. Key subsystem replacements—including the all-weather precipitation accumulation gauge—are also expected to result in more accurate measurements. Selected AWIPS system components have become obsolete, and NWS is replacing these components. In 2001, NWS began to migrate the existing Unix-based systems to a Linux system to reduce its dependence on any particular hardware platform. NWS expects this project, combined with upgraded information technology, to delay the need for a major information technology replacement. Table 4 shows planned improvements for the AWIPS system. NWS plans to continue to improve its modeling capabilities by (1) better assimilating data from improved observation systems such as ASOS, NEXRAD, and environmental satellites; (2) developing and implementing an advanced global forecasting model (called the Weather Research and Forecast model) to allow forecasters to look at a larger domain area; (3) implementing a hurricane weather research forecast model; and (4) improving ensemble modeling, which involves running a single model multiple times with slight variations on a variable to get a probability that a given forecast is likely to occur. NWS expects to spend approximately $12.7 million in fiscal year 2006 to improve its weather and real-time ocean models. NWS is planning to exercise an option within its existing supercomputer lease to upgrade its computing capabilities to allow more advanced numerical weather and climate prediction modeling. In accordance with federal legislation and policy, NWS’s planned upgrades to its systems and technologies are expected to result in improved service. The Government Performance and Results Act calls for federal managers to develop strategic performance goals and to focus program activities on obtaining results. Also, the Office of Management and Budget (OMB) requires agencies to justify major investments by showing how they support performance goals. NOAA and NWS implement the act and OMB guidance by requiring project officials to describe how planned system and technology upgrades are linked to the agency’s programmatic priorities and performance measures. Further, in its annual performance plans, NOAA reports on expected NWS service improvements and identifies the technologies and systems that are expected to help improve them. NWS service improvements are often expected through a combination of system and technology improvements. For example, NWS expects to reduce its average error in forecasting a hurricane’s path by approximately 20 nautical miles between 2005 and 2011 through a combination of upgrades to observation systems, better hurricane forecast models, enhancements to the computer infrastructure, and research that will be transferred to NWS forecast operations. Also, NWS expects tornado warning lead times to increase from 13 to 15 minutes by the end of fiscal year 2008 after NWS completes retrofits to the NEXRAD systems, realizes the benefits of AWIPS software enhancements, and implements new training techniques. Table 5 provides a summary of how system upgrades are expected to result in service improvements. NWS provides employee training courses that are expected to help improve forecast service performance, but the agency’s process for selecting this training lacks sufficient oversight. Each year, NWS identifies its training needs and develops this training in order to enhance its services. 
NWS develops an annual training and education plan identifying planned training, how this training supports key criteria, and associated costs for the upcoming year. To develop the annual plan, program area teams, with representatives from NWS headquarters and field offices, prioritize and submit training recommendations. Each submission identifies how the training will support up to eight different criteria—including the course's effect on NWS forecasting performance measures, NOAA strategic goals, ensuring operational continuity, and providing customer outreach. These submissions are screened by a training and education team and, depending on available resources, selected for development (if not preexisting) and implementation. The planned training courses are then delivered through a variety of means, including courses at the NWS training center, online training, and training at local forecast offices. In its 2006 training process, 25 program area teams identified 134 training needs, such as training on how to more effectively use AWIPS, training on an advanced weather simulator, and training on maintaining ASOS systems. Given an expected funding level of $6.1 million, the training and education team then selected 68 of these training needs for implementation. NWS later identified another 5 training needs and allocated an additional $1.25 million to its training budget. In total, NWS funded 73 of 139 training courses. The majority of planned training courses demonstrate a clear link to expected forecasting service improvements. For example, NWS developed a weather event simulator to help forecasters improve their tornado warning lead times. In addition, AWIPS-related training courses are expected to help improve each of the agency's 14 forecasting performance measures by teaching forecasters advanced techniques in using the integrated data processing workstations. However, NWS's process for selecting which training courses to implement lacks sufficient oversight. In justifying training courses, program officials routinely link proposed courses to NWS forecast performance measures. Specifically, in 2006, 131 of the 134 original training needs were linked to expectations for improved forecasting performance—including training on cardiopulmonary resuscitation, spill prevention, leadership, systems security, and equal employment opportunity/diversity. The training selection process did not validate or question whether these courses would improve tornado warning lead times or hurricane warning accuracy. Although these courses are important and likely justifiable on other bases, the overuse of this justification undermines the distinctions among training courses and the credibility of the course selection process. Additionally, because the training selection process does not clearly distinguish among courses, it is difficult to determine whether sufficient funds are dedicated to the courses that are expected to improve performance. NWS training officials acknowledged that some of the course justifications seem questionable and that more needs to be done to strengthen the training selection process to ensure oversight of the justification and prioritization process. They noted that the training division plans to improve the training selection process over the next few years by adding a more systematic worker-focused assessment of training needs, better prioritizing strategic and organizational needs, and initiating post-implementation reviews.
However, until NWS establishes a training selection process that uses reliable justification and results in understandable decisions, NWS risks selecting courses that do not most effectively support its training goals. NWS plans to develop a prototype of a new concept of operations—an effort that could affect its national office configuration, including the location and functions of its offices nationwide. However, NWS has yet to determine many details about the impact of any proposed changes on NWS forecast services, staffing, and budget. Further, NWS has not yet identified key activities, timelines, or measures for evaluating the concept of operations prototype. As a result, it is not evident that NWS will collect the information it needs on the impact and benefits of any office restructuring in order to make sound and cost-effective decisions. According to agency officials, over the last several years, NWS’s corporate board noted that the constrained budget, high labor costs, difficulty in training and developing its employees, and a lack of flexibility in how the agency was operating were making it more difficult for the agency to continue to perform its mission. In August 2005, the board chartered a working group to evaluate the roles, responsibilities, and functions of weather offices nationwide and to make a proposal for a new concept of operations. The group was given a set of guiding principles, including that the proposed concept should (1) be cost effective, (2) ensure that there would be no degradation of service, (3) ensure that weather services nationwide were equitable, and (4) not reduce the number of forecast offices nationwide. In addition, the working group was instructed not to address grade structure, staffing levels, office sizes, or overall organizational chart structure. The group gathered input from various agency stakeholders and other partners within NOAA and considered multiple alternatives. They dismissed all but one of the alternative concepts because they were not consistent with the guiding principles. In its December 2005 proposal, the working group proposed a “clustered peer” office plan designed to redistribute some functions among various offices, particularly when there is a high-intensity weather event. An agency official explained that each weather forecast office currently has a fixed geographic area for which it provides forecasts. If a severe weather event occurs, forecast offices ask their staff to work overtime so that there are enough personnel available to do both the normal forecasting work and the watches and warnings required by the severe event. If a local office becomes unable to provide forecast and warning functions, an adjacent office will temporarily assume those duties by calling in extra personnel to handle the workload of both offices. Alternatively, under a clustered peer office structure, several offices with the same type of weather and warning responsibilities, climate, and customers would be grouped in a cluster. Offices within a cluster would share the workload associated with routine services, such as 7-day forecasts. During a high-impact weather event—such as a severe storm, flood, or wildfire—the offices would redistribute the workload to allow the impacted office to focus solely on the event, while the other offices in the cluster would pick up the impacted office’s routine services. In this way, peer offices could help supplement staffing needs and the workload across multiple offices could be more efficiently balanced. 
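The clustered peer concept lends itself to a simple illustration. The sketch below models, using invented office names and task lists, how a cluster might redistribute an impacted office's routine workload during a high-impact event. It is a toy model of the idea as described above, not NWS software or an actual staffing algorithm; the round-robin assignment is an assumption made for the example.

```python
def redistribute_routine_work(cluster, impacted_office):
    """Toy model of the 'clustered peer' concept: peer offices absorb the
    impacted office's routine tasks so it can focus solely on the event."""
    routine = cluster.pop(impacted_office)   # tasks the impacted office gives up
    peers = sorted(cluster)                  # remaining offices in the cluster
    for i, task in enumerate(routine):
        peer = peers[i % len(peers)]         # round-robin assignment (an assumption)
        cluster[peer].append(task)
    cluster[impacted_office] = []            # impacted office handles only the event
    return cluster

# Hypothetical cluster of three offices with similar weather responsibilities.
cluster = {
    "Office A": ["7-day forecast", "marine forecast"],
    "Office B": ["7-day forecast", "aviation forecast"],
    "Office C": ["7-day forecast", "fire-weather outlook"],
}
# Office B is hit by a severe storm; its routine services shift to A and C.
print(redistribute_routine_work(cluster, "Office B"))
```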
After receiving this proposal, the NWS corporate board chartered another team to develop a prototype of the clustered peer idea to evaluate the benefits of this approach. The team plans to recommend the scope of the prototype and select several weather offices for the prototype demonstration by the end of September 2006. It also plans to conduct the prototype demonstration in fiscal years 2007 and 2008. Initial prototype results are due in fiscal year 2009. Many details about the impact of the changes on NWS forecast services, staffing, and budget have yet to be determined. Sound decision making on moving forward with a new concept of operations will require data on the relative costs, benefits, and impacts of such a change, but at this time the implications of NWS’s revised concept of operations on staffing, budget, and forecasting services are unknown. The charter for the team developing the prototype for the new concept of operations calls for it to identify metrics for evaluating the prototype and to define mechanisms for obtaining customer feedback. However, the team has not yet established a plan or timeline for developing these metrics or mechanisms. Further, it is not yet evident that these metrics will include the relative costs, benefits, or impacts of this change or which customers will be offered the opportunity to provide feedback. This is not consistent with the last time NWS undertook a major change to its concept of operations—during its modernization in the mid-1990s. During that effort, the agency developed a detailed process for identifying impacts and ensuring that there would be no degradation of service (see app. III for a summary of this prior process). Until it establishes plans, timelines, and metrics for evaluating its prototype of a revised concept of operations, NWS is not able to ensure that it is on track to gather the information it needs to fully evaluate the merits of the revised concept of operations and to make sound and informed decisions on a new office configuration. NWS is appropriately positioning itself to improve its forecasting services by upgrading its systems and technologies and by developing training to enhance the performance of its professional staff. Over the next few years, NWS expects to improve all of its 14 performance measures—ranging from seasonal temperature forecasts, to severe weather warnings, to specialized aviation and marine weather warnings. However, it is not clear that NWS is consistently choosing the best training courses to improve its performance because the training selection process does not rigorously review the training justifications. Recognizing that high labor costs, difficulty in training and developing its employees, and a constrained budget environment make it difficult to fulfill its mission, NWS is evaluating changes to its office structure and operations in order to achieve greater productivity and efficiency. It plans to develop a prototype of a new concept of operations that entails sharing responsibilities among a cluster of offices. Because it is early in the prototype process, the implications of these plans on staffing, budget, and forecasting services are unknown at this time. However, NWS does not yet have detailed plans, timelines, or measures for assessing the prototype. As a result, NWS risks not gathering the information it needs to make an informed decision in moving forward with a new office operational structure. 
To improve NWS’s ability to achieve planned service improvements, we recommend that the Secretary of Commerce direct the Assistant Administrator for Weather Services to take the following three actions: require training officials to validate the accuracy of training justifications; establish key activities, timelines, and measures for evaluating the “clustered peer” office structure prototype before beginning the prototype; and ensure that plans for evaluating the prototype address the impact of any changes on budget, staffing, and services. We received written comments on a draft of this report from the Department of Commerce (see app. IV). In the department’s response, the Deputy Secretary of Commerce agreed with our recommendations and identified plans for implementing them. Specifically, the department noted that it plans to revise its training process to ensure limited training resources continue to target improvements in NWS performance. The department also noted that the concept of operations working team is developing a plan for the prototype and stated that this plan will include the items we recommended. The department also provided technical corrections, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce, the Director of the Office of Management and Budget, and other interested congressional committees. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have any questions about this report, please contact me at (202) 512- 9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were (1) to evaluate the National Weather Service’s (NWS) efforts to achieve improvements in the delivery of its services through upgrades to its systems, models, and computational abilities; (2) to assess the agency’s plans to achieve improvements in the delivery of its services through the training and professional development of its employees; and (3) to evaluate the agency’s plans for revising its nationwide office configuration and the implications of these plans on local forecasting services, staffing, and budgets. To evaluate NWS’s efforts to achieve service improvements through system and technology upgrades, we reviewed the agency’s system development plans and discussed system-specific plans with NWS program officials. We assessed system-specific documentation justifying system upgrades to evaluate whether these upgrades were linked to anticipated improvements in performance goals. We also evaluated NWS performance goals and identified the extent to which anticipated service improvements were tied to system and technology upgrades. We interviewed National Oceanic and Atmospheric Administration (NOAA) and NWS officials to obtain clarification on agency plans and goals. To assess NWS’s plans for achieving service improvements through the training and professional development of its employees, we reviewed NWS policies and plans for training and professional development. We reviewed the agency’s service performance goals and assessed the link between those goals and planned and expected training and professional development activities. We also interviewed NWS officials responsible for training and professional development activities. 
To evaluate the status and potential impact of any plans to revise the national office configuration, we assessed studies of options for changing the NWS concept of operations. We also reviewed the charter for the prototype and interviewed key NWS officials to determine the possible effect of these plans on local forecasting services, staffing, and budgets and to identify plans for determining the implications of changing to a new concept of operations. We performed our work at NWS headquarters in the Washington, D.C., metropolitan area; at geographically diverse NOAA and NWS weather forecast offices in Denver and in Tampa; and at the NWS National Hurricane Center in Miami. We performed our work from October 2005 to June 2006 in accordance with generally accepted government auditing standards.

[Appendix II table notes (fragmentary): hurricane track forecast error is a measure of the difference, in nautical miles, between the projected and actual locations of the centers of storms for the Atlantic Basin; a related aviation measure applies where ceilings and visibilities are greater than, or equal to, 500 feet and/or 1 mile, respectively.]

In the 1980s, NWS began a nationwide modernization program to upgrade weather observing systems such as satellites and radars, to design and develop advanced computer workstations for forecasters, and to reorganize its field office structure. The goals of the modernization were to achieve more uniform weather services across the nation, improve forecasting, provide more reliable detection and prediction of severe weather and flooding, achieve higher productivity, and permit more cost-effective operations through staff and office reductions. NWS's plans for revising its office structure were governed by the Weather Service Modernization Act, which required that, prior to closing a field office, the Secretary of Commerce certify that there would be no degradation of service. NWS developed a plan for complying with the law. To identify community concerns regarding modernization changes and to study the potential for degradation of service, the Department of Commerce published a notice in the Federal Register requesting comments on service areas where it was believed that services could be degraded by planned modernization changes. The department also contracted for an independent assessment by the National Research Council on whether weather services would be degraded by the proposed changes. As part of this assessment, the contractor developed criteria to identify whether service would be degraded in certain areas of concern. The department then applied these criteria to areas of concern to determine whether services would be degraded or not. Before closing any office, the Secretary of Commerce certified that services would not be degraded. David A. Powner, (202) 512-9286 or pownerd@gao.gov. In addition to the contact named above, William Carrigg, Barbara Collier, Neil Doherty, Kathleen S. Lovett, Colleen Phillips, Karen Talley, and Jessica Waselkow made key contributions to this report.
To provide accurate and timely weather forecasts, the National Weather Service (NWS) uses systems, technologies, and manual processes to collect, process, and disseminate weather data to its nationwide network of field offices and centers. After completing a major modernization program in the 1990s, NWS is seeking to upgrade its systems with the goal of improving its forecasting abilities, and it is considering changing how its nationwide office structure operates in order to enhance efficiency. GAO was asked to (1) evaluate NWS's efforts to achieve improvements in the delivery of its services through system and technology upgrades, (2) assess agency plans to achieve service improvements through training its employees, and (3) evaluate agency plans to revise its nationwide office configuration and the implications of these plans on local forecasting services, staffing, and budgets. NWS is positioning itself to provide better service through over $315 million in planned upgrades to its systems and technologies. In annual plans, the agency links expected improvements in its service performance measures with the technologies and systems expected to improve them. For example, NWS expects to reduce the average error in its forecasts of hurricane paths by approximately 20 nautical miles between 2005 and 2011 through a combination of upgrades to observation systems, better hurricane forecast models, enhancements to the computer infrastructure, and research that will be transferred to forecast operations. Also, NWS expects to increase tornado warning lead times from 13 to 15 minutes by the end of fiscal year 2008 after the agency completes an upgrade to its radar system and realizes benefits from software improvements to its forecaster workstations. NWS also provides training courses for its employees to help improve its forecasting services, but the agency's process for selecting training lacks sufficient oversight. Program officials propose and justify training needs on the basis of up to eight different criteria--including whether a course is expected to improve NWS forecasting performance measures, support customer outreach, or increase scientific awareness. Many of these course justifications appropriately demonstrate support for improved forecasting performance. For example, training on how to more effectively use forecaster workstations is expected to help improve tornado and hurricane warnings. However, in justifying training courses, program officials routinely link courses to NWS forecasting performance measures. For example, in 2006, almost all training needs were linked to expectations for improved performance--including training on cardiopulmonary resuscitation, spill prevention, and systems security. The training selection process did not validate or question how these courses could help improve weather forecasts. Overuse of this justification undermines the distinctions among different training courses and the credibility of the course selection process. Additionally, because the training selection process does not clearly distinguish among courses, it is difficult to determine whether sufficient funds are dedicated to the courses that are expected to improve performance. To improve its efficiency, NWS plans to develop a prototype of a new concept of operations, an effort that could affect its national office configuration, including the location and functions of its offices nationwide. 
However, many details about the impact of any proposed changes on NWS forecast services, staffing, and budget have yet to be determined. Further, the agency has not yet determined key activities, timelines, or measures for evaluating the prototype of the new office operational structure. As a result, it is not evident that NWS will collect the information it needs on the impact and benefits of any office restructuring in order to make sound and cost-effective decisions.
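The tornado warning measures cited in this summary, lead time and false-alarm rate, come from standard forecast-verification bookkeeping: each warning either verifies (a hit) or does not (a false alarm), and each event is either warned in advance or missed. The sketch below computes these measures from hypothetical counts; the tallies are invented for illustration, and the formulas are the conventional contingency-table definitions rather than figures or methods taken from the report.

```python
def verification_stats(hits, misses, false_alarms, lead_times_minutes):
    """Standard warning-verification measures from contingency-table counts."""
    probability_of_detection = hits / (hits + misses)
    false_alarm_rate = false_alarms / (hits + false_alarms)
    average_lead_time = sum(lead_times_minutes) / len(lead_times_minutes)
    return probability_of_detection, false_alarm_rate, average_lead_time

# Hypothetical season: 60 tornadoes warned in advance, 20 missed,
# 180 warnings with no tornado, and per-event lead times in minutes.
pod, far, lead = verification_stats(
    hits=60, misses=20, false_alarms=180,
    lead_times_minutes=[10, 12, 15, 18, 11, 14],
)
print(f"POD = {pod:.0%}, false-alarm rate = {far:.0%}, "
      f"mean lead time = {lead:.1f} min")
```

With these invented counts the false-alarm rate works out to 75 percent, the same order of magnitude as the 70 to 74 percent tornado warning goals discussed in the report, which is why even small goal adjustments in this measure are meaningful.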
The basic process by which all federal agencies typically develop and issue regulations is set forth in the Administrative Procedure Act (APA), and is generally known as the rulemaking process. Rulemaking at most regulatory agencies follows the APA's informal rulemaking process, also known as "notice and comment" rulemaking, which generally requires agencies to publish a notice of proposed rulemaking in the Federal Register, provide interested persons an opportunity to comment on the proposed regulation, and publish the final regulation, among other things. Under the APA, a person adversely affected by an agency's notice and comment rulemaking is generally entitled to judicial review of that new rule, and a court may invalidate the regulation if it finds it to be "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law," sometimes referred to as the arbitrary and capricious test. In addition to the requirements of the APA, federal agencies typically must comply with requirements imposed by certain other statutes and executive orders. In accordance with various presidential executive orders, agencies work closely with staff from the Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs, who review draft regulations and other significant regulatory actions prior to publication. Most of the procedural requirements that affect OSHA standard setting were established in 1980 or later. The process OSHA uses to develop and issue standards is spelled out in the OSH Act. Section 6(b) of the act specifies the procedures OSHA must use to promulgate, modify, or revoke its standards. These procedures include publishing the proposed rule in the Federal Register, providing interested persons an opportunity to comment, and holding a public hearing upon request. Section 6(a) of the act directed the Secretary of Labor (through OSHA) to adopt any national consensus standards or established federal standards as safety and health standards within 2 years of the date the OSH Act went into effect, without following the procedures set forth in section 6(b) or the APA. According to an OSHA publication, the vast majority of these standards have not changed since originally adopted, despite significant advances in technology, equipment, and machinery over the past several decades. In leading the agency's standard-setting process, staff from OSHA's Directorate of Standards and Guidance, in collaboration with staff from other Labor offices, explore the appropriateness and feasibility of developing standards to address workplace hazards that are not covered by existing standards. Once OSHA initiates such an effort, an interdisciplinary team typically composed of at least five staff focuses on that issue. We analyzed the 58 significant health and safety standards OSHA issued between 1981 and 2010 and found that the time frames for developing and issuing them averaged about 93 months (7 years, 9 months), and ranged from 15 months to about 19 years (see table 1). During this period, OSHA staff also worked to develop standards that have not yet been finalized. For example, according to agency officials, OSHA staff have been working on developing a silica standard since 1997, a beryllium standard since 2000, and a standard on walking and working surfaces since 2003. For a depiction of the timelines for safety and health standards issued between 1981 and 2010, see appendix I.
OSHA must also analyze the technological and economic feasibility of its standards and, for certain proposed standards, convene a panel to obtain input from small businesses. (The feasibility analyses are necessary because the Supreme Court has held that the OSH Act requires that standards be both technologically and economically feasible. Am. Textile Mfrs. Inst. v. Donovan, 452 U.S. 490, 513 n.31 (1981).) According to agency officials, the small business panel process takes about 8 months of work, and OSHA is one of only three federal agencies that is subject to this requirement. Experts and agency officials also told us that changing priorities are a factor that affects the time frames for developing and issuing standards, explaining that priorities may change as a result of changes within OSHA, Labor, Congress, or the presidential administration. Some agency officials and experts told us such changes often cause delays in the process of setting standards. For example, some experts noted that the agency's intense focus on publishing an ergonomics rule in the 1990s took attention away from several other standards that previously had been a priority. The standard of judicial review that applies to OSHA standards if they are challenged in court also affects OSHA's time frames because it requires more robust research and analysis than the standard that applies to many other agencies' regulations, according to some experts and agency officials. Instead of the arbitrary and capricious test provided for under the APA, the OSH Act directs courts to review OSHA's standards using a more stringent legal standard: it provides that a standard shall be upheld if supported by "substantial evidence in the record considered as a whole." 29 U.S.C. § 655(f). According to OSHA officials, this more stringent standard (known as the "substantial evidence" standard) requires a higher level of scrutiny by the courts and, as a result, OSHA staff must conduct a large volume of detailed research in order to understand all industrial processes involved in the hazard being regulated and to ensure that a given hazard control would be feasible for each process. According to OSHA officials and experts, two additional factors result in an extensive amount of work for the agency in developing standards: Substantial data challenges, which stem from a dearth of available scientific data for some hazards and having to review and evaluate scientific studies, among other sources. In addition, according to agency officials, certain court decisions interpreting the OSH Act require rigorous support for the need for and feasibility of standards. An example of one such decision cited by agency officials is a 1980 Supreme Court case, which resulted in OSHA having to conduct quantitative risk assessments for each health standard and ensure that these assessments are supported by substantial evidence. Response to adverse court decisions. Several experts with whom we spoke observed that adverse court decisions have contributed to an institutional culture in the agency of trying to make OSHA standards impervious to future adverse decisions. However, agency officials said that, in general, OSHA does not try to make a standard "bulletproof" because, while OSHA tries to avoid lawsuits that might ultimately invalidate the standard, the agency is frequently sued. For example, in the "benzene decision," the Supreme Court invalidated OSHA's revised standard for benzene because the agency failed to make a determination that benzene posed a "significant risk" of material health impairment under workplace conditions permitted by the current standard. Another example is a 1992 decision in which a U.S.
Court of Appeals struck down an OSHA health standard that would have set or updated the permissible exposure limit for over 400 air contaminants. OSHA has not issued any emergency temporary standards in nearly 30 years, citing, among other reasons, legal and logistical challenges. OSHA officials noted that the emergency temporary standard authority remains available, but the legal requirements to issue such a standard—demonstrating that workers are exposed to grave danger and establishing that an emergency temporary standard is necessary to protect workers from that grave danger—are difficult to meet. Similarly difficult to meet, according to officials, is the requirement that an emergency temporary standard must be replaced within 6 months by a permanent standard issued using the process specified in section 6(b) of the OSH Act. OSHA uses enforcement and education as alternatives to issuing emergency temporary standards to respond relatively quickly to urgent workplace hazards. OSHA officials consider their enforcement and education activities complementary. In its enforcement efforts to address urgent hazards, OSHA uses the general duty clause of the OSH Act, which requires employers to provide a workplace free from recognized hazards that are causing, or are likely to cause, death or serious physical harm to their employees. 29 U.S.C. § 654(a)(1). Under the general duty clause, OSHA has the authority to issue citations to employers even in the absence of a specific standard under certain circumstances. Along with its enforcement and standard-setting activities, OSHA also educates employers and workers to promote voluntary protective measures against urgent hazards. OSHA's education efforts include on-site consultations and publishing health and safety information on urgent hazards. For example, if its inspectors discover a particular hazard, OSHA may send letters to all employers where the hazard is likely to be present to inform them about the hazard and their responsibility to protect their workers. Although the rulemaking experiences of EPA and MSHA shed some light on OSHA's challenges, their statutory frameworks and resources differ too markedly for them to be models for OSHA's standard-setting process. For example, EPA is directed to regulate certain sources of specified air pollutants and review its existing regulations within specific time frames under section 112 of the Clean Air Act, which EPA officials told us gave the agency clear requirements and statutory deadlines for regulating hazardous air pollutants. MSHA benefits from a narrower scope of authority than OSHA and has more specialized expertise as a result of its more limited jurisdiction and frequent on-site presence at mines. Officials at MSHA, OSHA, and Labor noted that this is very different from OSHA, which oversees a vast array of workplaces and types of industries and must often supplement the agency's inside knowledge by conducting site visits. Agency officials and occupational safety and health experts shared their understanding of the challenges facing OSHA and offered ideas for improving the agency's standard-setting process. Some of these ideas involve substantial procedural changes that may be beyond the scope of OSHA's authority and would require amending existing laws, including the OSH Act. Improve coordination with other agencies: Experts and agency officials noted that OSHA has not fully leveraged available expertise at other federal agencies, especially the National Institute for Occupational Safety and Health (NIOSH), in developing and issuing its standards.
OSHA officials said the agency considers NIOSH's input on an ad hoc basis, but OSHA staff do not routinely work closely with NIOSH staff to analyze risks of occupational hazards. They stated that collaborating with NIOSH on risk assessments, and doing so more systematically in general, could reduce the time it takes to develop a standard by several months, thus facilitating OSHA's standard-setting process. Expand use of voluntary consensus standards: According to OSHA officials, many OSHA standards incorporate or reference outdated consensus standards, which could leave workers exposed to hazards that are insufficiently addressed by OSHA standards based on out-of-date technology or processes. Experts suggested that Congress pass new legislation that would allow OSHA, through a single rulemaking effort, to revise standards for a group of health hazards using current industry voluntary consensus standards, eliminating the requirement for the agency to follow the standard-setting provisions of section 6(b) of the OSH Act or the APA. One potential disadvantage of this proposal is that any abbreviation of the regulatory process could also result in standards that fail to reflect relevant stakeholder concerns, such as the imposition of unnecessarily burdensome requirements on employers. Impose statutory deadlines: OSHA officials indicated that it can be difficult to prioritize standards due to the agency's numerous and sometimes competing goals. In the past, having a statutory deadline, combined with relief from procedural requirements, resulted in OSHA issuing standards more quickly. However, some legal scholars have noted that curtailing the current rulemaking process required by the APA may result in fewer opportunities for public input and possibly decrease the quality of the standard. Also, officials from MSHA told us that, while statutory deadlines make the agency's priorities clear, this is sometimes to the detriment of other issues that must be set aside in the meantime. Change the standard of judicial review: Experts and agency officials suggested that OSHA's substantial evidence standard of judicial review be replaced with the arbitrary and capricious standard, which would make OSHA more consistent with most other federal regulatory agencies. The Administrative Conference of the United States has recommended that Congress amend laws that mandate use of the substantial evidence standard, in part because it can be unnecessarily burdensome for agencies. As a result, changing the standard of review to "arbitrary and capricious" could reduce the agency's evidentiary burden. However, if Congress has concerns about OSHA's current regulatory power, it may prefer to keep the current standard of review. Allow alternatives for supporting feasibility: Experts suggested that OSHA minimize on-site visits—a time-consuming requirement for analyzing the technological and economic feasibility of new or updated standards—by using surveys or basing its analyses on industry best practices. One limitation of surveying worksites is that, according to OSHA officials, in-person site visits are imperative for gathering sufficient data in support of most health standards. Basing feasibility analyses on industry best practices would require a statutory change, as one expert noted, and would still require OSHA to determine feasibility on an industry-by-industry basis.
Adopt a priority-setting process: Experts suggested that OSHA develop a priority-setting process for addressing hazards, and as GAO has reported, such a process could lead to improved program results. OSHA attempted such a process in the past, which allowed the agency to articulate its highest priorities for addressing occupational hazards. Reestablishing such a process may improve transparency for stakeholders and facilitate OSHA management's ability to plan its staffing and budgetary needs. However, it may not immediately address OSHA's challenges in expeditiously setting standards because such a process could take time and would require commitment from agency management. The process for developing new and updated safety and health standards for occupational hazards is a lengthy one and can result in periods when there are insufficient protections for workers. Nevertheless, any streamlining of the current process must guarantee sufficient stakeholder input to ensure that the quality of standards does not suffer. Additional procedural requirements established since 1980 by Congress and various executive orders have increased opportunities for stakeholder input in the regulatory process and required agencies to evaluate and explain the need for regulations, but they have also resulted in a more protracted rulemaking process for OSHA and other regulatory agencies. Ideas for changes to the regulatory process must weigh the benefits of addressing hazards more quickly against a potential increase in the regulatory burden imposed on the regulated community. Most methods for streamlining that have been suggested by experts and agency officials are largely outside of OSHA's authority because many procedural requirements are established by federal statute or executive order. However, OSHA can coordinate more routinely with NIOSH on risk assessments and other analyses required to support the need for standards, saving OSHA time and expense. In our report being released today, we recommend that OSHA and NIOSH more consistently collaborate on researching occupational hazards so that OSHA can more effectively leverage NIOSH expertise in its standard-setting process. Both agencies agreed with this recommendation. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For questions about this testimony, please contact me at (202) 512-7215 or moranr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Gretta L. Goodwin, Assistant Director; Susan Aschoff; Tim Bober; Anna Bonelli; Sarah Cornetto; Jessica Gray; and Sara Pelton. The following two figures (fig. 2 and fig. 3) depict a timeline for each of the 58 significant safety and health standards OSHA issued between 1981 and 2010.
This testimony discusses the challenges the Department of Labor's (Labor) Occupational Safety and Health Administration (OSHA) faces in developing and issuing safety and health standards. Workplace safety and health standards are designed to help protect over 130 million public and private sector workers from hazards at more than 8 million worksites in the United States, and have been credited with helping prevent thousands of work-related deaths, injuries, and illnesses. However, some have questioned whether the agency's approach to developing standards is overly cautious, resulting in too few standards being issued. Others counter that the process is intentionally deliberative to balance protections provided for workers with the compliance burden imposed on employers. Over the past 30 years, various presidential executive orders and federal laws have added new procedural requirements for regulatory agencies, resulting in multiple and sometimes lengthy steps OSHA and other agencies must follow. The remarks today are based on findings from our report, released today, entitled "Workplace Safety and Health: Multiple Challenges Lengthen OSHA's Standard Setting." For this report, we were asked to review (1) the time taken by OSHA to develop and issue occupational safety and health standards and the key factors that affect these time frames, (2) alternatives to the typical standard-setting process that are available for OSHA to address urgent hazards, (3) whether rulemaking at other regulatory agencies offers insight into OSHA's challenges with setting standards, and (4) ideas that have been suggested by occupational safety and health experts for improving the process. In summary, we found that, between 1981 and 2010, the time it took OSHA to develop and issue safety and health standards ranged from 15 months to 19 years and averaged more than 7 years. Experts and agency officials cited several factors that contribute to the lengthy time frames for developing and issuing standards, including increased procedural requirements, shifting priorities, and a rigorous standard of judicial review. We also found that, in addition to using the typical standard-setting process, OSHA can address urgent hazards by issuing emergency temporary standards, although the agency has not used this authority since 1983 because of the difficulty it has faced in compiling the evidence necessary to meet the statutory requirements. Instead, OSHA focuses on enforcement activities—such as enforcing the general requirement of the Occupational Safety and Health Act of 1970 (OSH Act) that employers provide a workplace free from recognized hazards—and on educating employers and workers about urgent hazards. Experiences of other federal agencies that regulate public or worker health hazards offered limited insight into the challenges OSHA faces in setting standards. For example, EPA officials pointed to certain requirements of the Clean Air Act to set and regularly review standards for specified air pollutants that have facilitated the agency's standard-setting efforts. In contrast, the OSH Act does not require OSHA to periodically review its standards.
Also, MSHA officials noted that the agency's standard-setting process benefits from both the in-house knowledge of its inspectors, who inspect every mine at least twice yearly, and a dedicated mine safety research group within the National Institute for Occupational Safety and Health (NIOSH), a federal research agency that makes recommendations on occupational safety and health. OSHA must instead rely on time-consuming site visits to obtain information on hazards and has not consistently coordinated with NIOSH to assess occupational hazards. Finally, experts and agency officials identified several ideas that could improve OSHA's standard-setting process. In our report being released today, we draw upon one of these ideas and recommend that OSHA and NIOSH more consistently collaborate on researching occupational hazards so that OSHA can more effectively leverage NIOSH expertise in its standard-setting process.
Federal funding for highways is provided to the states mostly through a series of grant programs collectively known as the Federal-Aid Highway Program. Periodically, Congress enacts multiyear legislation that authorizes the nation's surface transportation programs. In 2005, Congress enacted SAFETEA-LU, which authorized $197.5 billion for the Federal-Aid Highway Program from fiscal years 2005 through 2009. In a joint federal-state partnership, FHWA, within the Department of Transportation (DOT), administers the Federal-Aid Highway Program and distributes most funds to the states through annual apportionments established by statutory formulas. Once FHWA apportions these funds, the funds are available for states to obligate for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes, as well as for other purposes authorized in law. The amount of federal funding made available for highways was substantial—from $34.4 billion to $43.0 billion per year for fiscal years 2005 through 2009. The Highway Trust Fund was instituted by Congress in 1956 to construct the Interstate Highway System, which is currently 47,000 miles in length. The Highway Trust Fund holds certain excise taxes collected on motor fuels and truck-related taxes, including taxes on gasoline, diesel fuel, gasohol, and other fuels; truck tires and truck sales; and heavy vehicle use. In 1983, the fund was divided into the Highway Account and the Mass Transit Account. The Highway Account makes up more than 80 percent of the total fund and receives a majority of the fuel taxes as well as all truck-related taxes (see fig. 1). Most Highway Account funds (about 83 percent) were apportioned to states across 13 formula programs during the 4 years of the SAFETEA-LU period for which data are available. Included among these 13 programs are 6 "core" highway programs (see table 1). In addition to formula programs, for the time during the SAFETEA-LU period for which final data are available, Congress directly allocated about 8 percent of Highway Account funds to state departments of transportation through congressionally directed High Priority Projects. The remaining funds, about 9 percent of the total, represent dozens of other authorized programs allocated to state DOTs; congressionally directed projects other than High Priority Projects; administrative expenses; and funding provided to states by other DOT agencies, such as the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration (see fig. 2). Some of the apportioned programs use states' contributions to the Highway Account of the Highway Trust Fund as a factor in determining program funding levels for each state. Because the Department of the Treasury (Treasury) collects fuel taxes from a small number of corporations located in a relatively small number of places—not from states—FHWA has to estimate the fuel tax contributions made to the fund by users in each state. Likewise, FHWA must estimate the state of origin of various truck taxes. FHWA calculates motor fuel-related contributions based on estimates of the gallons of fuel used on highways in each state. To do so, FHWA relies on data gathered from state revenue agencies and summary tax data available from Treasury as part of the estimation process (see app. II). Because the collection and estimation process takes place over several years (see fig. 3), the data used to calculate the formula are 2 years old.
For example, the data used to apportion funding to states in fiscal year 2009 were based on estimated collections attributable to each state in fiscal year 2007. By the early 1980s, construction of the Interstate Highway System was nearing completion, and a larger portion of the funds from the Highway Trust Fund was being authorized for non-Interstate programs. The Surface Transportation Assistance Act of 1982 provided, for the first time, that each state would receive, for certain programs, a "minimum allocation" of 85 percent of its share of estimated tax payments to the Highway Account of the Highway Trust Fund. This approach was largely retained when Congress reauthorized the program in 1987. The Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) raised the minimum allocation to 90 percent. The Transportation Equity Act for the 21st Century (TEA-21) of 1998 guaranteed each state a specific share of the total program (defined as all apportioned programs plus High Priority Projects), a minimum 90.5 percent share of contributions. It also introduced rate-of-return considerations into the funds states received for the Interstate Maintenance, National Highway System, and Surface Transportation Programs. In 2005, Congress implemented through SAFETEA-LU the Equity Bonus Program, which was designed to bring all states up to a guaranteed rate of return of 92 percent by fiscal year 2008. For the time period for which final data are available, fiscal years 2005 through 2008, our analysis shows that every state but one received more funding for highway programs than its users contributed to the Highway Account (see fig. 4). The only exception, Texas, received about $1.00 (99.7 cents) for each dollar contributed. Among the other states, the return ranged from a low of $1.02 for both Arizona and Indiana to a high of $5.63 for the District of Columbia. In addition, all states, including Texas, received more in funding than their highway users contributed during both fiscal years 2007 and 2008. In effect, almost every state was a donee state during the first 4 years of SAFETEA-LU. This occurred because, overall, more funding was authorized and apportioned than was collected from highway users; the account was supplemented by general funds from the Treasury. Our rate-of-return analysis has two notable features. First, it compares funding states received from the Highway Trust Fund Highway Account with the dollars estimated to have been collected in each state and contributed by each state's highway users into the Highway Account in that same year. For example, for fiscal year 2008, it compares the highway funds states received in fiscal year 2008 with the amount collected and contributed in that fiscal year—data that did not become available until December 2009. Because of the 2-year lag (see fig. 3), fiscal year 2008 is the latest year for which these data are available. Thus, the final year of the original SAFETEA-LU authorization period, fiscal year 2009, is not included. Second, unlike other calculations used to apportion certain funds discussed further in this report, this analysis includes all funding provided to the states from the Highway Account, including (1) funds apportioned by formula, (2) High Priority Projects, and (3) other authorized programs, including safety program funding provided to states by other DOT agencies such as the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration (see fig. 2 for a breakdown of these funds).
Using the above methodology, our analysis shows that states generally received more than their highway users contributed. However, other calculations, as described below, provide different results. Because there are different methods of calculating a rate of return, and the method used affects the results, confusion can arise over whether a state is a donor or a donee. A state can appear to be a donor using one type of calculation and a donee using a different type. A second way to calculate rate of return is to apply the same dollar return calculation method but use the contribution data that are available at the time funds are apportioned to the states. This calculation method indicates that all states were donees. The data used to calculate the rate of return per dollar contributed differ from our preceding analysis in two ways. First, as shown in figure 3, this method uses 2-year-old data on contributions for apportionments, due to the time lag between when Treasury collects fuel and truck excise taxes and when funds are apportioned. Second, it uses a subset of Federal-Aid Highway programs, including both programs apportioned to states by formula and High Priority Projects; however, it does not include other allocated highway programs or other funding states receive from other DOT agencies such as the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration (see fig. 2). Using this approach, every state received more in funding from the Highway Account of the Highway Trust Fund than its users contributed for the SAFETEA-LU period. The rate of return ranged from a low of $1.04 per dollar for 16 states, including Texas, to a high of $5.26 per dollar for the District of Columbia, as shown in figure 5. This calculation results in states generally having a lower dollar rate of return than our calculation using same-year data (see fig. 4). A third calculation, based on a state's "relative share"—the amount a state receives relative to other states instead of an absolute, dollar rate of return—results in both donor and donee states. Congress defined this method in SAFETEA-LU as the one FHWA uses for calculating rates of return for the purpose of apportioning highway funding to the states. In order to calculate this rate of return, FHWA must determine what proportion of the total national contributions came from highway users in each state. The state's share of contributions into the Highway Account of the Highway Trust Fund is then used to calculate a relative rate of return—how the proportion of each state's contribution compares to the proportion of funds the state received. A comparison of the relative rate of return on states' contributions showed 28 donor states, which received less than a 100 percent relative rate of return, and 23 donee states, which received more than a 100 percent relative rate of return (see fig. 6). States' relative rates of return ranged from a low of 91.3 percent for 12 states to a high of 461 percent for the District of Columbia. Like the return-per-dollar analysis in figure 5, this calculation includes only formula funds and High Priority Projects allocated to states, and excludes other DOT authorized programs allocated to states (see fig. 2).
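To make the arithmetic behind these competing methods concrete, consider the following illustrative sketch (in Python), which computes both a dollar rate of return and a relative share rate of return for two hypothetical states. All figures are invented for illustration; they are not FHWA estimates.

    # Illustrative sketch of the two basic rate-of-return calculations
    # described above; all dollar figures are invented.

    contributions = {"State A": 2_000.0, "State B": 1_000.0}  # $ millions paid in
    funding = {"State A": 2_100.0, "State B": 1_400.0}        # $ millions received

    total_contributions = sum(contributions.values())
    total_funding = sum(funding.values())

    for state in contributions:
        # Dollar ("dollar-in, dollar-out") rate of return: funding received
        # per dollar contributed.
        dollar_return = funding[state] / contributions[state]
        # Relative share rate of return: the state's share of total funding
        # compared with its share of total contributions (100 = proportional).
        relative_share = ((funding[state] / total_funding) /
                          (contributions[state] / total_contributions)) * 100
        print(f"{state}: ${dollar_return:.2f} per dollar, "
              f"{relative_share:.0f} percent relative share")

Because total funding in this example exceeds total contributions, State A receives $1.05 for each dollar contributed (a donee in dollar terms) yet captures only 90 percent of its contribution share (a donor in relative terms), which is precisely the ambiguity discussed below.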
The difference between a state's absolute and relative rate of return can create confusion because the share calculation is sometimes mistakenly referred to as "cents on the dollar." Using the relative share method of calculation will result in some states being "winners" and other states being "losers." If one state receives a higher proportion of highway funds than its highway users contributed, another state must receive a lower proportion than it contributed. The only way to avoid this is for every state to get back exactly the same proportion that it contributed, which is impractical because estimated state contribution shares are not known until 2 years after the apportionments and allocations. Furthermore, because more funding has recently been apportioned and allocated from the Highway Account than is being contributed by highway users, a state can receive more than it contributes to the Highway Trust Fund Highway Account, making it a donee under its rate of return per dollar, but a donor under its relative share rate of return. California provides a useful example of this. From fiscal years 2005 through 2008, using same-year contributions and funding across all Highway Trust Fund Highway Account allocations and apportionments, California received $1.16 for each dollar contributed. This analysis shows California as a donee state (see table 2). Alternatively, when calculating a dollar rate of return over the full SAFETEA-LU period (fiscal years 2005 through 2009) using state contribution estimates available at the time of apportionment (fiscal years 2003 through 2007, as shown in fig. 3) and including only programs covered in rate-of-return adjustments, California remains a donee state, but received $1.04 for each dollar contributed. In contrast, using the relative share approach for the fiscal year 2005 through 2009 period, California received 91 percent of the share its highway users contributed in federal highway-related taxes, which would make it a donor state. A fourth method for calculating a state's rate of return is possible, but not normally calculated by FHWA. It involves evaluating the relative share as described above, but using same-year comparison data. Again, because of the time lag required to estimate state highway user contributions to the Highway Account, such analysis is possible only 2 years after FHWA calculates apportionments for states. Our analysis using this approach results in yet another set of rate-of-return results. For example, using available data from fiscal years 2005 to 2008, the relative rate of return for California becomes 97 percent, rather than 91 percent. When this analysis is applied to all states, a state may change its donor/donee status. For example, Minnesota, Nebraska, and Oklahoma appear both as donor and donee states, depending on the calculation method. This comparison of the relative rate of return on states' contributions showed 27 states receiving less than a 100 percent relative rate of return and 24 states receiving more than a 100 percent relative rate of return. Table 3 shows the results for all four methods described and the wide variation in states' rates of return based on the method used. Since 1982, Congress has attempted to address states' concerns regarding the rate of return on highway users' contributions to the Highway Trust Fund. In 2005, Congress enacted in SAFETEA-LU the Equity Bonus Program, designed to bring all states up to a "guaranteed" rate of return.
The Equity Bonus is calculated from a subset of Federal-Aid Highway programs, which includes 12 formula programs plus the High Priority Projects designated by Congress. In brief, since SAFETEA-LU, the Equity Bonus allocates sufficient funds to ensure that each state receives a minimum return of 90.5 percent for fiscal years 2005-2006, 91.5 percent for fiscal year 2007, and 92 percent for fiscal years 2008-2009 for the included programs. The Equity Bonus provides more funds to states than any other individual Federal-Aid Highway formula program. Over SAFETEA-LU's initial 5-year authorization period, the Equity Bonus provided $44 billion to the states, while the second largest formula program, the Surface Transportation Program, provided $32.5 billion. Each year, about $2.6 billion stays as Equity Bonus program funds and may be used for any purpose eligible under the Surface Transportation Program. Any additional Equity Bonus funds are added to the apportionments of the six "core" federal-aid highway formula programs: the Interstate Maintenance, National Highway System, Surface Transportation, Congestion Mitigation and Air Quality, Highway Bridge, and Highway Safety Improvement programs. States are frequently able to transfer a portion of their funds among the core programs, making the allocation of Equity Bonus funds to particular core programs less critical than it might otherwise be. States may qualify for Equity Bonus funding by meeting any of three criteria (see fig. 7); a state that meets more than one criterion receives funding under whichever provision provides it the greatest amount of funding, as the sketch at the end of this discussion illustrates. FHWA conducts Equity Bonus calculations annually. For the first criterion, the guaranteed relative rate of return, all states were guaranteed at least 90.5 percent of their share of estimated contributions for fiscal year 2005. The guaranteed rate increased over time, rising to 92 percent in fiscal year 2009. The second criterion, the guaranteed increase over average annual TEA-21 funding, also varied by year, rising from 117 percent in fiscal year 2005 to 121 percent for fiscal year 2009. The number of states qualifying under the first two provisions can vary from year to year. The third criterion is a guarantee to "hold harmless" states that had certain qualifying characteristics at the time SAFETEA-LU was enacted; 27 states had at least one of these characteristics, and a number of them had more than one. Forty-seven states received Equity Bonus funding every year during the SAFETEA-LU period. However, the District of Columbia, Rhode Island, and Vermont each had at least 1 year in which they did not receive Equity Bonus funding because they did not need it to reach the funding level specified under the three provisions. Maine was the only state that did not receive an Equity Bonus in any year. Half of all states received a significant increase, at least 25 percent over their core funding, in their overall Federal-Aid Highway Program funding. Each state's percentage increase in its overall funding total for apportioned programs and High Priority Projects for fiscal years 2005 through 2009 due to Equity Bonus funding is shown in figure 8.
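The selection among the three criteria can be summarized in a short sketch. This is a simplified illustration of the logic described above, not FHWA's statutory computation: the guaranteed-share percentages and the 117 and 121 percent endpoints come from the figures cited in the text, the intermediate-year growth percentages are interpolated assumptions, and the hold-harmless level is treated as a given input.

    # Simplified sketch of the Equity Bonus selection logic: a state meeting
    # more than one criterion is funded under whichever provision yields the
    # most. Intermediate-year growth percentages are assumed, not cited.

    GUARANTEED_SHARE = {2005: 0.905, 2006: 0.905, 2007: 0.915, 2008: 0.92, 2009: 0.92}
    TEA21_GROWTH = {2005: 1.17, 2006: 1.18, 2007: 1.19, 2008: 1.20, 2009: 1.21}

    def equity_bonus_floor(fiscal_year, share_of_contributions,
                           covered_program_total, avg_tea21_funding,
                           hold_harmless_level=0.0):
        # Criterion 1: guaranteed relative rate of return on covered programs.
        floor_share = (GUARANTEED_SHARE[fiscal_year] *
                       share_of_contributions * covered_program_total)
        # Criterion 2: guaranteed increase over average annual TEA-21 funding.
        floor_growth = TEA21_GROWTH[fiscal_year] * avg_tea21_funding
        # Criterion 3: "hold harmless" floor for qualifying states (0 otherwise).
        return max(floor_share, floor_growth, hold_harmless_level)

    # Illustrative state: 2 percent of national contributions in fiscal year
    # 2008, $40 billion in covered programs nationally, and $750 million in
    # average annual TEA-21 funding (all figures in $ millions, invented).
    print(equity_bonus_floor(2008, 0.02, 40_000.0, 750.0))  # 900.0 -> growth guarantee binds

Whichever guarantee produces the largest funding floor is the binding one; in the example above, the TEA-21 growth guarantee binds.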
Additional factors affect the relationship between contributions to the Highway Trust Fund and the funding states receive. These include (1) the infusion of significant amounts of general revenues into the Highway Trust Fund, (2) the challenge of factoring performance and accountability for results into transportation investment decisions, and (3) the long-term sustainability of existing mechanisms and the challenges associated with developing new approaches to funding the nation's transportation system. First, the infusion of significant amounts of general revenues into the Highway Trust Fund Highway Account breaks the link between highway taxes and highway funding. The rate-of-return approach was designed to ensure that, consistent with the user-pay system, wherein the costs of building and maintaining the system are borne by those who benefit, users receive a fair return on their investment to the extent possible. However, in fiscal year 2008 the Highway Trust Fund held insufficient amounts to sustain the authorized level of funding and, partly as a result, we placed it on our list of high-risk programs. To cover the shortfall, from fiscal years 2008 through 2010 Congress transferred a total of $34.5 billion in additional general revenues into the Highway Trust Fund, including $29.7 billion into the Highway Account. This means that, to a large extent, funding has shifted away from the contributions of highway users, breaking the link between highway taxes paid and benefits received by users. Furthermore, the infusion of a significant amount of general fund revenues complicates rate-of-return analysis because the current method of calculating contributions does not account for states' general revenue contributions. For many states, the share of Highway Trust Fund contributions differs from the share of general revenue contributions; therefore, state-based contributions to all the funding in the Trust Fund are no longer clear. In addition, beginning in March 2009, the American Recovery and Reinvestment Act of 2009 apportioned an additional $26.7 billion to the states for highways—a significant augmentation of federal highway spending that was funded with general revenues. Second, using rate of return as a major factor in determining federal highway funding levels is at odds with reexamining and restructuring federal surface transportation programs so that performance and accountability for results are factored into transportation investment decisions. As we have reported, for many surface transportation programs, goals are numerous and conflicting, and the federal role in achieving the goals is not clear. Many of these programs have no relationship to the performance of either the transportation system or the grantees receiving federal funds and do not use the best tools and approaches to ensure effective investment decisions. Our previous work has outlined the need to create well-defined goals based on identified areas of federal interest and a clearly defined federal role in relation to other levels of government. We have suggested that, where the federal interest is less evident, state and local governments could assume more responsibility, and some functions could potentially be assumed by the states or other levels of government. Furthermore, incorporating performance and accountability for results into transportation funding decisions is critical to improving results. However, the current approach presents challenges.
The Federal-Aid Highway Program, in particular, distributes funding through a complicated process in which the underlying data and factors are ultimately not meaningful because they are overridden by other provisions designed to yield a largely predetermined outcome—that of returning revenues to their state of origin. Moreover, once the funds are apportioned, states have considerable flexibility to reallocate them among highway and transit programs. As we have reported, this flexibility, coupled with a rate-of-return orientation, essentially means that the Federal-Aid Highway Program functions, to some extent, as a cash transfer, general purpose grant program. This approach poses considerable challenges to introducing performance orientation and accountability for results into highway investment decisions. For three highway programs that were designed to meet national and regional transportation priorities, we have recommended that Congress consider a competitive, criteria-based process for distributing federal funds. Finally, using rate of return as a major factor in determining federal highway funding levels poses problems because funding the nation's transportation system through taxes on motor vehicle fuels is likely to be unsustainable in the longer term. Receipts for the Highway Trust Fund derived from motor fuel taxes have declined in purchasing power, in part because the federal gasoline tax rate has not increased since 1993. In fiscal year 2008 (the last year for which data are available), total contributions to the Highway Account of the Highway Trust Fund decreased by more than $3.5 billion from fiscal year 2007, the first such decrease during the SAFETEA-LU period. Over the long term, vehicles will become more fuel efficient and increasingly run on alternative fuels—for example, more stringent fuel economy standards were adopted in 2010. As such, fuel taxes may not be a sustainable source of transportation funding. Furthermore, transportation experts have noted that transportation policy needs to recognize emerging national and global challenges, such as reducing the nation's dependence on imported fuel and minimizing the effect of transportation systems on the global climate. A fund that relies on increasing the use of motor fuels to remain solvent might not be compatible with the strategies that may be required to address these challenges. In the near future, policy discussions will need to consider what the most adequate and appropriate transportation financing systems would be and whether the current system continues to make sense. The National Surface Transportation Infrastructure Financing Commission—created by SAFETEA-LU to, among other things, explore alternative funding mechanisms for surface transportation—identified and evaluated numerous revenue sources for surface transportation programs in its February 2009 report, including alternative approaches to the fuel tax, mileage-based user fees, and freight-related charges. The report also discussed using general revenues to finance transportation investment but concluded that it was a weak option in terms of economic efficiency and other factors, and recommended that new sources of revenue to support transportation be explored. These new sources of revenue may or may not lend themselves to a rate-of-return approach. We provided a draft of this report to DOT for review and comment. DOT provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or herrp@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To determine the amount of revenue states contributed to the Highway Trust Fund Highway Account compared with the funding they received during the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) period, we completed four analyses using Federal Highway Administration (FHWA) data. We met with FHWA and other DOT officials to discuss the availability of data and appropriate methodologies. We used FHWA estimates of payments made into the Highway Account of the Highway Trust Fund, by state, and the actual total apportionments and allocations made from the fund, by state. This is sometimes referred to as a "dollar-in, dollar-out" analysis. Because the contribution data take about 2 years for FHWA to compile, for our analyses we used data for 4 of the 5 years of the SAFETEA-LU period, 2005 through 2008, as data for 2009 were not yet available. The source data are published annually in Highway Statistics and commonly referred to as table FE-221, titled "Comparison of Federal Highway Trust Fund Highway Account Receipts Attributable to the States and Federal-Aid Apportionments and Allocations from the Highway Account." FHWA officials confirmed that this table contains the best estimate of state contributions and also contains the total apportionments and allocations received by states from the Highway Account of the fund. We did not independently review FHWA's process for estimating state highway users' contributions into the Highway Trust Fund. However, we have reviewed this process in the past, and FHWA officials verified that they have made changes to the process as a result of that review. In addition, we did not attribute any prior balances in the Highway Trust Fund back to states of origin because these funds are not directly tied to any specific year or state. We examined only the fiscal year 2005 through 2008 period; other time periods could provide a different result. We performed alternative analyses to demonstrate that different methodologies provide different answers to the question of how the contributions of states' highway users compared to the funding states received. Using the same data as described above, we performed a "relative share" analysis, which compared each state's estimated proportion of the total contributions to the Highway Account with each state's proportion of total Federal-Aid Highway funding. We also examined how states fared using FHWA's approach for determining the Equity Bonus Program funding apportionments. We performed this analysis to show the outcomes for states based on the information available at the time the Equity Bonus Program apportionments are made. The Equity Bonus Program amounts are calculated using the statutory formulas for a subset of Federal-Aid Highway programs. These include all programs apportioned by formula plus the allocated High Priority Projects. FHWA uses the most current contribution data available at the time it does its estimates.
However, as explained above, the time lag for developing these data is about 2 years. Therefore, we applied the contribution data for 2003 through 2007 to the funding data for 2005 through 2009, the full SAFETEA-LU period. For these data, we (1) compared the total estimated contributions by state with the total funding received by state (the dollar-in, dollar-out methodology) and (2) compared each state's share of contributions with its share of payments received. We obtained data from the FHWA Office of Budget for the analysis of state dollar-in, dollar-out outcomes and for state relative share data for the Equity Bonus Program. We completed our analyses across the total years of the SAFETEA-LU period, 2005 through 2009. We interviewed FHWA officials, obtained additional information from FHWA on the steps taken to ensure data reliability, and determined the data were sufficiently reliable for the purposes of this report. To determine the provisions in place during the SAFETEA-LU period to address rate-of-return issues across states and how they affected the highway funding states received, we reviewed the SAFETEA-LU legislation and reports by the Congressional Research Service (CRS) and FHWA. We also spoke with FHWA and DOT officials to get their perspectives. In addition, we conducted an analysis of FHWA data on the Equity Bonus Program provisions, which were created explicitly to address the rate-of-return issues across states. Our analysis compared funding levels distributed to states via apportionment programs and High Priority Projects before and after Equity Bonus Program provisions were applied, and calculated the percentage increase each state received as a result of the Equity Bonus. To determine what additional factors affected the relationship between contributions to the Highway Trust Fund and the funding states receive, we reviewed GAO reports on federal surface transportation programs and the Highway Trust Fund, as well as CRS and FHWA reports and the report of the National Surface Transportation Infrastructure Financing Commission. In addition, we reviewed FHWA data on the status of the Highway Account of the Highway Trust Fund. We also met with officials from the Department of Transportation's Office of Budget and Programs and FHWA to obtain their perspectives on the issue. Currently, FHWA estimates state-based contributions to the Highway Account of the Highway Trust Fund through a process that includes data collection, adjustment, verification, and final calculation of the states' highway users' contributions. FHWA first collects monthly motor fuel use data and related annual state tax data from state departments of revenue. FHWA then adjusts states' data by applying its own models using federal and other data to establish data consistency among the states. FHWA provides feedback to the states on these adjustments and estimates through FHWA Division Offices. Finally, FHWA applies each state's highway users' estimated share of highway fuel usage to total taxes collected nationally to arrive at a state's contribution to the Highway Trust Fund. We did not assess the effectiveness of FHWA's process for estimating the amount of tax funds attributed to each state for this report. According to FHWA officials, data from state revenue agencies are more reliable and comprehensive than vehicle miles traveled data, so FHWA uses state tax information to calculate state contributions.
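This final attribution step can be expressed as a short calculation; the next paragraph describes the underlying data work in more detail. The sketch below is illustrative only: the gallon figures, dollar amounts, and category names are invented, and the real process covers 10 excise tax categories after extensive adjustment of the state-reported data.

    # Sketch of the attribution calculation: each state's share of on-highway
    # fuel use is applied to national tax collections by category. All gallon
    # and dollar figures below are invented for illustration.

    state_gallons = {  # billions of gallons used on highways, by fuel type
        "State A": {"gasoline": 60.0, "special_fuels": 10.0},
        "State B": {"gasoline": 40.0, "special_fuels": 30.0},
    }

    # National collections by excise tax category, in $ millions. In practice
    # there are 10 categories; only three are shown here.
    national_taxes = {"gasoline_and_gasohol": 20_000.0, "diesel": 8_000.0, "truck_tires": 500.0}
    GASOLINE_CATEGORIES = {"gasoline_and_gasohol"}  # attributed by gasoline share

    fuel_totals = {fuel: sum(g[fuel] for g in state_gallons.values())
                   for fuel in ("gasoline", "special_fuels")}

    for state, gallons in state_gallons.items():
        gasoline_share = gallons["gasoline"] / fuel_totals["gasoline"]
        special_share = gallons["special_fuels"] / fuel_totals["special_fuels"]
        # The gasoline share is applied to gasoline/gasohol taxes; the special
        # fuels share is applied to all other taxes, including truck taxes.
        contribution = sum(
            amount * (gasoline_share if cat in GASOLINE_CATEGORIES else special_share)
            for cat, amount in national_taxes.items())
        print(f"{state}: estimated contribution of ${contribution:,.0f} million")

The essential point is that a state's contribution is never measured directly; it is the state's estimated share of national fuel use applied to national tax collections.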
States submit regular reports to FHWA, including a monthly report on motor-fuel consumption due 90 days after the month's end and an annual motor-fuel tax receipts report due 90 days after the calendar year's end. States have a wide variety of fuel tracking and reporting methods, so FHWA adjusts the data to achieve uniformity. FHWA analyzes and adjusts fuel usage data, such as off-highway use related to agriculture, construction, industrial, marine, rail, aviation, and off-road recreational usage. It also analyzes and adjusts use data based on public-sector use, including federal civilian use and state, county, and municipal use. FHWA headquarters and Division Offices also work together to communicate with state departments of revenue during the attribution estimation process. According to FHWA officials, each year FHWA headquarters issues a memo prompting its Division Offices to have each state conduct a final review of the motor fuel gallons it reported. FHWA Division Offices also are required to assess their state's motor fuel use and highway tax receipt process at least once every 3 years to determine whether states are complying with FHWA guidance on motor fuel data collection. Once the data are finalized, FHWA applies each state's estimated share of taxed highway fuel use to the total taxes collected to arrive at a state's contribution in the following manner. Finalized estimations of gallons of fuel used on highways in two categories—gasoline and special fuels—allow FHWA to calculate each state's share of the total on-highway fuel usage. The shares of fuel use for each state are applied to the total amount of taxes collected by the Department of the Treasury in each of the 10 categories of highway excise tax. The state's gasoline share is applied to the gasoline and gasohol taxes, and the state's special fuels share, which includes diesel fuel, is applied to all other taxes, including truck taxes. In addition to the contact named above, Steve Cohen (Assistant Director), Robert Ciszewski, Robert Dinkelmeyer, Brian Hartman, Bert Japikse, Josh Ormond, Amy Rosewarne, and Swati Thomas made key contributions to this report.
Federal funding for highways is provided to the states mostly through a series of grant programs known as the Federal-Aid Highway Program, administered by the Department of Transportation's (DOT) Federal Highway Administration (FHWA). In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) authorized $197.5 billion for the Federal-Aid Highway Program for fiscal years 2005 through 2009. The program operates on a "user pay" system, wherein users contribute to the Highway Trust Fund through fuel taxes and other fees. The distribution of funding among the states has been a contentious issue. States that receive less than their highway users contribute are known as "donor" states, and states that receive more than their highway users contribute are known as "donee" states. GAO was asked to examine for the SAFETEA-LU period (1) how contributions to the Highway Trust Fund compared with the funding states received, (2) what provisions were used to address rate-of-return issues across states, and (3) what additional factors affect the relationship between contributions to the Highway Trust Fund and the funding states receive. To conduct this review, GAO obtained and analyzed data from FHWA, reviewed FHWA and other reports, and interviewed FHWA and DOT officials. DOT reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. Since 2005, every state received as much as or more funding for highway programs than its highway users contributed to the Highway Account of the trust fund. This was possible because more funding was authorized and apportioned than was collected from the states, and the fund needed to be augmented with general revenues. If the percentage of funds states contributed to the total is compared with the percentage of funds states received (i.e., relative share), then 28 states received a relatively lower share and 22 states received a relatively higher share than they contributed. Thus, depending on the method of calculation, the same state can appear to be either a donor or donee state. The Equity Bonus Program was used to address rate-of-return issues. It guaranteed a minimum return to states, providing them about $44 billion. Nearly all states received Equity Bonus funding, and about half received a significant increase, at least 25 percent, over their core funding. The infusion of general revenues into the Highway Trust Fund affects the relationship between funding and contributions, as a significant amount of highway funding is no longer provided by highway users. Since fiscal year 2008, Congress has transferred nearly $30 billion of general revenues to address shortfalls in the highway program when more funding was authorized than collected. Using rate of return as a major factor in determining highway funding poses challenges to introducing a performance and accountability orientation into the highway program; rate-of-return calculations in effect override other considerations to yield a largely predetermined outcome--that of returning revenues to their state of origin. Because of these and other challenges, funding surface transportation programs remains on GAO's High-Risk list.
DHS has begun to take action to work with other agencies to identify facilities that are required to report their chemical holdings to DHS but may not have done so. The first step of the CFATS process is focused on identifying facilities that might be required to participate in the program. The CFATS rule was published in April 2007, and appendix A to the rule, published in November 2007, listed 322 chemicals of interest and the screening threshold quantities for each. As a result of the CFATS rule, about 40,000 chemical facilities reported their chemical holdings and their quantities to DHS's Infrastructure Security Compliance Division (ISCD). In August 2013, we testified about the ammonium nitrate explosion at the chemical facility in West, Texas, in the context of our past CFATS work. Among other things, the hearing focused on whether the West, Texas, facility should have reported its holdings to ISCD given the amount of ammonium nitrate at the facility. During this hearing, the Director of the CFATS program remarked that throughout the existence of CFATS, DHS had undertaken and continued to support outreach and industry engagement to ensure that facilities comply with their reporting requirements. However, the Director stated that the CFATS regulated community is large and always changing and that DHS relies on facilities to meet their reporting obligations under CFATS. At the same hearing, a representative of the American Chemistry Council testified that the West, Texas, facility could be considered an "outlier" chemical facility, that is, a facility that stores or distributes chemical-related products but is not part of the established chemical industry. Preliminary findings of the U.S. Chemical Safety and Hazard Investigation Board (CSB) investigation of the West, Texas, incident showed that although certain federal agencies that regulate chemical facilities may have interacted with the facility, the ammonium nitrate at the West, Texas, facility was not covered by these programs. For example, according to the findings, the Environmental Protection Agency's (EPA) Risk Management Program, which deals with the accidental release of hazardous substances, covers the accidental release of ammonia, but not ammonium nitrate. As a result, the facility's consequence analysis considered only the possibility of an ammonia leak and not an explosion of ammonium nitrate. On August 1, 2013, the same day as the hearing, the President issued Executive Order 13650, Improving Chemical Facility Safety and Security, which was intended to improve chemical facility safety and security in coordination with owners and operators. The executive order established a Chemical Facility Safety and Security Working Group, composed of representatives from DHS; EPA; and the Departments of Justice, Agriculture, Labor, and Transportation, and directed the working group to identify ways to improve coordination with state and local partners; enhance federal agency coordination and information sharing; modernize policies, regulations, and standards; and work with stakeholders to identify best practices. In February 2014, DHS officials told us that the working group has taken actions in the areas described in the executive order. For example, according to DHS officials, the working group has held listening sessions and webinars to increase stakeholder input, explored ways to share CFATS data with state and local partners to increase coordination, and launched a pilot program in New York and New Jersey aimed at increasing federal coordination and information sharing.
DHS officials also said that the working group is exploring ways to better share information so that federal and state agencies can identify non-compliant chemical facilities and identify options to improve chemical facility risk management. This would include considering options to improve the safe and secure storage, handling, and sale of ammonium nitrate. DHS has also begun to take actions to enhance its ability to assess risk and prioritize facilities covered by the program. For the second step of the CFATS process, facilities that possess any of the 322 chemicals of interest at levels at or above the screening threshold quantity must first submit data to ISCD via an online tool called a Top-Screen. ISCD uses the data submitted in facilities' Top-Screens to assess whether facilities are covered under the program. If DHS determines that they are covered by CFATS, facilities are to then submit data via another online tool, called a security vulnerability assessment, so that ISCD can further assess their risk and prioritize the covered facilities. ISCD uses a risk assessment approach to develop risk scores to assign chemical facilities to one of four final tiers. Facilities placed in one of these tiers (tier 1, 2, 3, or 4) are considered to be high risk, with tier 1 facilities considered to be the highest risk. The risk score is intended to be derived from estimates of consequence (the adverse effects of a successful attack), threat (the likelihood of an attack), and vulnerability (the likelihood of a successful attack, given an attempt). ISCD's risk assessment approach is composed of three models, each based on a particular security issue: (1) release, (2) theft or diversion, and (3) sabotage, depending on the type of risk associated with the 322 chemicals. Once ISCD estimates a risk score based on these models, it assigns the facility to a final tier. Our prior work showed that the CFATS program was using an incomplete risk assessment approach to assign chemical facilities to a final tier. Specifically, in April 2013, we reported that the approach ISCD used to assess risk and make decisions to place facilities in final tiers did not consider all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. For example, the risk assessment approach was based primarily on consequences arising from human casualties but did not consider economic criticality consequences, as called for by the 2009 National Infrastructure Protection Plan (NIPP) and the CFATS regulation. In April 2013, we reported that ISCD officials told us that, at the inception of the CFATS program, they did not have the capability to collect or process all of the economic data needed to calculate the associated risks and were not positioned to gather all of the data needed. They said that they collected basic economic data as part of the initial screening process; however, they would need to modify the current tool to collect sufficient data. We also found that the risk assessment approach did not consider threat for approximately 90 percent of tiered facilities. Moreover, for the facilities that were tiered using threat considerations, ISCD was using 5-year-old data. We also found that ISCD's risk assessment approach was not consistent with the NIPP because it did not consider vulnerability when developing risk scores.
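The general shape of such a risk calculation can be sketched briefly. DHS's actual CFATS models and tier cutoffs are not public, so the scoring function and thresholds below are invented for illustration; the point is only to show how a score built from consequence, threat, and vulnerability maps to a final tier, and why holding vulnerability constant (as discussed next) makes that term do no work.

    # Illustrative risk scoring and tiering of the general form described
    # above. The multiplicative formula and the tier cutoffs are assumptions,
    # not DHS's actual CFATS methodology.

    def risk_score(consequence, threat, vulnerability):
        # Expected-loss style formulation: severity of a successful attack,
        # scaled by the likelihood of an attempt and of its success.
        return consequence * threat * vulnerability

    def assign_tier(score, cutoffs=(800.0, 400.0, 100.0, 25.0)):
        # Tier 1 is the highest risk; a facility scoring below the tier 4
        # cutoff would not be considered high risk (returns None).
        for tier, cutoff in enumerate(cutoffs, start=1):
            if score >= cutoff:
                return tier
        return None

    # A facility would be scored under the model matching its security issue
    # (release, theft or diversion, or sabotage); one generic score is shown.
    print(assign_tier(risk_score(consequence=1000.0, threat=0.5, vulnerability=0.9)))  # -> 2

    # If vulnerability is fixed at the same value for every facility, scores
    # differ only by consequence and threat -- the concern GAO raised.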
When assessing facility risk, ISCD's risk assessment approach treated every facility as equally vulnerable to a terrorist attack regardless of location and on-site security. As a result, in April 2013 we recommended that ISCD enhance its risk assessment approach to incorporate all elements of risk and conduct a peer review after doing so. ISCD agreed with our recommendations, and in February 2014, ISCD officials told us that they were taking steps to address them, as well as the recommendations of a recently released Homeland Security Studies and Analysis Institute (HSSAI) report that examined the CFATS risk assessment model. As with the findings in our report, HSSAI found, among other things, that the CFATS risk assessment model inconsistently considers risks across different scenarios and that the model does not adequately treat facility vulnerability. Overall, HSSAI recommended that ISCD revise the current risk-tiering model and create a standing advisory committee—with membership drawn from government, expert communities, and stakeholder groups—to advise DHS on significant changes to the methodology. In February 2014, senior ISCD officials told us that they have developed an implementation plan that outlines how they plan to modify the risk assessment approach to better include all elements of risk while incorporating our findings and recommendations and those of HSSAI. Moreover, these officials stated that they have completed significant work with Sandia National Laboratory with the goal of including economic consequences in their risk-tiering approach. They said that the final results of this effort to include economic consequences will be available in the summer of 2014. With regard to threat and vulnerability, ISCD officials said that they have been working with multiple DHS components and agencies, including the Transportation Security Administration and the Coast Guard, to see how those agencies consider threat and vulnerability in their risk assessment models. ISCD officials said that they anticipate that the changes to the risk-tiering approach should be completed within the next 12 to 18 months. We plan to verify this information as part of our recommendation follow-up process. DHS has begun to take action to lessen the time it takes to review site security plans, which could help DHS reduce the backlog of plans awaiting review. For the third step of the CFATS process, ISCD is to review facility security plans and their procedures for securing these facilities. Under the CFATS rule, once a facility is assigned a final tier, it is to submit a site security plan or participate in an alternative security program in lieu of a site security plan. The security plan is to describe security measures to be taken and how such measures are to address applicable risk-based performance standards. After ISCD receives the site security plan, the plan is reviewed by teams of ISCD employees (i.e., physical, cyber, chemical, and policy specialists), contractors, and ISCD inspectors. If ISCD finds that the requirements are satisfied, ISCD issues a letter of authorization to the facility. After ISCD issues a letter of authorization to the facility, ISCD is to then inspect the facility to determine if the security measures implemented at the site comply with the facility's authorized plan.
If ISCD determines that the site security plan is in compliance with the CFATS regulation, ISCD approves the site security plan and issues a letter of approval to the facility, and the facility is to implement the approved site security plan. In April 2013, we reported that it could take another 7 to 9 years before ISCD would be able to complete reviews of the approximately 3,120 plans in its queue at that time. As a result, we estimated that the CFATS regulatory regime, including compliance inspections (discussed in the next section), would likely not be implemented for 8 to 10 years. We also noted in April 2013 that ISCD had revised its process for reviewing facilities' site security plans. ISCD officials stated that they viewed ISCD's revised process to be an improvement because, among other things, teams of experts reviewed parts of the plans simultaneously rather than sequentially, as had occurred in the past. In April 2013, ISCD officials said that they were exploring ways to expedite the process, such as streamlining inspection requirements. In February 2014, ISCD officials told us that they are taking a number of actions intended to lessen the time it takes to complete reviews of remaining plans, including the following: providing updated internal guidance to inspectors and ISCD officials; updating the internal case management system; providing updated external guidance to facilities to help them better prepare their site security plans; conducting inspections using one or two inspectors at a time over the course of 1 day, rather than multiple inspectors over the course of several days; conducting pre-inspection calls to the facility to help resolve technical issues beforehand; creating and leveraging the use of corporate inspection documents (i.e., documents for companies that have over seven regulated facilities in the CFATS program); supporting the use of alternative security programs to help clear the backlog of security plans because, according to DHS officials, alternative security plans are easier for some facilities to prepare and use; and taking steps to streamline and revise some of the online data collection tools, such as the site security plan, to make the process faster. It is too soon to tell whether DHS's actions will significantly reduce the amount of time needed to resolve the backlog of site security plans because these actions have not yet been fully implemented. In April 2013, we also reported that DHS had not finalized the personnel surety aspect of the CFATS program. The CFATS rule includes a risk-based performance standard for personnel surety, which is intended to provide assurance that facility employees and other individuals with access to the facility are properly vetted and cleared for access to the facility. In implementing this provision, we reported that DHS intended to (1) require facilities to perform background checks on and ensure appropriate credentials for facility personnel and, as appropriate, visitors with unescorted access to restricted areas or critical assets, and (2) check for terrorist ties by comparing certain employee information with its terrorist screening database. However, as of February 2014, DHS had not finalized its information collection request that defines how the personnel surety aspect of the performance standards will be implemented. Thus, DHS is currently approving facility security plans conditionally, whereby plans are not to be finally approved until the personnel surety aspect of the program is finalized.
According to ISCD officials, once the personnel surety performance standard is finalized, they plan to reexamine each conditionally approved plan. They would then grant final approval as long as ISCD had assurance that the facility was in compliance with the personnel surety performance standard. As an interim step, in February 2014, DHS published a notice about its Information Collection Request (ICR) for personnel surety to gather information and comments prior to submitting the ICR to the Office of Management and Budget (OMB) for review and clearance. According to ISCD officials, it is unclear when the personnel surety aspect of the CFATS program will be finalized. We have also previously reported on issues concerning a biometric access control system (technology that determines an individual's identity by detecting and matching unique physical or behavioral characteristics, such as fingerprint or voice patterns) and its usefulness with regard to the CFATS program. We recommended that DHS take steps to resolve these issues, including completing a security assessment that addresses internal control weaknesses, among other things. The explanatory statement accompanying the Consolidated Appropriations Act, 2014, directed DHS to complete the recommended security assessment. As of February 2014, DHS had not yet done the assessment, and although DHS had taken some steps to conduct an internal control review, it had not corrected all the control deficiencies identified in our report. DHS reports that it has begun to perform compliance inspections at regulated facilities. The fourth step in the CFATS process is compliance inspections, by which ISCD determines if facilities are employing the measures described in their site security plans. During the August 1, 2013, hearing on the West, Texas, explosion, the Director of the CFATS program stated that ISCD planned to begin conducting compliance inspections in September 2013 for facilities with approved site security plans. The Director further noted that the inspections would generally be conducted approximately 1 year after plan approval. According to ISCD, as of February 24, 2014, ISCD had conducted 12 compliance inspections. ISCD officials stated that they have considered using third-party nongovernmental inspectors to conduct inspections but thus far have no plans to do so. In closing, we anticipate continuing to provide oversight of the issues outlined above and look forward to helping this and other committees of Congress continue to oversee the CFATS program and DHS's progress in implementing it. Currently, the explanatory statement accompanying the Consolidated and Further Continuing Appropriations Act, 2013, requires GAO to continue its ongoing effort to examine the extent to which DHS has made progress and encountered challenges in developing CFATS. Additionally, once the CFATS program begins performing and completing a sufficient number of compliance inspections, we are mandated to review those inspections along with various aspects of them. Moreover, Ranking Member Thompson of the Committee on Homeland Security has requested that we examine, among other things, DHS efforts to assess information on facilities that submit data but that DHS ultimately decides are not covered by the program. Chairman Meehan, Ranking Member Clarke, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time.
For information about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or CaldwellS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this and our prior work included John F. Mortin, Assistant Director; Jose Cardenas, Analyst-in-Charge; Chuck Bausell; Michele Fejfar; Jeff Jensen; Tracey King; Marvin McGill; Jessica Orr; Hugh Paquette; and Ellen Wolfe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Facilities that produce, store, or use hazardous chemicals could be of interest to terrorists intent on using toxic chemicals to inflict mass casualties in the United States. As required by statute, DHS issued regulations establishing standards for the security of these facilities. DHS established the CFATS program to assess risk at facilities covered by the regulations and inspect them to ensure compliance. In February 2014, legislation was introduced related to several aspects of the program. This statement provides observations on DHS efforts related to the CFATS program. It is based on the results of previous GAO reports issued in July 2012 and April 2013, with selected updates conducted in February 2014. In conducting the earlier work, GAO reviewed DHS reports and plans on the program and interviewed DHS officials. In addition, GAO interviewed DHS officials to update information. In managing its Chemical Facility Anti-Terrorism Standards (CFATS) program, the Department of Homeland Security (DHS) has a number of efforts underway to identify facilities that are covered by the program, assess risk and prioritize facilities, review and approve facility security plans, and inspect facilities to ensure compliance with security regulations. Identifying facilities. DHS has begun to work with other agencies to identify facilities that should have reported their chemical holdings to CFATS but may not have done so. DHS initially identified about 40,000 facilities by publishing a CFATS rule requiring that facilities with certain types of chemicals report the types and quantities of these chemicals. However, a chemical explosion in West, Texas, last year demonstrated the risk posed by chemicals covered by CFATS. Subsequent to this incident, the President issued Executive Order 13650, which was intended to improve chemical facility safety and security in coordination with owners and operators. Under the executive order, a federal working group is sharing information to identify additional facilities that are to be regulated under CFATS, among other things. Assessing risk and prioritizing facilities. DHS has begun to enhance its ability to assess risks and prioritize facilities. DHS assessed the risks of facilities that reported their chemical holdings in order to determine which ones would be required to participate in the program and subsequently develop site security plans. GAO's April 2013 report found weaknesses in multiple aspects of the risk assessment and prioritization approach and made recommendations to review and improve this process. In February 2014, DHS officials told us they had begun to take action to revise the process for assessing risk and prioritizing facilities. Reviewing security plans. DHS has also begun to take action to speed up its reviews of facility security plans. Per the CFATS regulation, DHS was to review security plans and visit the facilities to make sure their security measures met the risk-based performance standards. GAO's April 2013 report found a 7- to 9-year backlog for these reviews and visits, and DHS has begun to take action to expedite these activities. As a separate matter, one of the performance standards—personnel surety, under which facilities are to perform background checks and ensure appropriate credentials for personnel and visitors as appropriate—is still being developed. As of February 2014, DHS has reviewed and conditionally approved facility plans pending final development of the personnel surety performance standard. Inspecting to verify compliance.
In February 2014, DHS reported it had begun to perform inspections at facilities to ensure compliance with their site security plans. According to DHS, these inspections are to occur about 1 year after facility site security plan approval. Given the backlog in plan approvals, this process has only recently started, and GAO has not yet reviewed this aspect of the program. In a July 2012 report, GAO recommended that DHS measure its performance in implementing actions to improve its management of CFATS. In an April 2013 report, GAO recommended that DHS enhance its risk assessment approach to incorporate all elements of risk, conduct a peer review, and gather feedback on its outreach to facilities. DHS concurred with these recommendations and has taken actions or has actions underway to address them. GAO provided a draft of the updated information to DHS for review, and DHS confirmed its accuracy.
The FCS concept is part of a pervasive change to what the Army refers to as the Future Force. The Army is reorganizing its current forces into modular brigade combat teams, meaning troops can be deployed on different rotational cycles as a single team or as a cluster of teams. The Future Force is designed to transform the Army into a more rapidly deployable and responsive force and to enable the Army to move away from the large division-centric structure of the past. Each brigade combat team is expected to be highly survivable and the most lethal brigade-sized unit the Army has ever fielded. The Army expects FCS-equipped brigade combat teams to provide significant warfighting capabilities to DOD's overall joint military operations. The Army is implementing its transformation plans at a time when current U.S. ground forces are playing a critical role in the ongoing conflicts in Iraq and Afghanistan. The FCS family of weapons includes 18 manned and unmanned ground vehicles, air vehicles, sensors, and munitions that will be linked by an information network. These vehicles, weapons, and equipment will comprise the majority of the equipment needed for a brigade combat team. The Army plans to buy 15 brigades' worth of FCS equipment by 2025. We have frequently reported on the importance of using a solid, executable business case before committing resources to a new product development. In its simplest form, this is evidence that (1) the warfighter's needs are valid and can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when needed. At the heart of a business case is a knowledge-based approach to product development that demonstrates high levels of knowledge before significant commitments are made. In essence, knowledge supplants risk over time. This building of knowledge can be described as three levels or knowledge points that should be attained over the course of a program: First, at program start, the customer's needs should match the developer's available resources—mature technologies, time, and funding. An indication of this match is the demonstrated maturity of the technologies needed to meet customer needs. Second, about midway through development, the product's design should be stable and demonstrate that it is capable of meeting performance requirements. The critical design review is that point in time because it generally signifies when the program is ready to start building production-representative prototypes. Third, by the time of the production decision, the product must be shown to be producible within cost, schedule, and quality targets, to have demonstrated its reliability, and, through realistic system-level testing, to perform as needed. The three knowledge points are related, in that a delay in attaining one delays the points that follow. Thus, if the technologies needed to meet requirements are not mature, design and production maturity will be delayed.
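Because the cascade among the three knowledge points drives much of the analysis that follows, a minimal sketch may help make it concrete. The code below simply encodes the three gates described above and the rule that a miss at one gate propagates forward; it is an illustration of the reasoning, not a model the Army or GAO actually uses:

```python
# Illustrative sketch of the three knowledge points described above.
# Encodes only the cascade rule: a knowledge point cannot truly be
# attained until every earlier one has been.

KNOWLEDGE_POINTS = [
    ("KP1 (program start)", "critical technologies demonstrated mature"),
    ("KP2 (critical design review)", "design stable; performance requirements met"),
    ("KP3 (production decision)", "producibility and reliability demonstrated in testing"),
]

def attained(claimed: dict) -> list:
    """Return the knowledge points actually attained, honoring the cascade:
    once a gate is missed, every later gate is unattained regardless of claims."""
    result = []
    for name, _criterion in KNOWLEDGE_POINTS:
        if not claimed.get(name, False):
            break
        result.append(name)
    return result

# Example: a program claiming a stable design (KP2) without mature
# technologies (KP1) has, in knowledge terms, attained nothing.
print(attained({"KP2 (critical design review)": True}))  # -> []
```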
To develop the information on the Future Combat System program's progress toward meeting established goals, the contribution of critical technologies and complementary systems, and the estimates of cost and affordability, we interviewed officials of the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Army G-8; the Office of the Under Secretary of Defense (Comptroller); the Secretary of Defense's Cost Analysis Improvement Group; the Director of Operational Test and Evaluation; the Assistant Secretary of the Army (Acquisition, Logistics, and Technology); the Army's Training and Doctrine Command; Surface Deployment and Distribution Command; the Program Manager for the Future Combat System (Brigade Combat Team); the Future Combat System Lead Systems Integrator; and other contractors. We reviewed, among other documents, the Future Combat System's Operational Requirements Document, the Acquisition Strategy Report, the Baseline Cost Report, the Critical Technology Assessment and Technology Risk Mitigation Plans, and the Integrated Master Schedule. We attended and/or reviewed the results of the FCS System of Systems Functional Review, In-Process Reviews, Board of Directors Reviews, and multiple system demonstrations. In our assessment of the FCS, we used the knowledge-based acquisition practices drawn from our large body of past work as well as DOD's acquisition policy and the experiences of other programs. We conducted this work in response to the National Defense Authorization Act of Fiscal Year 2006, which requires GAO to annually report on the product development phase of the FCS acquisition. We performed our review from June 2005 to March 2006 in accordance with generally accepted auditing standards. An improved business case for the FCS program is essential to help ensure that the program is successful in the long run. The FCS is unusual in that it is developing 18 systems and a network under a single program office and lead system integrator in the same amount of time that it would take to develop a single system. It also started development with less knowledge than called for by best practices and DOD policy. The Army has made significant progress defining FCS's system of systems requirements, particularly when taking into account the daunting number of them involved—nearly 11,500 at this level. Yet system-level requirements are not yet stabilized and will continue to change, postponing the needed match between requirements and resources. Now, the Army and its contractors are working to complete the definition of system-level requirements, and the challenge is in determining whether those requirements are technically feasible and affordable. Army officials say it is almost certain that some FCS system-level requirements will have to be modified, reduced, or eliminated; the only uncertainty is by how much. We have previously reported that unstable requirements can lead to cost, schedule, and performance shortfalls. Once the Army gains a better understanding of the technical feasibility and affordability of the system-level requirements, trade-offs between the developer and the warfighter will have to be made, and the ripple effect of such trade-offs on key program goals will have to be reassessed. Army officials have told us that it will be 2008 before the program reaches the point that it should have reached before it started in May 2003 in terms of stable requirements.
Development of concrete program requirements depends in large part on stable, fully mature technologies. Yet, according to the latest independent assessment, the Army has not fully matured any of the technologies critical to FCS's success. Some of FCS's critical technologies may not reach a high level of maturity until the final major phase of acquisition, the start of production. The Army considers a lower level of demonstration to be acceptable maturity, but even against this standard, only about one-third of the technologies are mature. We have reported that going forward into product development without demonstrating mature technologies increases the risk of cost growth and schedule delays throughout the life of the program. The Army is also facing challenges with several of the complementary programs considered essential for meeting FCS's requirements. Some are experiencing technology difficulties, and some have not been fully funded. These difficulties underscore the gap between requirements and available resources that must be closed if the FCS business case is to be executable. Technology readiness levels (TRL) are measures pioneered by the National Aeronautics and Space Administration and adopted by DOD to determine whether technologies are sufficiently mature to be incorporated into a weapon system. Our prior work has found TRLs to be a valuable decision-making tool because they can presage the likely consequences of incorporating a technology at a given level of maturity into a product development. The maturity levels range from paper studies (level 1), to prototypes tested in a realistic environment (level 7), to an actual system proven in mission operations (level 9). Successful DOD programs have shown that critical technologies should be mature to at least TRL 7 before the start of product development. In the case of the FCS program, the latest independent technology assessment shows that none of the critical technologies are at TRL 7, and only 18 of the 49 technologies currently rated have demonstrated TRL 6, defined as prototype demonstration in a relevant environment. None of the critical technologies may reach TRL 7 until the production decision in fiscal year 2012, according to Army officials. Projected dates for FCS technologies to reach TRL 6 have slipped significantly since the start of the program. In the 2003 technology assessment, 87 percent of FCS's critical technologies were projected to be mature to TRL 6 by 2005. When the program was reassessed in April 2005, only 31 percent of the technologies were expected to mature to TRL 6 by 2005, and all technologies are not expected to reach that level until 2009. The knowledge deficits for requirements and technologies have created enormous challenges for devising an acquisition strategy that can demonstrate the maturity of design and production processes. Several efforts within the FCS program are facing significant problems that may eventually involve reductions in promised capabilities and may lead to cost overruns and schedule delays. Even if requirements setting and technology maturation proceed without incident, FCS design and production maturity will still not be demonstrated until after the production decision is made. Production is the most expensive phase in which to resolve design or other problems. The Army's acquisition strategy for FCS does not reflect a knowledge-based approach.
Figure 1 shows how the Army's strategy for acquiring FCS involves concurrent development, design reviews that occur late, and other issues that are out of alignment with the knowledge-based approach outlined in DOD policy. Ideally, the preliminary design review occurs at or near the start of product development. Doing so can help reveal key technical and engineering challenges and can help determine if a mismatch exists between what the customer wants and what the product developer can deliver. An early preliminary design review is intended to help stabilize cost, schedule, and performance expectations. The critical design review ideally occurs midway into the product development phase. The critical design review should confirm that the system design is stable enough to build production-representative prototypes for testing. The FCS acquisition schedule indicates several key issues: The program did not have the basic knowledge needed for program start in 2003. While the preliminary design review normally occurs at or near the start of product development, the Army has scheduled it in fiscal year 2008, about 5 years after the start of product development. Instead of the sequential development of knowledge, major elements of the program are being conducted concurrently. The critical design review is scheduled in fiscal year 2010, just 2 years after the scheduled preliminary review and the planned start of detailed design. The timing of the design reviews is indicative of how late knowledge will be attained in the program, assuming all goes according to plan. The critical design review is also scheduled just 2 years before the initial FCS low-rate production decision in fiscal year 2012, leaving little time for product demonstration and correction of any issues that are identified at that time. The FCS program is thus susceptible to late-cycle churn, which refers to the additional—and unanticipated—time, money, and effort that must be invested to overcome problems discovered late through testing. The total cost for the FCS program, now estimated at $160.7 billion (then-year dollars), has climbed 76 percent from the Army's first estimate. Because uncertainties remain regarding FCS's requirements and the Army faces significant challenges in technology and design maturity, we believe the Army's latest cost estimate still lacks a firm knowledge base. Furthermore, this latest estimate does not include complementary programs that are essential for FCS to perform as intended, or all of the necessary funding for FCS spin-outs. The Army has taken some steps to help manage the growing cost of FCS, including establishing cost ceilings or targets for development and production; however, program officials told us that setting cost limits may result in accepting lower capabilities. As FCS's higher costs are recognized, it remains unclear whether the Army will have the ability to fully fund the planned annual procurement costs for the current FCS program of record. FCS affordability depends on the accuracy of the cost estimate, the overall level of development and procurement funding available to the Army, and the level of competing demands. At the start of product development, FCS program officials estimated that the program would require about $20 billion in then-year dollars for research, development, testing, and evaluation and about $72 billion to procure the FCS systems to equip 15 brigade combat teams.
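Those figures are enough for a rough check of the reported growth. Using only the rounded numbers in this statement (the Army's actual baseline was stated more precisely):

\[ 20 + 72 = 92 \text{ (billions of then-year dollars at program start)} \]

\[ \frac{160.7 - 92}{92} \approx 0.75 \]

which is consistent with the reported 76 percent increase once rounding in the starting figures is accounted for.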
When the program began, officials could only derive the cost estimate on the basis of what they knew then—requirements were still undefined and technologies were immature. The total FCS program is now expected to cost $160.7 billion in then-year dollars, a 76 percent increase. Table 1 summarizes the growth of the FCS cost estimate. According to the Army, the current cost estimate is more realistic, better informed, and based on a more reasonable schedule. It accounts for the restructure of the FCS program and its increased scope, the 4-year extension to the product development schedule, the reintroduction of four systems that had been previously deferred, and the addition of a spin-out concept whereby mature FCS capabilities would be provided, as they become available, to current Army forces. It also reflects a rate of production reduced from an average of 2 brigade combat teams per year to an average of 1.5 brigades per year. Instead of completing all 15 brigades by 2020, the Army would complete production in 2025. This cost estimate has also benefited from progress made in defining system of systems requirements. Figure 2 compares the funding profiles for the original program and for the latest restructured program. The current funding profile is lower than the original through fiscal year 2013, but is substantially higher than the original after fiscal year 2013. It still calls for making substantial investments before key knowledge has been demonstrated. Stretching out FCS development by 4 years freed up about $9 billion in funding through fiscal year 2011 for allocation to other Army initiatives. Originally, FCS annual funding was not to exceed $10 billion in any one year. Now, the cost estimate is expected to exceed $10 billion in each of 9 years. While it is a more accurate reflection of program costs than the original estimate, the latest estimate is still based on a low level of knowledge about whether FCS will work as intended. The cost estimate has not been independently validated, as called for by DOD's acquisition policy. The Cost Analysis Improvement Group will not release its updated independent estimate until spring 2006, after the planned Defense Acquisition Board review of the FCS program. The latest cost estimate also does not include all the costs that will be needed to field FCS capabilities. For instance: Costs for the 52 essential complementary programs are separate, and some of those costs could be substantial. For example, the costs of the Joint Tactical Radio System Clusters 1 and 5 programs were expected to be about $32.6 billion (then-year dollars). Some complementary programs, such as the Mid-Range Munition and Javelin Block II, are currently not funded for their full development. These and other unfunded programs would have to compete for already tight funding. Procurement of the spin-outs from the FCS program to current Army forces is not yet entirely funded. Procuring the FCS items expected to be spun out to current forces is expected to cost about $19 billion, and the needed installation kits may add $4 billion. Adding these items brings the total required FCS investment to the $200 billion range. Through fiscal year 2006, the Army will have budgeted over $8 billion for FCS development. Through fiscal year 2008, when the preliminary design review is held, the amount budgeted for FCS will total over $15 billion. By the time the critical design review is held in 2010, about $22 billion will have been budgeted.
By the time of the production decision in 2012, about $27 billion will have been budgeted. The affordability of the FCS program depends on several key assumptions. First, the program must proceed without exceeding its currently projected costs. Second, the Army's annual procurement budget—not including funds specifically allocated for the modularity initiative—is expected to grow from between $11 billion and $12 billion in fiscal year 2006 to at least $20 billion by fiscal year 2011. The large annual procurement costs for FCS are expected to begin in fiscal year 2012, which is beyond the current Future Years Defense Plan period (fiscal years 2006-2011). FCS procurement will represent about 60 to 70 percent of Army procurement from fiscal years 2014 to 2022. This situation is typically called a funding bow wave. As it prepares the next Defense Plan, the Army will face the challenge of allocating sufficient funding to meet the increasing needs for FCS procurement in fiscal years 2012 and 2013. If all the needed funding cannot be identified, the Army will have to consider reducing the FCS procurement rate or delaying or reducing the items to be spun out to current Army forces. However, reducing the FCS procurement rate would increase FCS unit costs and extend the time needed to deploy FCS-equipped brigade combat teams. Given the risks facing the FCS program, the business arrangements made for carrying out the program will be critical to protecting the government's interests. To manage the program, the Army is using a lead system integrator (LSI), Boeing. As LSI, Boeing carries greater responsibilities than a traditional prime contractor. The Army is in the process of finalizing a new Federal Acquisition Regulation (FAR)-based contract in response to concerns that the previous Other Transaction Agreement was not the best match for a program of FCS's size and risks. This contract will establish the expectations, scope, deliverables, and incentives that will drive the development of the FCS. From the outset of the FCS program, the Army has employed a management approach that centers on the LSI. The Army did not believe it had the resources or flexibility to field a program as complex as FCS under the aggressive timeline established by the then-Army Chief of Staff. Although there is no complete consensus on the definition of LSI, generally it is a prime contractor with increased responsibilities. These responsibilities may include greater involvement in requirements development, design, and source selection of major system and subsystem subcontractors. The government has used the LSI approach on other programs that require system-of-systems integration. The FCS program started as a joint Defense Advanced Research Projects Agency and Army program in 2000. In 2002, the Army competitively selected Boeing as the LSI for the concept technology demonstration phase of FCS. The Army's intent is to maintain the LSI for the remainder of FCS development. Boeing and the Army established a relationship to work in what has become known as a "one-team" management style with several first-tier subcontractors to develop, manage, and execute all aspects of the FCS program. For example, Boeing's role as LSI extends beyond that of a traditional prime contractor and includes some elements of a partner to the government in ensuring the design, development, and prototype implementation of the FCS network and family of systems.
In this role, Boeing is responsible for (1) engineering a system of systems solution, (2) competitive selection of industry sources for development of the individual systems and subsystems, and (3) integrating and testing these systems to satisfy the requirements of the system of systems specifications. Boeing is also responsible for the actual development of two critical elements of the FCS information network—the System of Systems Common Operating Environment and the Warfighter-Machine Interface. The Army participates in program decisions, such as make/buy and competitive selection decisions, and it may disapprove any action taken under these processes. The decision structure of the program is made up of several layers of Integrated Product Teams. These teams are co-chaired by Army and LSI representatives. Government personnel participate in each of the integrated product teams. This collaborative structure is intended to force decision making to the lowest level in the program. Decisions can be elevated to the program manager level, and ultimately the Army has final decision authority. The teams also include representation of the Army user community, whose extensive presence in the program is unprecedented. The advantages of using an LSI approach on a program like FCS include the ability of the contractor to know, understand, and integrate functions across the various FCS platforms. Thus, the LSI has the ability to facilitate movement of requirements and make trade-offs across platforms. This contrasts with past practices of focusing on each platform individually. However, the extent of contractor responsibility in so many aspects of the FCS program management process, including responsibility for making numerous cost and technical trade-offs and for conducting at least some of the subcontractor source selections, is also a potential risk. As an example, many of the subcontractor source selections are for major weapon systems that, in other circumstances, would have been conducted by an Army evaluation team, an Army contracting officer, and a senior-level Army source selection authority. These decisions, including procurement decisions for major weapon systems, are now being made by the LSI with Army involvement. This level of responsibility, as with other LSI responsibilities in the program management process, requires careful government oversight to ensure that the Army's interests are adequately protected now and in the future. Thus far, the Army has been very involved in the management of the program and in overseeing the LSI. It is important that, as the program proceeds, the Army continue to be vigilant about maintaining control of the program and avoiding organizational conflicts of interest, such as those that can arise when the LSI is also a supplier. As discussed in the next section, the Army intends the new contract to provide additional protection against potential conflicts. The Army and Boeing entered into a contractual instrument called an Other Transaction Agreement (OTA). The purpose of the OTA was to encourage innovation and to take advantage of the OTA's wide latitude in tailoring business, organizational, and technical relationships to achieve the program goals. The original OTA was modified in May 2003 and fully finalized in December 2003 for the Systems Development and Demonstration phase of the FCS program. The latest major modification to the OTA, to implement the 2004 program restructuring, was finalized in March 2005.
As you know, questions have been raised about the appropriateness of the Army's use of an OTA for a program as large and risky as FCS. The Airland Subcommittee held a hearing in March 2005 that addressed this issue, among others. In particular, concern has been raised about the protection of the government's interests under the OTA arrangement and the Army's choice not to include standard FAR clauses in the OTA. In April 2005, the OTA was modified by the Army to incorporate the procurement integrity, Truth in Negotiations, and Cost Accounting Standards clauses. Also in April 2005, the Secretary of the Army decided that the Army should convert the OTA to a FAR-based contract. A request for proposals was issued by the Army on August 15, 2005. An interim letter contract was issued on September 23, 2005. The Systems Development and Demonstration work through September 2005 will be accounted for under the OTA and all future work under the FAR-based contract. Boeing/SAIC and all of the FCS subcontractors were to submit a new certifiable proposal for the remainder of Systems Development and Demonstration, and that proposal will be the subject of negotiations with the Army. The Army expects the content of the program—its statement of work—to remain the same, and it does not expect the cost, schedule, and performance of the overall Systems Development and Demonstration effort to change materially. The target date for completion of the finalized FAR contract is March 28, 2006. In the coming months, we will be taking a close look at the new contract as part of our continuing work on FCS that is now mandated by the Defense Authorization Act for Fiscal Year 2006. The FAR-based contract is expected to include standard FAR clauses, including the Truth in Negotiations and Cost Accounting Standards clauses. The letter contract includes Organizational Conflict of Interest clauses whereby Boeing and SAIC cannot compete for additional FCS subcontracts. Also, other current subcontractors can compete for work only if they do not prepare the request for proposals or participate in the source selection process. The last major revision of the OTA in March 2005 had a total value of approximately $21 billion. Through September 2005, the Army and LSI estimate that about $3.3 billion will be chargeable to the OTA. The FAR-based contract will cover all activity after September 2005 and is expected to have a value of about $17.4 billion. Both the OTA and the FAR-based contract are cost-plus-fixed-fee contracts with additional incentive fees. According to the Army, the fee arrangement is designed to address the unique relationship between the Army and the LSI and to acknowledge their "shared destiny" by providing strategic incentives for the LSI to prove out technologies, integrate systems, and move the program forward to production, at an affordable cost and on schedule. In the OTA, the annual fixed fee was set at 10 percent of estimated cost and the incentive fee available was 5 percent. The Army plans to change the fee structure for the FCS program in the new contract. The request for proposals for the new contract proposed a 7 percent fixed fee and an 8 percent incentive fee. The OTA established 10 distinct events at which LSI performance will be evaluated against predetermined performance, cost, and schedule criteria. (Those events are expected to be retained in the FAR contract.) One event has already occurred—the System of Systems Functional Requirements Review was held in August 2005.
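Returning briefly to the fee structures described above, simple arithmetic shows what the proposed change does and does not alter. On a hypothetical $1 billion of estimated cost (an illustrative base only; the actual fee base and incentive criteria are subject to the negotiations described above):

\[ \text{OTA: } (0.10 + 0.05) \times \$1{,}000\text{M} = \$150\text{M maximum fee, of which } \$100\text{M is fixed} \]

\[ \text{Proposed: } (0.07 + 0.08) \times \$1{,}000\text{M} = \$150\text{M maximum fee, of which only } \$70\text{M is fixed} \]

The 15 percent ceiling is unchanged, but up to $80 million per billion of cost would have to be earned against the event-based criteria rather than $50 million, shifting more of the LSI's compensation onto demonstrated progress.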
The next event is called the Capabilities Maturity Review, and it is expected to occur in June or July 2006. As the details are worked out, it is important that the new contract encourage meaningful demonstrations of knowledge and preserve the government's ability to act on knowledge should the program progress differently than planned. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or members of the Subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841. Individuals making key contributions to this statement include Robert L. Ackley, Lily J. Chin, Noah B. Bleicher, Marcus C. Ferguson, William R. Graveline, Guisseli Reyes, Michael J. Hesse, John P. Swain, Robert S. Swierczek, and Carrie R. Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Future Combat System (FCS) is a networked family of weapons and other systems at the forefront of efforts by the Army to become a lighter, more agile, and more capable combat force. When complementary programs are considered, projected investment costs for FCS are estimated to be on the order of $200 billion. FCS's cost is of concern given that developing and producing new weapon systems is among the largest investments the government makes, and FCS adds significantly to that total. Over the last five years, the Department of Defense (DOD) doubled its planned investments in such systems from $700 billion in 2001 to $1.4 trillion in 2006. At the same time, research and development costs on new weapons continue to grow on the order of 30 to 40 percent. FCS will be competing for significant funds at a time when federal fiscal imbalances are exerting great pressures on discretionary spending. In the absence of more money being available, FCS and other programs must be executable within projected resources. Today, I would like to discuss (1) the business case needed for FCS to be successful and (2) the related business arrangements that support that case. There are a number of compelling aspects of the FCS program, and it is hard to argue with the program's goals. However, the elements of a sound business case for such an acquisition program—firm requirements, mature technologies, a knowledge-based acquisition strategy, a realistic cost estimate, and sufficient funding—are not yet present. FCS began product development prematurely in 2003. Since then, the Army has made several changes to improve its approach for acquiring FCS. Yet, today, the program remains a long way from having the level of knowledge it should have had before starting product development. FCS has all the markers for risks that would be difficult to accept for any single system, much less a complex, multi-system effort. These challenges are even more daunting in the case of FCS not only because there are so many of them but because FCS represents a new concept of operations that is predicated on technological breakthroughs. Thus, technical problems, which accompany immaturity, not only pose traditional risks to cost, schedule, and performance; they pose risks to the new fighting concepts envisioned by the Army. Many decisions that will involve trade-offs by the government can be anticipated in the program. Facts of life, like technologies not working out, reductions in available funds, and changes in performance parameters, must be anticipated. It is important, therefore, that the business arrangements for carrying out the FCS program—primarily the nature of the development contract and the lead system integrator (LSI) approach—preserve the government's ability to adjust course as dictated by these facts of life. At this point, the $8 billion to be spent on the program through fiscal year 2006 is a small portion of the $200 billion total. DOD needs to guard against letting the buildup in investment limit its decision-making flexibility as essential knowledge regarding FCS becomes available. As the details of the Army's new FCS contract are worked out and its relationship with the LSI evolves, it will be important to ensure that the basis for making additional funding commitments is transparent.
Accordingly, markers for gauging knowledge must be clear, incentives must be aligned with demonstrating such knowledge, and provisions must be made for the Army to change course if the program progresses differently than planned.
In addition to the 50-50 requirement in 10 U.S.C. § 2466, the following provisions directly affect the reporting of workload funding allocations to the public and private sectors: Section 2460(a) of Title 10 defines "depot-level maintenance and repair" as material maintenance or repair requiring the overhaul, upgrading, or rebuilding of parts, assemblies, or subassemblies and the testing and reclamation of equipment as necessary, regardless of the source of funds for the maintenance or repair, or the location at which the maintenance or repair is performed. This term also includes (1) all aspects of software maintenance classified by DOD as of July 1, 1995, as depot-level maintenance and repair, and (2) interim contractor support or contractor logistics support (or any similar contractor support) to the extent that such support is for the performance of services described in the preceding sentence. Section 2460(b)(1) excludes from the definition of depot maintenance the nuclear refueling of an aircraft carrier and the procurement of major modifications or upgrades of weapon systems that are designed to improve program performance, although a major upgrade program covered by this exception could continue to be performed by private- or public-sector entities. Section 2460(b)(2) also excludes from the definition of depot-level maintenance the procurement of parts for safety modifications, although the term does include the installation of parts for safety modifications. Depot maintenance funding involving certain public-private partnerships is exempt from the 50 percent limitation. Section 2474(f) of Title 10 provides that amounts expended for the performance of depot-level maintenance and repair by nonfederal government personnel at Centers of Industrial and Technical Excellence under any contract entered into during fiscal years 2003 through 2009 shall not be counted when applying the 50 percent limitation in Section 2466(a) if the personnel are provided by entities outside DOD pursuant to a public-private partnership. In its annual 50-50 report to Congress, DOD identifies this funding as a separate category called "exempt." Section 2466(b) allows the Secretary of Defense to waive the 50 percent limitation if he determines the waiver is necessary for national security and submits the notification of the waiver, together with the reasons for it, to Congress. Waivers were previously submitted for the Air Force for fiscal years 2000 and 2001. OSD issues guidance to the military departments for reporting public-private workload funding allocations. The guidance's definition of "depot-level maintenance and repair" is consistent with the definition in 10 U.S.C. § 2460. The military services have also issued internal instructions to manage the data collection and reporting process, tailored to their individual organizations and operating environments. Although DOD reported that the military departments complied with the 50-50 requirement for fiscal year 2005, we could not validate compliance because of systemic weaknesses in DOD's financial management systems and persistent deficiencies in the processes used to collect and report 50-50 data. DOD's report provides an approximation of the depot maintenance funding allocation between the public and private sectors but contains some inaccuracies.
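For reference in the findings that follow, the percentage reported against the 50 percent limitation reduces to a simple ratio, with the partnership amounts described in Section 2474(f) set aside in the separate "exempt" category. Expressed as a formula (our restatement of the statutory computation, not language from the statute itself):

\[ \text{private-sector share} = \frac{\text{private-sector funds}}{\text{private-sector funds} + \text{public-sector funds}} \leq 50\% \]

Exempt funds appear in neither the numerator nor the denominator, so misclassifying even a relatively small amount between the public and private categories can move a military department toward or across the threshold, as the Army adjustments discussed below illustrate.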
Our current review showed that 50-50 funding data were not being consistently reported because some maintenance depots were reporting expenditures rather than obligations as directed by OSD guidance. We also found that amounts associated with interservice depot maintenance work and certain contract agreements between depots and private contractors may not accurately reflect the distribution reported for private- and public-sector funds because visibility over the allocation of these funds is limited. In addition, we found several other errors that resulted in inaccuracies in reported 50-50 data for the Navy and Army. DOD took some actions this year to improve 50-50 reporting. However, our work over the last several years has identified a number of persistent deficiencies, such as inadequate management attention and review, which have affected the quality of reported 50-50 data. While DOD took actions to improve 50-50 reporting this year, it has not implemented recommendations we made last year to address these deficiencies. In DOD's April 2006 report to Congress on funding allocations for depot maintenance, all three military departments reported that their private-sector depot maintenance allocation was below the 50 percent limitation for fiscal year 2005. However, we found that the reported data contained inaccuracies. Table 1 shows the reported allocation between the public and private sectors and the exempted workload funding. On the basis of our evaluation of selected 50-50 data, DOD's April 2006 report provides an approximation of depot maintenance funding allocations between the public and private sectors for fiscal year 2005. However, we identified errors in reported workload funding data. The net effect of correcting the data inaccuracies we identified would be to increase the Army's private-sector funding allocation from 49.4 percent to 50 percent. Identified errors in the Army's data resulted in a total decrease in public-sector funding of $5.9 million and a total increase in private-sector funding of $68.1 million. Appendix II provides additional information on these adjustments. We could not quantify the errors that we identified for the Air Force regarding direct sales agreements. We also identified areas that continue to be excluded from the Navy's 50-50 reporting. While we found an error in the Marine Corps data, correcting this inaccuracy would not result in changes to the Department of the Navy's funding allocation percentages. We did not conduct a review of all reported 50-50 data; therefore, there may be additional errors, omissions, and inconsistencies that were not identified. Depot maintenance funding data for fiscal year 2005 were not being consistently reported because some maintenance depots were reporting expenditures, rather than obligations as directed by OSD guidance. The reporting of expenditures instead of obligations by some depots presents an inaccurate picture of depot maintenance allocations since the amounts may differ. For the most part, the allocation percentages for public funds represent obligation amounts obtained from the military departments' financial accounting systems. However, in reporting the amount of depot maintenance funds allocated to the private sector, some reporting organizations used expenditures rather than obligations as required by OSD guidance. For example, three depots we visited reported their subcontracted depot-level maintenance work as expenditures rather than obligations.
Reasons given by depot officials for reporting expenditures rather than obligations include the following: (1) the workload against obligated funds may not have been fully performed during the fiscal year, and therefore they believed reporting expenditures was a better reflection of the actual workload; (2) they did not know that obligations were to be reported instead of expenditures; and (3) many work orders can be associated with a multiyear contract, so they believed that reporting expenditures would be a better representation of the costs associated with multiyear contracts for the fiscal year in question. Accurately reporting carryover work is a problem when the services' data contain both expenditures and obligations. Carryover is work that a depot may "carry over" from one fiscal year to another to ensure a smooth flow of work during the transition between fiscal years. This means that while the funds are obligated in one fiscal year, a certain portion may not be expended until the next fiscal year. When expenditures rather than obligations are reported, we found that the carryover work that is performed in the following year may not be included in either year's 50-50 report. For example, an Army depot official provided us with an estimate of almost $1.5 million that was expended in fiscal year 2006 on a fiscal year 2005 contract obligation. The official stated that this portion of the obligation was not reported in fiscal year 2005 because it was not yet expended, and it would not be reported in fiscal year 2006 because it was expended on a fiscal year 2005 obligation. As a result, the private portion of the service's depot maintenance funds was underreported in the year of the obligation, while the public portion was overreported. Until depot maintenance funding obligations are consistently reported, rather than a combination of expenditures and obligations, inaccurate reporting of the allocation of depot maintenance funding between the public and private sectors will continue. Because DOD has limited visibility over the allocation of private- and public-sector funds in some interservice agreements and direct sales agreements, inaccurate reporting of the depot maintenance workload allocation may result. Interservice workload agreements refer to work that is performed by one component for another. OSD guidance requires that the military departments establish measures to ensure correct accounting of interservice workloads; however, the allocation of these funds may not always be accurately reported. We found instances where a military service awarded public depot maintenance work to another military service, which then contracted out a portion of that workload to the private sector. The military service awarding the work, as principal owner of the funds, inaccurately reported this as public workload because it had not inquired whether all the awarded work was performed at the public depot. For example, we identified approximately $172,000 of private-sector work that may have been inaccurately reported as public-sector work because the principal owner of the funds did not follow up to determine whether all of the work was performed by the public depot. While we were unable to fully evaluate the extent of inaccurate reporting associated with interservice agreements, until the military departments establish sufficient measures to accurately account for and report their distribution of depot maintenance workload, the 50-50 data reported by DOD may continue to be inaccurate.
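Returning to the carryover example above, the mechanics are simple enough to sketch in a few lines of code. The figures below are patterned on the $1.5 million example, and the reporting rules the sketch encodes are simplifications of the behavior depot officials described, not the services' actual accounting systems:

```python
# Illustrative sketch: how mixing obligations and expenditures drops
# carryover work from 50-50 reporting in both fiscal years.
# All figures are hypothetical.

# A private-sector contract obligated in FY2005; $1.5M of the work
# is not expended (performed and paid) until FY2006.
contract = {"obligated_fy": 2005, "obligation": 10.0}   # $ millions
expenditures = {2005: 8.5, 2006: 1.5}                   # $ millions

def reported_private(fy, basis):
    """Amount a depot would report as private-sector work in a fiscal year."""
    if basis == "obligations":        # what OSD guidance directs
        return contract["obligation"] if fy == contract["obligated_fy"] else 0.0
    if basis == "expenditures":       # what some depots actually reported
        # Depots reporting expenditures counted only dollars spent that year
        # against that same year's obligations, so FY2006 spending on a
        # FY2005 obligation was reported in neither year.
        return expenditures.get(fy, 0.0) if fy == contract["obligated_fy"] else 0.0
    raise ValueError(basis)

for basis in ("obligations", "expenditures"):
    total = sum(reported_private(fy, basis) for fy in (2005, 2006))
    print(f"{basis:12s}: FY2005={reported_private(2005, basis):4.1f} "
          f"FY2006={reported_private(2006, basis):4.1f} total={total:4.1f}")

# obligations : FY2005=10.0 FY2006= 0.0 total=10.0
# expenditures: FY2005= 8.5 FY2006= 0.0 total= 8.5   <- $1.5M never reported
```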
The limited visibility over direct sales agreements is another reason why the depot maintenance workload allocation may be inaccurately reported to Congress. A direct sales agreement involves private vendors contracting back to a DOD maintenance facility for labor to be performed by DOD employees. OSD guidance requires that sales of articles and services by DOD maintenance depots to entities outside of DOD, when the work is accomplished by DOD employees, be reported as public-sector work. However, we found that the reporting of the distribution of private- and public-sector workload for direct sales agreements may not be accurate. With a direct sales agreement, there is no requirement for the private vendor to identify and break out the contract costs, such as materials and other factors of production, and allocate them to expenses performed by the private vendor or the public depot. We found that the use of direct sales agreements by the Air Force may have resulted in an overstatement of private-sector funds, with a corresponding understatement of public-sector funds. In addition, we found similar instances in the Army where work performed by the public sector under a direct sales agreement with a private vendor may have been misreported as being performed by the private sector. Although we were unable to fully evaluate the extent to which costs associated with these types of contract agreements were misreported, until private vendors break out direct sales agreement costs by the private and public sectors, DOD's reporting of the 50-50 funding allocation may remain inaccurate. We identified several other errors that resulted in inaccuracies in the reported 50-50 data for the Navy and Army. As we reported in previous years, we identified two areas that continue to be excluded from the Navy's 50-50 reporting. First, the Navy did not report any depot maintenance work on aircraft carriers performed during nuclear refueling. Navy officials cited the exclusion of nuclear refueling in 10 U.S.C. § 2460(b)(1) and guidance from the General Counsel's office in the Department of the Navy as reasons for not including $115 million in depot maintenance work performed on aircraft carriers during nuclear refueling. However, we continue to believe that depot repairs not directly associated with the task of nuclear refueling should be reported. Second, the Navy, as in prior years, continues to inconsistently report ship-inactivation activities related to the servicing and preservation of systems and equipment before ships are placed in storage or in an inactive status. The Navy did not report $14.4 million of private-sector allocations for inactivation work on nonnuclear ships, even though it reported inactivation activities on nuclear ships. The Navy contends that the work for nuclear ship inactivation is complex while the work for nonnuclear ships is not. We continue to maintain that all such depot-level work should be reported, since the statute and implementing guidance do not make a distinction based on complexity. In addition, our review of the Marine Corps data found that it underreported the private-sector total and overreported the public-sector total by about $1.5 million. This amount was for depot-level maintenance that was performed in a public depot by contractor personnel, which was misreported as public sector rather than private sector. We also identified several data inaccuracies in the Army's 50-50 data.
For example, one Army depot failed to include approximately $31 million of private contract work it had outsourced for depot maintenance in its 50-50 report. An Army official said the depot had not known that this type of contract work should be included in 50-50 reporting but now plans to include it in future submissions. Our review also determined that several Army omissions, totaling approximately $53 million, were due to misinterpretation of the guidance regarding modifications and remanufacturing. The OSD guidance describes what to include and what not to include in reporting depot maintenance with regard to upgrades, modifications, and remanufacturing. An Army official acknowledged that there has been confusion over what to report for 50-50 depot maintenance and stated that the Army is drafting an update to the Army's Depot Maintenance Workload Distribution Reporting Procedures. In addition, the Army's 50-50 data contained errors totaling approximately $4 million due to changes in program costs. Finally, our review of the Army's data found miscellaneous errors, including one instance of double counting and the transposition of numbers in some entries.

During our review we noted actions taken by OSD and the military services that, while not fully implemented, provided some improvement in the 50-50 reporting process. For example, OSD, in its 50-50 guidance, added a new requirement that the military services include variance analyses in their submissions of 50-50 data. The services performed variance analyses; however, these were at a very high level and provided little detail on how the fiscal year 2005 allocations differed from the prior year's data. OSD guidance also included a new requirement that the services maintain records and reports for 50-50 data for at least 2 years, although we did find two instances where reporting locations could not provide backup documentation for their 50-50 data. In addition, as in previous years, OSD instructed the services to use a third-party reviewer, such as a service audit agency, to validate their data prior to submission. However, due to time constraints, each service audit agency performed only a limited review of its service's data. For example, the Air Force directed its audit service to perform a limited review that focused on two issues. Additionally, each service headquarters continued to provide some form of training for its 50-50 reporting activities, although no service required attendance by all individuals involved in 50-50 data gathering and reporting. Guidance issued by OSD emphasized, but did not require, training for individuals involved in the 50-50 process. In one instance, an official who was responsible for querying the 50-50 information from the service's data systems was unaware that any training was ever offered for 50-50 reporting.

Our work over the last several years has identified a number of persistent deficiencies, such as inadequate management attention and review, that have affected the quality of reported 50-50 data. DOD has not implemented recommendations we made last year to address these deficiencies. In prior years' reports, we have identified problems in 50-50 data accuracy attributable to deficiencies in management attention, controls, and oversight; documentation of procedures and retention of records; independent validation of data; training for staff involved in the 50-50 process; and guidance.
DOD has taken steps over the years to improve 50-50 reporting in response to our recommendations, but we have found that some deficiencies have persisted, including inadequate management attention and review, limited review and validation of data by independent third parties, and inadequate staff training. In our November 2005 report, we concluded that the recurring nature of deficiencies in 50-50 reporting indicates a management control weakness that DOD should disclose in its annual performance and accountability report to Congress. By doing so, DOD would increase the level of management attention and help focus improvement efforts so that the data provided to Congress are accurate and complete. DOD partially concurred with this recommendation, stating that systemic changes to the 50-50 reporting process had already been made in response to previous recommendations. DOD did not disclose 50-50 reporting as a management control weakness in its most recent performance and accountability report. An OSD official responsible for developing the annual 50-50 report to Congress noted that completion of the department's Enterprise Transition Plan would result in more accurate 50-50 reporting.

As we have previously reported, DOD's April 2006 report satisfies the annual mandate as required by 10 U.S.C. § 2466(d). In our November 2005 report, we stated that DOD could enhance the usefulness of its report for congressional oversight by providing additional information. For example, we recommended that DOD add information such as variance analyses that identify significant changes from the prior year's report and the reasons for these variances, longer term trend analyses, an explanation of methodologies used to estimate workload allocation projections for the current and ensuing fiscal years, and plans to ensure continued compliance with the 50-50 requirement, including decisions on new weapon systems maintenance workload sourcing that could be made to support remaining within the 50 percent threshold. DOD partially concurred with this recommendation and stated that producing the types of information we suggested would require a massive undertaking and may be of limited value. We disagreed and, on the basis of DOD's response, added a matter for congressional consideration suggesting that Congress require the Secretary of Defense to enhance the department's annual 50-50 report as stated in our recommendations. In the April 2006 report, DOD did not make changes consistent with our recommendations, nor has Congress acted on our suggestion.

DOD's reported projections for fiscal years 2006 through 2007 do not represent reasonable estimates of public- and private-sector depot maintenance funding allocations, in part because some errors in DOD's fiscal year 2005 data are carried into the projected years. As shown in table 2, the Army and the Navy projected that their private-sector depot maintenance allocations will remain below the 50 percent limitation for fiscal years 2006 and 2007. The Air Force projected that it will remain below the limitation for fiscal year 2006, but will exceed the limitation for fiscal year 2007. Errors similar to those we identified in fiscal year 2005 reported data could affect these projections, as the Air Force is moving closer to the threshold for private-sector funding in fiscal year 2006 (48.4 percent) and beyond the threshold in fiscal year 2007 (50.2 percent).
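As discussed below, OSD guidance requires a component to submit a compliance plan whenever its projections come within 2 percentage points of the 50 percent limitation. The following is a minimal sketch of that check, using percentages cited in this report; the function and variable names are ours, not OSD's.

```python
# Sketch of the compliance check described in this report: a plan is required
# when a projected private-sector share comes within 2 points of the 50
# percent cap. Percentages in the print statements are those cited here.

LIMIT = 50.0    # statutory ceiling on the private-sector share (10 U.S.C. 2466)
TRIGGER = 2.0   # OSD guidance: a compliance plan is required within 2 points

def status(private_pct: float) -> str:
    if private_pct > LIMIT:
        return "exceeds the limitation"
    if private_pct > LIMIT - TRIGGER:
        return "within 2 points of the limitation: compliance plan required"
    return "below the trigger"

print(status(48.4))        # Air Force fiscal year 2006 projection
print(status(50.2))        # Air Force fiscal year 2007 projection
print(status(49.4 + 0.6))  # Army fiscal year 2005 share after GAO's ~0.6-point adjustment
```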
If the adjustments we made to the Army's fiscal year 2005 data—increasing the private-sector percentage by about 0.6 percentage points—are carried forward into fiscal year 2007 projections, they could cause the Army to come within 2 percent of the 50 percent limitation on contracting for depot-level maintenance and repair. When spending projections reflect data within 2 percent of the 50 percent limitation in a fiscal year, OSD guidance directs the components to submit a plan that identifies actions to be taken to ensure continued compliance. This plan must include decisions on candidate maintenance workload sourcing that could be made to support remaining within the 50 percent limitation. In addition, we found an error of approximately $1.6 million in the Army's fiscal year 2006 projections, which further limits the accuracy of reported projections. Furthermore, DOD's projected fiscal year 2006 and fiscal year 2007 allocations are based on the President's budget numbers and often do not include supplemental funds, which can change the percentage allocations. However, some Air Force depot projections include supplemental funds when the amounts are already known. These limitations affect the reasonableness of the data reported as projections of future funding allocations.

While the Army and Navy project compliance with the 50-50 requirement through fiscal year 2007, the Air Force's fiscal year 2006 projections are within 2 percent of the 50 percent limitation and its fiscal year 2007 projections exceed the 50 percent limitation by 0.2 percent. To avoid breaching the 50 percent threshold, the Air Force is implementing a plan to ensure compliance in fiscal years 2007 through 2010. Under this plan, the Air Force is identifying and evaluating candidate weapon system programs for shifting maintenance workload from the private sector to the public sector. The Air Force has committed resources and approved shifting some maintenance associated with the F-100 engine beginning in fiscal year 2006. The Air Force plan shows that a total workload of $68 million associated with the F-100 engine could be shifted to the public sector, enabling the Air Force to achieve compliance with the 50-50 requirement in fiscal year 2007. The Air Force is also evaluating workload associated with the KC-135 aircraft, the C-17 aircraft, the B-2 aircraft, the F-119 engine, and the F-117 engine that may be shifted to the public sector.

The errors we identified in DOD's April 2006 50-50 report—while not extensive—are indicative of the long-standing problems DOD has encountered in providing accurate depot maintenance funding allocation data to Congress. We have previously observed that the usefulness of the annual 50-50 report to Congress is limited because of data reliability concerns. Our prior reports identified data inaccuracies and recommended corrective actions aimed at addressing deficiencies that limited the accuracy of 50-50 reporting. In addition, we have recommended actions that Congress could take to improve the reliability and usefulness of DOD's annual report. Our current review shows that while DOD has taken some additional actions to improve the quality of reported data for fiscal year 2005, it has not fully addressed the persistent deficiencies that have limited 50-50 data accuracy in the past.
DOD's report presented an inaccurate measure of the balance of funding between the public and private sectors because some reporting locations reported expenditures rather than obligations and because allocations under interservice and direct sales agreements were inaccurately reported. Without consistent reporting of depot maintenance funding obligations, rather than a combination of expenditures and obligations, inaccurate reporting of the funding allocation between the public and private sectors will continue. Moreover, without accurate reporting of the allocation of depot maintenance workload performed by the private and public sectors under interservice and direct sales agreements, the 50-50 data reported by DOD will continue to be inaccurate.

To improve the consistency and accuracy of depot maintenance funding allocation data in DOD's annual 50-50 report to Congress, we recommend that the Secretary of Defense take the following two actions:

Direct the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps to follow OSD guidance and report funding obligations rather than expenditures.

Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in conjunction with the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps, to establish measures to ensure proper accounting of the allocation of interservice workloads between the public and private sectors.

In commenting on a draft of this report, DOD concurred with our recommendations. Regarding our recommendation that the military services follow guidance and report funding obligations rather than expenditures, DOD stated that it will be specific in its guidance on 50-50 reporting and require organizations to report obligations rather than expenditures. Also, DOD stated that Army guidance and training will address our findings. Consistent with our recommendation, we believe that the Air Force, Navy, and Marine Corps also should take appropriate steps to ensure that obligations are reported. Regarding our recommendation that measures be established to ensure proper accounting of the allocation of interservice workloads, DOD said that its guidance will require component audit agencies to specifically validate interservice data prior to submitting the 50-50 report to the department. Validation of interservice data would meet the intent of our recommendation.

DOD also stated that it did not agree with our adjustments for work accomplished during the nuclear refueling of aircraft carriers and for inactivation work on nonnuclear ships. DOD stated that all costs during nuclear aircraft carrier refueling are properly excluded and that conventional ship inactivation workload is not considered depot-level maintenance. We have had a long-standing disagreement with DOD on including funding for these two areas in its 50-50 report. For the past several years we have maintained that DOD should include these funds, while DOD has disagreed. Our reasons for including these adjustments are discussed in this report. DOD's written comments are reprinted in appendix III. DOD also provided technical comments, which we have incorporated as appropriate.

We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

To determine whether the military departments provided accurate data in reporting depot maintenance funding allocations and whether they met the 50-50 requirement for fiscal year 2005, we reviewed the military services' procedures and internal management controls for collecting and reporting their depot maintenance allocations. We discussed with key officials the process used to identify and report depot maintenance workload allocation between the public and private sectors. We selected a nonprobability sample of 50-50 obligations totaling $2.7 billion of the $26.4 billion reported in the Department of Defense's (DOD) report to Congress on depot maintenance funding allocation. We based our sample on previously identified areas of concern, varying program amounts, and selected locations for our site visits. We also contacted service audit agencies and third-party officials at service headquarters to discuss their verification review of the fiscal year 2005 50-50 obligation data. We did not conduct a review of all reported 50-50 data; therefore, there may be additional errors, omissions, and inconsistencies that were not identified. Because we used a nonprobability sample, our results cannot be projected. We visited departmental headquarters, major commands, and selected maintenance activities. We interviewed service officials responsible for data collection, and we reviewed the reported data for accuracy and completeness. We compared reported amounts to funding documents, contracts, and accounting reports for selected programs for all the military services, but we placed greater emphasis on the Army data because the Army was close to the 50 percent threshold for fiscal year 2005.

To determine the actions taken by the Office of the Secretary of Defense (OSD) and the military departments to improve the quality of the reported 50-50 data and to implement GAO's prior years' recommendations, we reviewed the results of studies conducted by the service audit agencies and reconciled areas of concern identified during prior years' audits. We also reviewed prior years' recommendations to determine whether known problem areas were being addressed and resolved. We discussed with officials actions they took to improve 50-50 data gathering and reporting processes.

To determine the reasonableness of fiscal year 2006 and 2007 projections, we discussed with service officials how they developed their projections and whether historical funding information and known increases in funding were included in their projections. Our analysis of the data for fiscal years 2006 and 2007 was limited because our current and past work on this issue has shown that DOD's 50-50 data cannot be relied upon as a precise measure of the allocation of depot maintenance funds between the public and private sectors. We discussed with Air Force officials reasons for the increase in their fiscal year 2007 projection and their plans to avoid breaching the 50 percent limitation.
In accomplishing our objectives, we interviewed officials, examined documents, and obtained data at the Office of the Secretary of Defense; Army, Navy, Marine Corps, and Air Force headquarters in the Washington, D.C., area; Anniston Army Depot in Anniston, Ala.; Red River Army Depot in Texarkana, Tex.; Army Materiel Command in Alexandria, Va.; Tank-automotive and Armaments Command (TACOM) Life Cycle Management Command in Warren, Mich.; Naval Air Systems Command in Patuxent River, Md.; U.S. Fleet Forces Command in Norfolk, Va.; Air Force Materiel Command at Wright-Patterson Air Force Base, Ohio; Marine Corps Logistics Command in Albany, Ga.; and the Army, Navy, and Air Force audit services. We conducted our work from March 2006 to September 2006 in accordance with generally accepted government auditing standards.

Our review of the Army's data supporting the Department of Defense's (DOD) fiscal year 2005 50-50 report identified the following adjustments.

Key contributors to this report include Thomas Gosling, Assistant Director; Connie W. Sawyer, Jr.; Janine Cantin; Clara Mejstrik; Stephanie Moriarty; and Renee Brown.

Depot Maintenance: Persistent Deficiencies Limit Accuracy and Usefulness of DOD's Funding Allocation Data Reported to Congress. GAO-06-88. Washington, D.C.: November 18, 2005.
Depot Maintenance: DOD Needs Plan to Ensure Compliance with Public- and Private-Sector Funding Allocation. GAO-04-871. Washington, D.C.: September 29, 2004.
Depot Maintenance: Army Needs Plan to Implement Depot Maintenance Report's Recommendations. GAO-04-220. Washington, D.C.: January 8, 2004.
Depot Maintenance: DOD's 50-50 Reporting Should Be Streamlined. GAO-03-1023. Washington, D.C.: September 15, 2003.
Department of Defense: Status of Financial Management Weaknesses and Progress Toward Reform. GAO-03-931T. Washington, D.C.: June 25, 2003.
Depot Maintenance: Change in Reporting Practices and Requirements Could Enhance Congressional Oversight. GAO-03-16. Washington, D.C.: October 18, 2002.
Depot Maintenance: Management Attention Needed to Further Improve Workload Allocation Data. GAO-02-95. Washington, D.C.: November 9, 2001.
Depot Maintenance: Action Needed to Avoid Exceeding Threshold on Contract Workloads. GAO/NSIAD-00-193. Washington, D.C.: August 24, 2000.
Depot Maintenance: Air Force Faces Challenges in Managing to 50-50 Threshold. GAO/T-NSIAD-00-112. Washington, D.C.: March 3, 2000.
Depot Maintenance: Future Year Estimates of Public and Private Workloads Are Likely to Change. GAO/NSIAD-00-69. Washington, D.C.: March 1, 2000.
Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain. GAO/NSIAD-99-154. Washington, D.C.: July 13, 1999.
Defense Depot Maintenance: Public and Private Sector Workload Distribution Reporting Can Be Further Improved. GAO/NSIAD-98-175. Washington, D.C.: July 23, 1998.
Defense Depot Maintenance: Information on Public and Private Sector Workload Allocations. GAO/NSIAD-98-41. Washington, D.C.: January 20, 1998.
Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers. GAO/NSIAD-96-166. Washington, D.C.: May 21, 1996.
Under 10 U.S.C. § 2466, the military departments and defense agencies may use no more than 50 percent of annual depot maintenance funding for work performed by private-sector contractors. The Department of Defense (DOD) must submit a report to Congress annually on the allocation of depot maintenance funding between the public and private sectors for the preceding fiscal year and the projected distribution for the current and ensuing fiscal years for each of the armed forces and defense agencies. As required by section 2466, GAO reviewed the report submitted in April 2006 and is, with this report, submitting its view to Congress on whether (1) the military departments and defense agencies complied with the 50-50 requirement for fiscal year 2005 and (2) the projections for fiscal years 2006 and 2007 represent reasonable estimates. GAO obtained data used to develop the April 2006 report, conducted site visits, and reviewed supporting documentation.

Although DOD reported to Congress that it complied with the 50-50 requirement for fiscal year 2005, GAO could not validate compliance due to weaknesses in DOD's financial management systems and in the processes used to collect and report 50-50 data. DOD's April 2006 report provides an approximation of the depot maintenance funding allocation between the public and private sectors for fiscal year 2005. GAO identified errors in the reported data that, if corrected, would increase the Army's private-sector funding allocation percentage from 49.4 percent to 50 percent. GAO found that 50-50 funding allocation data were not being consistently reported because some maintenance depots were reporting expenditures rather than following Office of the Secretary of Defense (OSD) guidance and reporting obligations. Combining obligations and expenditures produces an inaccurate accounting of 50-50 funding allocations. GAO also found that amounts associated with interservice depot maintenance work may not accurately reflect the actual allocation of private- and public-sector funds because visibility over the allocation of these funds is limited. OSD guidance requires that the military departments establish measures to ensure correct accounting of interservice workloads. In prior years' reports on DOD's compliance with the 50-50 requirement, GAO discussed deficiencies limiting data accuracy and recommended specific corrective actions. While DOD has taken some additional actions to improve the quality of reported data for fiscal year 2005, it has not fully addressed the persistent deficiencies that have limited 50-50 data accuracy.

Reported projections do not represent reasonable estimates of public- and private-sector depot maintenance funding allocations for fiscal years 2006 and 2007 due to data inaccuracies. Errors GAO identified for fiscal year 2005 could affect these projections. If the adjustments GAO made to the Army's fiscal year 2005 data—increasing the private-sector percentage by about 0.6 percentage points—are carried forward, they could move the Army's projection to within 2 percent of the 50 percent limitation for fiscal year 2007. GAO also found that the projected numbers often did not include supplemental funds, which could change the allocation percentages. These errors and omissions affect the reasonableness and accuracy of the reported projections. To avoid breaching the 50 percent threshold in future years, the Air Force is implementing a plan to ensure compliance with the 50-50 requirement through fiscal year 2010.
The plan involves moving some maintenance workload, including work associated with the F-100 engine, from the private sector to the public sector.
According to information contained in the mandate for this report, the number of WOSBs in the United States increased by 78 percent between 1987 and 1996, almost twice the rate of growth of all U.S. businesses. Also, approximately 8 million WOSBs in the United States provide jobs for over 15 million individuals and generate almost $1.4 trillion in sales each year. The administration and the Congress have long been concerned about the disparity between WOSBs' prevalence in the economy and the level of government procurements of their products and services. In 1979, when an executive order first made SBA responsible for negotiating WOSB contracting goals with federal agencies, WOSBs received only 0.2 percent of all federal procurements. By 1988, this percentage had grown to only 1 percent, and although legislation was enacted to provide a program of assistance and support to WOSBs, no statutory goal for their participation in federal procurements was established until 1994.

Section 7106 of the Federal Acquisition Streamlining Act of 1994 (FASA) amended the Small Business Act to require establishment of a governmentwide goal for participation by WOSBs in procurement contracts of not less than 5 percent of the total value of all prime contract and subcontract awards for each fiscal year. The goal was implemented by procurement regulations effective in fiscal year 1996. The FASA conference report indicated that the 5-percent goal was not intended to create a new set-aside or program of restricted competition for WOSBs, but rather to establish a target that would result in greater opportunity for WOSBs to compete for federal contracts. The report recognized that, given the slow progress toward increasing contracting with WOSBs, it could take some time before the goal would be reached.

Through FASA, the governmentwide goal of 5 percent for WOSBs joined the existing governmentwide contracting goals in the Small Business Act for small business concerns (then no less than 20 percent of the total value of all prime contract awards for each fiscal year) and small disadvantaged businesses (SDB) (no less than 5 percent of the total value of all prime contract and subcontract awards each year). Under the Small Business Act, all small business goals are to represent, for each procuring agency, the "maximum practicable opportunity" for small businesses' participation in that agency's contracts. In addition, an agency's goals are to "realistically reflect the potential of small business concerns" to perform such contracts and subcontracts. At the same time, the cumulative annual goals for all agencies are to meet or exceed the annual governmentwide goal. Also, agencies are to make a consistent effort to annually expand participation by small business concerns in their contracts. Agencies are to report each year to SBA on the extent of participation by small businesses, including WOSBs, as well as any justifications for failure to meet the goals. SBA, in turn, is required to report this information to the President. FASA added WOSBs to the existing policy that small businesses and small disadvantaged businesses have the maximum practicable opportunity to become subcontractors for federal contracts exceeding $100,000 and to receive timely payment from prime contractors.
FASA also included WOSBs in the requirement that for contracts exceeding $500,000 (or $1 million for construction contracts) prime contractors prepare subcontracting plans that provide the maximum practicable opportunity for small businesses to participate in the performance of the prime contract. FASA also required agencies to report contracts over a certain dollar threshold with WOSBs to the Federal Procurement Data Center (FPDC). Quarterly, approximately 70 executive branch agencies report contracting data to FPDC, either by individual contract for acquisitions above $25,000 or as summary data for acquisitions at or below $25,000. Twenty of these agencies account for over 99 percent of federal contract expenditures, 4 account for over 85 percent, and 1—the Department of Defense—accounts for over 64 percent.

If a business submitting an offer for a federal procurement represents that it is a small business concern and meets the definition of a WOSB, it can "self-certify" as a WOSB when it completes the small business program representations required in solicitations for procurements above $2,500. In doing so, the business represents (by checking the appropriate box) that it is, or is not, a WOSB. Generally, a contracting agency will accept the self-representation as accurate. The federal government does not currently require that WOSBs submitting offers as prime contractors on federal procurements receive certification (from SBA or an outside entity) of their status as WOSBs. However, if a WOSB submits an offer for a federal subcontract, the prime contractor may, according to SBA, require certification that the business is in fact woman-owned. Furthermore, a WOSB submitting an offer as an SDB, 8(a) firm, or HUBZone small business will be required to meet the certification requirements for those programs.

Both the Congress and the administration have recently reiterated concern about the continued disparity between the number of WOSBs in the economy and the extent of the government's contracting with them. As of fiscal year 1999, when women-owned businesses made up 38 percent of all businesses in the United States, WOSBs received 2.5 percent of the approximately $189 billion in federal prime contracts awarded that year. This discrepancy led the Senate to adopt Resolution 311 on May 23, 2000, which urged the President to adopt a policy supporting the 5-percent WOSB contracting goal, encouraged agencies to make concerted efforts to meet the goal before the end of fiscal year 2000, and held agencies accountable for achieving the goal. Also on May 23, President Clinton, culminating work by the Interagency Committee on Women's Business Enterprise, issued Executive Order 13157 to reaffirm the government's commitment to increasing opportunities for WOSBs in the federal procurement market. The order reiterated executive branch policy to take the steps necessary to meet or exceed the 5-percent governmentwide WOSB contracting goal and to implement this policy by establishing separate 5-percent governmentwide goals for both prime contract awards and subcontract awards each fiscal year. The order requires each agency with procurement authority to develop a long-term comprehensive strategy to expand opportunities for WOSBs.
The order lists methods and programs these agency strategies should include, such as designating a senior acquisition official who will work with SBA to identify and promote contracting with WOSBs; requiring contracting officers, to the maximum extent practicable, to include WOSBs in competitive acquisitions; implementing procedures for acquisition planners to structure acquisitions (including multiple award contracts) and provide guidance to facilitate competition among small businesses, HUBZone small businesses, SDBs, and WOSBs; implementing mentor-protégé programs that include WOSBs; and offering outreach, training, and technical assistance programs to assist WOSBs in developing their products, skills, business planning practices, and marketing techniques. The order further directs agencies, when feasible and consistent with the effective and efficient performance of their missions, to establish 5-percent WOSB goals for both prime and subcontract awards each fiscal year.

SBA informed us that as a result of this executive order, it is moving from a negotiated agency goal process to an assigned process whereby SBA will set goals with possible input from the agencies. SBA may start by setting all agencies' goals at 5 percent, although officials believe the 5-percent goal is unattainable for some agencies, which means that other agencies will have to be assigned higher goals if the federal government is to meet the 5-percent governmentwide goal.

The executive order further establishes certain responsibilities for SBA and instructs SBA to establish an Assistant Administrator for Women's Procurement (within SBA's Office of Government Contracting). This official will head the Office of Federal Contract Assistance for Women Business Owners and coordinate agencies' efforts to achieve the WOSB goals. Specifically, this office will be responsible for working with each agency to develop and implement policies to achieve WOSB contracting goals and advising agencies on how to increase WOSB contracting; evaluating whether agencies are meeting their WOSB contracting goals on a semiannual basis and preparing a report to the President (through the SBA Administrator, the Interagency Committee on Women's Business Enterprise, and OFPP) on findings regarding contract awards to WOSBs; making recommendations and working with agencies to increase WOSB contracting and taking corrective actions with those agencies not meeting the 5-percent goal; providing a program of training and development seminars and conferences to instruct WOSBs on participation in SBA's 8(a), SDB, and HUBZone programs, and other small business contracting programs for which WOSBs might be eligible; and developing and implementing a single uniform federal governmentwide Web site that is linked to other acquisition, small business, and women-owned business sites and provides current procurement information to WOSBs and other small businesses.

A recent change to the government's procurement program for WOSBs emerged from the Small Business Reauthorization Act of 2000, which amended the Small Business Act to give federal agencies authority to restrict competition for certain contracts to certified WOSBs that are economically disadvantaged. The authority, which is permissive, not mandatory, is limited to contracts not exceeding $3 million ($5 million for manufacturing) in those industries in which SBA identifies WOSBs as underrepresented in federal procurement. The act requires SBA to conduct a study to pinpoint such industries.
Federal contracting officers must also have a reasonable expectation that two or more WOSBs will submit offers and that the contract can be awarded at a fair and reasonable price before the officers may exercise their new authority. SBA may waive the economic disadvantage requirement if the business is in an industry in which small businesses owned and controlled by women are "substantially" underrepresented. The new legislation will require WOSBs participating in the program to be certified as such by a government or outside entity and requires SBA to establish procedures to verify the WOSBs' eligibility and the accuracy of any certifications.

SBA currently has several initiatives under way in coordination with federal agencies and departments to work toward meeting the WOSB goals. For example, SBA offers training workshops for women business owners and designates agency liaisons (Federal Agency Advocates for Women) who are to strive to expand the pool of WOSBs receiving federal contracts. In addition, according to SBA, its Office of Federal Contract Assistance for Women Business Owners has been working with the newly appointed senior acquisition officials in each agency who have been selected to help their respective agencies increase federal procurement opportunities for WOSBs.

We examined a number of trends in government contracting with WOSBs since fiscal year 1996 to determine changes, patterns, and progress toward increasing federal contracting with WOSBs and meeting both the governmentwide and agency-specific goals for prime and subcontracting. We compared the government's overall expenditures for prime and subcontracts with its expenditures for prime and subcontracts with WOSBs. From SBA data, we examined governmentwide and agency trends toward meeting the goals for prime and subcontracts with WOSBs. We also reviewed the performance of the 20 largest government procurement agencies in meeting their annual negotiated goals and analyzed how this effort relates to their share of prime and subcontracts awarded to WOSBs. Finally, we examined the variation in the dollar amounts represented by agency-specific WOSB contracting goals and WOSBs' access to government contracts through other small business programs.

Over the last several years, government contracting with WOSBs grew more rapidly than government contracting with all businesses. From fiscal year 1996 through fiscal year 1999, federal expenditures for prime contracts awarded by the approximately 70 executive branch agencies that report procurement contract obligations to FPDC increased by 7.5 percent in real terms, from $184.9 billion to $198.8 billion, while these agencies' expenditures for prime contracts with WOSBs increased by over 31 percent, from $3.2 billion to $4.2 billion. In addition, these agencies' expenditures on all subcontracts increased by more than 15 percent, from $63.9 billion in 1996 to $73.8 billion in 1999, while their expenditures for subcontracts awarded to WOSBs increased by nearly 48 percent, from $2.3 billion to $3.4 billion. Thus, these agencies' expenditures for WOSB prime contracts grew over 4 times more rapidly than their expenditures for all prime contracts, and their expenditures for WOSB subcontracts grew over 3 times more rapidly than their expenditures for all subcontracts. Table 1 shows trends in both total contract expenditures and expenditures for prime and subcontracts awarded to WOSBs since fiscal year 1996.
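These growth comparisons follow directly from the reported totals. The following is a brief sketch (dollar figures in billions, as cited above; the function and variable names are ours):

```python
# Reproducing the growth-rate comparisons from the totals cited above
# (dollars in billions; prime contract figures are in real terms).

def pct_growth(fy1996: float, fy1999: float) -> float:
    """Percentage growth from fiscal year 1996 to fiscal year 1999."""
    return 100.0 * (fy1999 - fy1996) / fy1996

all_prime  = pct_growth(184.9, 198.8)  # about 7.5 percent
wosb_prime = pct_growth(3.2, 4.2)      # over 31 percent
all_sub    = pct_growth(63.9, 73.8)    # more than 15 percent
wosb_sub   = pct_growth(2.3, 3.4)      # nearly 48 percent

# WOSB prime contract spending grew over 4 times as fast as all prime
# contract spending; WOSB subcontract spending grew over 3 times as fast.
print(f"prime: {wosb_prime / all_prime:.1f}x, sub: {wosb_sub / all_sub:.1f}x")
```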
Since fiscal year 1996, when the 5-percent governmentwide WOSB contracting goal was implemented, the share of the government's expenditures for prime contracts awarded to WOSBs has changed very little, and the share of expenditures for subcontracts awarded to WOSBs has increased only modestly. Moreover, the separate 5-percent governmentwide WOSB goals for prime contracts and subcontracts were not met in any of the 4 years. WOSBs' share of prime contract expenditures in 1999 was 2.5 percent, the highest for any of the 4 years, but only one-half of the 5-percent goal. Although greater progress was made over this period in meeting the governmentwide WOSB goal for subcontracts, that goal was not met during the 4 years. The share of subcontract expenditures for WOSBs increased from fiscal year 1996 to its highest point in fiscal year 1998, but decreased slightly the next year. Figure 1 shows the trends in the government's share of prime contract and subcontract expenditures for WOSBs since fiscal year 1996. From a historical perspective, WOSBs' share of total federal procurement grew from 0.2 percent in 1979 to 2.5 percent in 1999.

Of the 20 federal agencies that account for about 99 percent of annual federal contract expenditures, none has met its annual WOSB goals for both prime and subcontracting each year since fiscal year 1996. More of these agencies had success meeting their WOSB subcontracting goal than their prime contracting goal over this period—each year at least one-half of the agencies met their subcontracting goal, while only about one-third met their prime contracting goal. VA, State, and NASA were most successful, meeting or exceeding both goals in 3 of the 4 years.

For prime contracts, 7 or fewer of the 20 largest agencies met their WOSB goal in each of the 4 years. As illustrated in figure 2, 7 of the 20 agencies met their prime goal in fiscal years 1996 and 1997; the following year, only 3 agencies met their prime goal. In fiscal year 1999, 6 agencies reached their prime goal. Only VA met or exceeded its WOSB prime goal each year; State and NASA met or exceeded their prime goal in 3 of the 4 years. For subcontracts, more of these agencies met their WOSB goals. As shown in figure 2, at least 10 of the 20 largest agencies met their subcontracting goal each year from fiscal year 1996 through fiscal year 1999. The greatest success came in fiscal year 1998, when 13 agencies met their subcontracting goal. Five agencies—the U.S. Agency for International Development (AID), NASA, Interior, State, and Treasury—met their subcontracting goal each year. Further detailed information on each of these 20 agencies' share of prime and subcontracts with WOSBs and their WOSB goals is shown in appendix II.

In terms of meeting their individual WOSB contracting goals, it did not always matter whether these 20 agencies increased or decreased their share of expenditures for contracts with WOSBs. Some increased their share of prime or subcontract expenditures for WOSBs from fiscal year 1996 through fiscal year 1999 but never reached their goals, whereas others decreased their share each year and still met or exceeded their WOSB goals. For example, EPA increased its share of expenditures for prime contracts with WOSBs by about 50 percent between fiscal year 1996 and fiscal year 1999, yet it never met its WOSB prime contracting goal (which doubled over the period).
On the other hand, VA's share of prime contract expenditures for WOSBs decreased from 5.8 percent in fiscal year 1996 to 5.6 percent in fiscal year 1999, yet VA met its goal each year, as its goal increased from 4 percent to 5 percent over the period. The same inconsistency emerged in these agencies' shares of subcontracts with WOSBs. Sixteen agencies maintained or increased their share of subcontract expenditures for WOSBs between fiscal year 1996 and fiscal year 1999, but only four of these agencies—State, NASA, Interior, and Treasury—met their WOSB subcontracting goal each year. AID reduced its share of subcontract expenditures for WOSBs between fiscal year 1996 and fiscal year 1999, yet it reached its WOSB subcontracting goal each year.

Correlating agencies' success in meeting their WOSB goals with their success in increasing shares of prime and subcontract expenditures for WOSBs is complicated by the fact that individual agencies' goals can and sometimes do move up or down each year. Moreover, there was not always a correlation between an agency's level of procurement from WOSBs and its goals. For example, some agencies with relatively low shares of prime contracts awarded to WOSBs appear to have been able to negotiate lower goals with SBA. NASA, with a historically low level of procurements from WOSBs, had a WOSB prime contracting goal of 1.4 percent in fiscal year 1999 (and exceeded its goal by awarding 1.64 percent of prime contracts to WOSBs). In the same year, Justice had a goal of 3 percent and exceeded it by awarding 3.27 percent to WOSBs. Others, like DOD and the Department of Education, had rates of prime contract awards consistently under 2 percent over the period, yet both had goals of 5 percent in fiscal year 1999.

Government contract expenditures are concentrated in just a few agencies and dominated by DOD. Table 2 shows the share of total prime contract expenditures and WOSB prime contract expenditures by the top 20 agencies (as well as all others combined) in fiscal year 1999. As shown in the table, the four largest procuring agencies—DOD, DOE, NASA, and GSA—together accounted for over 82 percent of all federal prime contracts. The same four agencies accounted for about 71 percent of the contracts with WOSBs. In fiscal year 1999, DOD alone accounted for over 64 percent, or nearly $120 billion, of all federal prime contract expenditures. DOD awarded 1.9 percent of its prime contracts to WOSBs, representing approximately $2.3 billion, or about 50 percent of all federal prime contract expenditures with WOSBs in 1999.

For two of the top four agencies, their share of total procurement from WOSBs is smaller than their share of total federal procurement. For example, DOD accounts for 50.2 percent of the government's total purchases from WOSBs but 64.4 percent of the value of all federal procurements. Similarly, NASA, with its 4-percent share of total procurements from WOSBs, accounts for 5.9 percent of total federal procurements. The other two of the largest procuring agencies had a larger share of total procurements from WOSBs than of total federal procurements: DOE accounts for 8.7 percent of federal purchases from WOSBs while accounting for 8.4 percent of total procurement, and GSA accounts for 7.7 percent of federal procurements from WOSBs while accounting for 4 percent of federal procurements.
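DOD's weight in these totals places a hard arithmetic ceiling on the governmentwide WOSB rate, a point developed below. The following sketch uses the fiscal year 1999 figures cited above; the assumption that every other agency awards exactly 5 percent of its prime contract dollars to WOSBs is ours, chosen for illustration.

```python
# Weighted-average sketch: why DOD's shortfall caps the governmentwide rate.
# Shares are fiscal year 1999 figures from this report; the 5 percent rate
# assumed for all other agencies is an illustrative assumption.

dod_share = 0.644       # DOD's share of all federal prime contract dollars
dod_wosb_rate = 1.9     # percent of DOD prime dollars awarded to WOSBs
others_wosb_rate = 5.0  # assume every other agency exactly met the 5 percent goal

governmentwide = dod_share * dod_wosb_rate + (1 - dod_share) * others_wosb_rate
print(f"governmentwide WOSB rate: {governmentwide:.1f} percent")  # about 3.0 percent

# Rate other agencies would collectively need for the government as a whole
# to reach 5 percent, given DOD's 1.9 percent:
needed = (5.0 - dod_share * dod_wosb_rate) / (1 - dod_share)
print(f"rate needed from other agencies: {needed:.1f} percent")   # about 10.6 percent
```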
Without benchmarks of realistic WOSB contracting goals for individual agencies, it is unclear whether the level of an agency's contracting with WOSBs represents successful outreach efforts or a better match between what the agency buys and what WOSBs produce in these industries. Nevertheless, the dominance of DOD in both total contracts and contracts with WOSBs justifies special attention. The government's success or failure in meeting the annual 5-percent governmentwide goals for prime and subcontracting with WOSBs depends to a large extent on DOD's ability to meet its WOSB goals. Because DOD achieved less than half of its 5-percent goal for prime contracts with WOSBs in fiscal year 1999, the governmentwide goal for prime contracting with WOSBs could not have been met even if every other federal agency had reached its WOSB prime contracting goal, as the sketch above illustrates. Only by substantially exceeding their cumulative WOSB goals for prime contracts could other federal agencies have compensated for DOD's shortfall.

In fiscal years 1996 through 1999, WOSBs received a large majority of new federal prime contract awards on the basis of their status as another type of small business rather than as a WOSB. Our analysis of FPDS data on new prime contract awards for each of these years showed that WOSBs received the majority of their federal contract dollar awards as SDBs or 8(a) firms. In fiscal year 1996, about 57 percent of the new federal contract dollars awarded to WOSBs were awarded to WOSBs that qualified for the contracts as another type of small business. In fiscal years 1997, 1998, and 1999, the figure was about 70 percent. Table 3 illustrates the expenditures for new contracts with WOSBs and the expenditures for WOSBs that qualified for the contracts under other small business programs since fiscal year 1996.

We found wide consensus among the government contracting officials we contacted about obstacles to increasing government contracting with WOSBs. In fulfilling the mandate for this review, we interviewed officials throughout the federal government—chief procurement officers, line contracting officers, and small business advocacy officials at agencywide and program-specific levels—and solicited their views based on their direct experiences and responsibilities for federal contracting and meeting socioeconomic goals. These officials most frequently cited two obstacles to increasing federal contracting with WOSBs: the numerous and complex federal contracting programs for small businesses and the absence of a specific contracting program targeting WOSBs. Other obstacles cited by officials, but with less consensus, included the practice of contract consolidation (including bundling), which they believe can deny a reasonable opportunity for WOSBs and other small businesses to compete for some procurements; a lack of commitment or accountability of agency executives, contracting officials, and/or program managers to increasing contracting with WOSBs and meeting the WOSB goals; a lack of sufficient WOSB access to working capital; a lack of qualified WOSBs competing in some areas; and resource constraints that limit federal agencies' efforts to monitor and enforce the plans submitted by prime contractors for subcontracting with small businesses, including WOSBs.
While recognizing the value and public policy purpose of meeting socioeconomic goals for small businesses, government contracting officials at all levels told us that they were generally overwhelmed by the number and complexity of the requirements of small business contracting programs and their related goals. More specifically, these officials believe that the programs tend to crowd out WOSBs. They stated that, for some procurements, federal agencies are required by law to consider and give preference to certain categories of small businesses other than WOSBs when awarding contracts and that each of these programs has different rules, regulations, and eligibility criteria. Also, they said that the type of program for each small business category varies. For example, depending on the small business program involved, some procurements may be set aside for exclusive small business participation, some procurements may use price evaluation adjustments, and others may be conducted on a sole-source basis. Table 4 shows the different statutory governmentwide small business contracting goals.

The following small business contracting programs are used, as applicable, to reach the small business goals set forth in table 4:

Small business reservation ($2,500 to $100,000)
Small business set-asides (total or partial)
8(a) Business Development Program (sole-source or competitive)
Emerging small business set-asides of the Small Business Competitiveness Demonstration Program
Very small business set-asides of the Very Small Business Pilot Program
HUBZone small business set-asides (competitive or noncompetitive)
HUBZone price evaluation preference
SDB price evaluation adjustment
SDB participation program (sole selection factor or monetary incentive for actual SDB subcontracting)
Subcontracting plans for small businesses, SDBs, HUBZone small businesses, WOSBs, and veteran-owned small businesses

The officials said that these small business programs both potentially reduce the number of contracts available to WOSBs and cause contracting officers to spend significant amounts of time administering the programs' often complex implementation requirements. Thus, they said that the time available for them to reach out to WOSBs for contracting purposes is significantly reduced. Officials noted that the situation has been exacerbated by reductions in the acquisition workforce and the addition of new small business contracting programs and requirements. It was also noted that before using sources such as small businesses, contracting officials are required to consider using certain other sources of supply, such as Federal Prison Industries and the Committee for Purchase from People Who Are Blind or Severely Disabled, and this potentially reduces contracts available to WOSBs.

As the list following table 4 indicates, WOSBs had no vehicle that helped contracting officials target prime contract awards to them before the Small Business Reauthorization Act of 2000 was enacted. According to the contracting officials we contacted, unless a WOSB also meets the requirements of one of these other small business programs, the WOSB will usually have to compete with businesses in these targeted groups and with other businesses for federal contracts. Furthermore, they said that, depending on the procurement, government agencies might be required to provide targeted contracting opportunities to these other groups. An OFPP official also said that this could result in other small businesses receiving preference over WOSBs.
Thus, officials generally agreed that without a specific vehicle or targeted contracting program, agencies cannot award contracts to WOSBs as effectively as they can to some other small businesses, and so cannot as effectively meet their WOSB contracting goals.

According to government procurement officials, reductions in the acquisition workforce have increased the practice of contract consolidation by agencies, which reduces the opportunities for WOSBs to obtain some government contracts. They said that, to streamline and reduce contract administration costs, federal agencies sometimes combine a number of smaller contracts, which individually might be sought by and awarded to small businesses, including WOSBs, into fewer contracts. According to these officials, after the consolidation, the contract requirements sometimes become too large, complex, or geographically dispersed to be managed by a small business, thus making it more difficult to award these contracts to a WOSB or any small business.

A subset of consolidated contracts has been defined by the Small Business Reauthorization Act of 1997 (P.L. 105-135) as "bundled contracts." Specifically, the act defines bundling of contract requirements as the consolidation of two or more procurement requirements for goods or services previously provided or performed under separate, smaller contracts into a solicitation of offers for a single contract that is likely to be unsuitable for award to a small business concern because of specified factors. The act requires federal agencies to avoid unnecessary and unjustified bundling of contract requirements that precludes small businesses from participating in procurements as prime contractors. It also requires each federal agency to promote small businesses' participation by structuring its contracting requirements to facilitate competition by and among small businesses.

Representatives of WOSBs generally regard contract bundling as an obstacle to increasing their contracting with the federal government. However, we have been unable to confirm this. In conducting a review of contract bundling earlier this year, we reported that data were not currently available to determine its impact on small businesses. We recommended that SBA develop a strategy setting forth how the agency can best achieve the results desired from oversight of contract bundling by considering the staffing resources and training needed, the timely resolution of potential bundling cases, and constraints the agency faces in implementing the strategy. We also reported that the only study concluding that contract bundling negatively affected small businesses provided no convincing evidence of such an effect.

According to some of the government procurement officials with whom we spoke, some federal agency officials are not committed to increasing contracts with WOSBs and meeting the WOSB contracting goals. They said that some agencies do not hold their procurement officials accountable for meeting the goals, so contracting personnel there are not fully committed to using the tools available to them for increasing procurement opportunities for small businesses, including WOSBs, and meeting agency WOSB contracting goals. Several contracting officials said that support for and commitment to the goals at the highest organizational levels within agencies were needed for contracting personnel to be committed to increasing contract awards to WOSBs and meeting the related goals.
They said that this support and commitment are lacking; frequently, no individual within an agency is responsible and held accountable for meeting WOSB goals. A contracting officer at DOE told us he believes that an important reason DOE does as well as it does in meeting its WOSB goals is the strong support throughout the agency for awarding contracts to WOSBs. DOE officials further noted that the performance expectations and pay considerations of its contracting officials are linked to DOE's achievement of the goals. Similarly, procurement officials from NASA told us that the performance expectations for NASA's contracting officers include meeting small business contracting goals, including those for WOSBs. They believe that this expectation is partially responsible for NASA's meeting many of its small business contracting goals.

A study by the National Women's Business Council (NWBC) also emphasized the importance of accountability. It concluded that a key element of any successful supplier diversity program is commitment from the top. The study said that supplier diversity programs are successful only when an organization's senior officials promote women as suppliers and vendors and include them as part of the acquisition strategy. Representatives from the Women's Business Center (WBC) of the Washington, D.C., Metropolitan Area and from the Dallas-Ft. Worth Chapter of the National Association of Women Business Owners (NAWBO) told us that the limited support and accountability for increasing contracting with WOSBs from officials within federal agencies has hindered the growth of contracting with WOSBs. A representative from the Women's Business Enterprise National Council (WBENC) also told us that she believes some agencies do not reach their contracting goals because federal contracting officials do not look hard enough for available WOSBs.

A procurement official and representatives of some of the women business owners' associations told us that a lack of sufficient access to working capital discourages some WOSBs from competing for government contracts. To meet ongoing capital expenses, contractors sometimes receive contract financing, such as progress payments based upon costs incurred in certain large fixed-price contracts and subcontracts in which the first delivery occurs several months after the award has been made. Under the Federal Acquisition Regulation, the customary progress payment rate is 85 percent of the total costs incurred under a contract with a small business. However, some believe that such provisions do not always provide WOSBs with the amount of working capital necessary to compete for government contracts. Some agencies have instituted higher customary progress payment rates for small businesses and SDBs. For example, DOD has customary progress payment rates of 90 percent for small businesses and 95 percent for SDBs (the sketch following this discussion illustrates the working capital difference these rates make).

According to some contracting officials, some agencies have difficulty meeting their WOSB contracting goals because few qualified WOSBs compete for government contracts in the fields in which those agencies are procuring goods or services. They said that this obstacle remains, even though many agencies have developed outreach efforts to find and encourage WOSBs to compete for government contracts. Some of these officials said that WOSBs do not always compete for government contracts because they perceive the federal procurement process as too complex, they lack the expertise to meet many of the procedural requirements, or both.
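Returning to the progress payment rates discussed above, the following is a minimal sketch of the arithmetic. The 85, 90, and 95 percent rates are those cited in this report; the incurred cost figure is a hypothetical chosen for illustration.

```python
# Progress payment arithmetic for a fixed-price contract. The rates are those
# cited above (the FAR customary rate and DOD's higher rates); the $1 million
# in incurred costs is a hypothetical figure for illustration.

RATES = {
    "FAR customary rate, small business": 0.85,
    "DOD rate, small business": 0.90,
    "DOD rate, SDB": 0.95,
}

costs_incurred = 1_000_000  # hypothetical costs incurred to date

for label, rate in RATES.items():
    print(f"{label}: ${costs_incurred * rate:,.0f}")
# Each 5-point increase in the rate frees up $50,000 of working capital per
# $1 million of incurred cost, illustrating why some believe the customary
# 85 percent rate can leave WOSBs short of working capital.
```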
One senior procurement executive with 15 years of experience stated that even though women business-owner associations and others generally say that women-owned businesses, including WOSBs, are not given opportunities to participate in the procurement process, he believes that the problem often is that these businesses do not seek contracting opportunities with the federal government. A representative from an association of women business owners we contacted agreed that some WOSBs view the federal procurement process as complex and costly. She conjectured that this complexity, combined with an anticipation of limited success in winning awards, might keep many WOSBs from submitting offers for government contracts. According to federal procurement officials with whom we spoke, reductions in the federal acquisition workforce mean that fewer agency contracting personnel must meet an increasing workload. They said that they often lack the resources to effectively oversee and administer contractors’ performance. Thus, according to these officials, agencies’ contracting personnel do not always monitor and enforce plans submitted by prime contractors for subcontracting with small businesses, including WOSBs. Without appropriate monitoring and enforcement, these officials said, prime contractors do not always follow through with their plans to award small business subcontracts. Similarly, an official of GSA’s Office of Enterprise Development stated that often little oversight or enforcement occurs after a prime contractor’s plan for subcontracting with small businesses is approved. The official said that her office is constantly asked by small business representatives to request changes to legislation that would provide for greater enforcement of subcontracting plans and make government contracting officers accountable if prime contractors did not honor these plans. Finally, she said that because prime contractors have such an uneven record of using WOSBs and other small businesses, some WOSBs (as well as other small businesses in general) become frustrated and discouraged from pursuing federal subcontracting opportunities. An opportunity for improving the enforcement of subcontracting plans is presented by the recent executive order for increasing opportunities for WOSBs. The order requires each agency to work closely with SBA, OFPP, and others to develop procedures to increase compliance by prime contractors with subcontracting plans, including subcontracting plans involving WOSBs. Among the government contracting officials with whom we spoke, there was general agreement on several suggestions for improving the environment for contracting with WOSBs and increasing federal contracting with WOSBs. They suggested creating a contract program targeting WOSBs; focusing and coordinating federal agencies’ WOSB outreach activities; promoting contracting with WOSBs through agency incentive programs and including WOSBs in agency mentor-protégé programs; providing more information to WOSBs about participation in teaming arrangements; and providing expanded contract financing. Simultaneously, officials cautioned that some of these suggestions could lead to unintended consequences (notably a possible reduction in procurements from other small business groups) and that even if the suggestions were implemented, the 5-percent governmentwide goals might still not be achievable.
Federal contracting officials, particularly those in PEC and the OSDBU Council, generally agreed that a targeted contracting program, such as one that would grant authority to restrict competition for contracts specifically to WOSBs, could help increase the number of contracts awarded to WOSBs. In addition, 22 of the 30 contracting officers we contacted from four federal agencies awarding large numbers of contracts each year stated that such a program could be a valuable tool in their efforts to increase contracting with WOSBs and help them achieve their WOSB contracting goals. Despite the broad consensus on the need for a formalized vehicle to target qualified WOSBs for federal contracts, many of these same officials expressed concern that creating such a program would add to the already large and complex universe of federal contract programs for small businesses. Also, some of these officials said they recognize that such a program would not guarantee that the 5-percent governmentwide goals for prime and subcontracts would be achieved. For example, not enough qualified WOSBs might exist in those industries in which much federal procurement occurs, meaning that a new contracting program might not be sufficiently effective in increasing contracting with WOSBs to achieve the governmentwide goals. In addition, officials said that because a number of small business groups already must be considered for government contracts, they would have to make choices between competing small business groups and, ultimately, there might not be enough contracts to go around. Thus, while WOSBs might benefit from a new targeted contracting program, such a program might compete with existing programs for other small business groups. To combat this problem, some officials suggested broadening or combining existing programs to incorporate WOSBs (such as broadening the definition of a small disadvantaged business to encompass all WOSBs). Several of the representatives from the women’s groups we contacted believed that a targeted contracting program would help to accelerate progress toward achieving the WOSB goals. The Small Business Reauthorization Act of 2000 authorized a targeted contracting program for certain WOSBs. Contracting officers are authorized to restrict competition for contracts for supplies or services in certain industries when specific conditions are met (the contract is for goods or services in an industry identified by SBA where WOSBs are underrepresented in federal procurement, two or more WOSBs who are economically disadvantaged are expected to compete, the award price does not exceed $3 million—or $5 million for manufacturing—and the award can be made at a fair and reasonable price). 
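The parenthetical conditions above can be read as a four-part eligibility test. The sketch below simply restates those statutory conditions in code form; it is an illustration of the logic, not SBA's or any agency's actual implementation, and the sample inputs are hypothetical.

```python
def may_restrict_to_wosbs(industry_underrepresented: bool,
                          expected_ed_wosb_offerors: int,
                          award_price: float,
                          is_manufacturing: bool,
                          price_fair_and_reasonable: bool) -> bool:
    """Restates the conditions in the Small Business Reauthorization Act
    of 2000 for restricting competition to economically disadvantaged
    WOSBs (illustrative only)."""
    price_cap = 5_000_000 if is_manufacturing else 3_000_000
    return (industry_underrepresented           # SBA-identified industry
            and expected_ed_wosb_offerors >= 2  # two or more expected offerors
            and award_price <= price_cap        # $3 million, or $5 million for manufacturing
            and price_fair_and_reasonable)      # fair and reasonable price

# Hypothetical services procurement in an SBA-identified industry:
print(may_restrict_to_wosbs(True, 3, 2_500_000, False, True))  # True
```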
The program’s implementation, however, will require several actions by SBA, including (1) the completion of a study to identify industries in which WOSBs are underrepresented and substantially underrepresented in federal procurement; (2) the establishment of a process for approving federal, state, or national certifying entities to complete certifications of WOSBs; (3) the establishment of standards for documentation to be required by contracting officers to support certifications of WOSBs; (4) the development of criteria for determining industries where WOSBs are “substantially” underrepresented and therefore eligible for a waiver of the requirement that WOSB firms be economically disadvantaged to benefit from the program; and (5) the establishment of procedures to challenge firms’ eligibility and a program to verify firms’ eligibility. Contracting officials from the Air Force, NASA, DOE, and GSA said that their respective agencies have active and focused WOSB outreach programs. They said that these programs attempt to attract WOSBs to contracting opportunities and that they teach these businesses how to compete for federal contracts, navigate through the federal procurement process, and complete necessary paperwork. For example, Air Force small business advocates told us they had effective outreach efforts educating WOSBs on technical requirements and encouraging WOSBs to gain needed skills. Other officials with whom we spoke said that giving greater effort and focus to some federal agencies’ outreach activities could increase the number of contracts with WOSBs. In particular, PEC officials agreed that agencies needed to be more collaborative in their outreach efforts to WOSBs. PEC and other officials said that greater emphasis by federal agencies on the development of training programs and seminars for WOSBs could help increase their participation in the federal procurement process. Specifically, they said that with more focused outreach programs, agencies could (1) better identify qualified WOSBs in specific industries and (2) encourage WOSBs to participate in the federal contracting arena (by teaching them about the federal procurement process and strategies for preparing more competitive contract offers). While some officials suggested more focused outreach programs, PEC and DOD officials viewed greater coordination and consolidation of agencies’ outreach efforts as necessary to make the most of government and WOSB resources. Specifically, these officials believed such coordination would reduce overlap and duplication of outreach efforts by agencies, reduce the multiplicity of outreach conferences faced by individual WOSBs, and provide more comprehensive and relevant sources of procurement information and networking opportunities to WOSBs. Officials stated that proper coordination would allow WOSBs to be more selective in the outreach conferences they attend and enable government officials to provide them with greater access to key program and contracting personnel. The recent executive order on increasing opportunities for WOSBs requires federal agencies to work with SBA in making outreach efforts and preparing plans to target WOSBs for greater participation in the procurement process. Implementation of the executive order could provide an appropriate opportunity to address the suggestion for greater coordination and consolidation of the government’s outreach efforts. 
A number of the procurement officials we contacted said that recognition and incentive programs should be developed or improved to increase federal contracts with WOSBs. They said that such programs increase awareness of the importance the agency places on the WOSB goals and recognize the accomplishments of agency procurement officials, as well as those of prime contractors who successfully meet their plans for subcontracting with WOSBs. They cited a Treasury program that annually rewards staff and prime contractors who have significantly contributed to the agency’s efforts to contract with small businesses, including WOSBs. Establishing similar rewards and programs at other agencies may be a simple way to increase incentives for agency officials to use WOSBs in their agencies’ procurements. Although in 1999 SBA established a governmentwide recognition effort, the Frances Perkins Vanguard Award, to recognize federal agencies and prime contractors for their efforts in contracting and subcontracting with WOSBs, some officials said that agency-level recognition awards may provide a more immediate incentive for agency outreach to WOSBs. Some officials suggested that WOSBs could benefit by participating to a greater extent in agency mentor-protégé programs. For example, an OSDBU official at NASA was very enthusiastic about NASA’s achievements using the mentor-protégé program to assist WOSBs. Furthermore, the NWBC study on best practices mentioned earlier describes how some agencies have established mentor-protégé programs to develop and increase the number of WOSBs as subcontractors. The study points out that under these programs, both mentors and protégés benefit when agencies provide financial incentives to prime contractors to help small businesses, including eligible WOSBs, enhance their technical capabilities for participation as subcontractors and suppliers for government and commercial contracts. According to the study, at some agencies the mentor-protégé program provides incentives for mentors to establish and implement a developmental assistance plan to enable the protégé company to compete more successfully for prime and subcontract awards. The study stated that potential benefits to protégé companies include technical advice, market access, credibility, financial support, and the possibility of partnering with other businesses to enable them to better compete for federal contracts. The recent executive order on increasing opportunities for WOSBs lists the implementation of mentor-protégé programs that include WOSBs among the steps agencies should take to maximize WOSBs’ participation in the procurement process. Recently, section 807 of the National Defense Authorization Act for Fiscal Year 2001 made WOSBs eligible for participation as protégé firms under DOD’s statutory mentor-protégé program. The suggested inclusion of WOSBs in agencies’ mentor-protégé programs thus appears to be under way. Some officials suggested that WOSBs could benefit from being made aware that teaming with other small businesses (and in some cases, large firms) could enhance their competitiveness for certain procurements. For example, DOD officials said that teaming was an excellent tool that DOD uses to increase contracting with WOSBs.
In addition, under the recent executive order, SBA is required to offer a program of training and development seminars and conferences to instruct women business owners on how to participate in SBA’s 8(a) program, the SDB program, the HUBZone program, and other small business contracting programs for which they may be eligible. Since these programs allow teaming arrangements, the training and development seminars and conferences the executive order requires SBA to provide to WOSBs could include information or instruction on such teaming arrangements. For example, a qualified WOSB may enter into a joint venture with a qualified HUBZone small business for the purpose of performing a specific HUBZone contract so long as each business is small under the applicable size standard and the procurement exceeds a certain value. For competitive 8(a) program procurements, a WOSB may enter into a joint venture or teaming arrangement with at least one 8(a) participant without regard to small business size standards if certain conditions are met. Clearly, WOSBs could benefit from outreach efforts that include information or instruction on participation by WOSBs in teaming arrangements or joint ventures for procurements where such arrangements are permitted. To address the problem of limited access to working capital experienced by many WOSBs (and other small businesses), contracting officials suggested that expanded contract financing, such as advance payments and higher rates of progress payments under government contracts, could be helpful. This idea is consistent with NWBC’s recommendations. NWBC has suggested that by increasing the customary progress payment rate for WOSBs to 95 percent and lowering the threshold for inclusion of customary progress payments in contracts with WOSBs to the lower threshold ($50,000) that DOD uses for SDBs, more working capital might flow to WOSBs. Federal procurement officials frequently mentioned that the WOSB contracting goals established for individual agencies are unrealistic: that is, they are established without regard to the capability and availability of WOSBs in the specific industries from which federal agencies procure their goods and services. Officials in contracting agencies and SBA agreed that the goals are not based on an analysis of the presence, capability, or interest of WOSBs in the business sectors or industries in which government agencies make most of their purchases. According to a number of contracting officials, it makes no difference how hard contracting officers try to meet a goal if WOSBs are not in a specific location or industry where an agency procurement is to be made. If the WOSBs in the industry are not capable of performance, do not want the contract, or do not respond to contract solicitations, the agency cannot award the contract to a WOSB. Ten of the 30 contracting officers with whom we spoke stated that they specifically had experienced difficulties identifying WOSBs for certain contract solicitations. They said this was particularly a problem for contracts with specialized requirements, such as those for weapons systems or highly technical services. Contracting officers at NASA and DOE told us that for many of their procurements for highly technical research and development projects, they often have difficulty identifying qualified WOSBs.
These contracting officials further said that they believe the credibility of the WOSB contracting program could be enhanced if the individual agency goals better reflected the number of WOSBs in locations and industries where government agencies purchase goods and services. SBA officials have recognized the absence, but also the potential usefulness, of information on the presence of WOSBs in various industries when establishing individual agency goals. In our discussions with SBA officials, they expressed an interest in analyzing new data just becoming available from the 1997 business census. However, they said that they have lacked sufficient resources to perform the breadth of analysis needed. They cited the complexity of the effort undertaken by experienced census staff who had completed a similar analysis for the Department of Commerce; that study sought to identify industries in which disadvantaged businesses were underrepresented in federal procurement. In commenting on a draft of this report, SBA said that it agreed that a disciplined study of WOSBs in different industries must be performed and, in accordance with Executive Order 13157 and applicable laws, this study will be done. PEC officials told us that included in PEC’s strategic plan for fiscal years 2001 through 2005 is an objective to improve those goal-setting processes and achievement measures for contracting with small businesses that align agency missions with procurement and socioeconomic goals. They believe such an effort, which would yield greater consensus in the federal community on the purpose and intended outcome of small business contracting programs, could improve the effectiveness of these measures. A representative from NAWBO with whom we spoke questioned whether federal agency officials always make concerted efforts to find WOSBs to help meet their goals or whether they find it easier to go with previous suppliers. A representative from the WBENC told us that she believes some agencies fail to reach their contracting goals for WOSBs because of a lack of effort on the part of federal contracting officials—not because qualified WOSBs are unavailable. A representative from WBC told us that she agrees with these views and added that there may not be enough WOSBs in certain locations and industries where some agencies have specialized requirements. On the other hand, representatives of several of these organizations indicated they view proposals to reexamine the validity of WOSB contracting goals as largely an effort to lower goals rather than to increase contract awards to WOSBs. As the FASA conference report recognized, reaching the 5-percent governmentwide goals for WOSBs will take time: Even though federal contracting with WOSBs increased at a faster rate than overall federal contracting during the past 4 years, limited progress has been made in achieving these governmentwide goals. Furthermore, the apparent lack of correlation between individual federal agencies’ success in increasing their contracting with WOSBs and in meeting their WOSB contracting goals makes it difficult to tell which agencies are making progress and which strategies are most effective. Moreover, even if a number of agencies reach their goals, there appears to be little likelihood that the 5-percent governmentwide goals can be met until DOD—with nearly two-thirds of all federal procurements—comes closer to reaching its goals. 
Many of the initiatives suggested by contracting officials for increasing contracting with WOSBs merit further examination, such as mentor-protégé programs, teaming, expanded contract financing, and more focused and coordinated outreach activities. Important issues have also been raised about how reductions in the acquisition workforce may be affecting the oversight of subcontracting plans or affecting contracting strategies—by, for example, increasing agencies’ use of consolidated contracts or other vehicles that may be disadvantageous to all small businesses. However, analyzing the benefits and effects of the various suggestions we received from federal contracting officials or women business-owner organizations for increasing the number of federal contracts awarded to WOSBs was beyond the scope of our review. At the same time, a number of these suggestions are covered by the recent executive order assigning a leadership role to the head of SBA’s Office of Federal Contract Assistance for Women Business Owners to expand contracting opportunities for WOSBs. As a result, while the congressional mandate for this study called for us to make recommendations we consider appropriate for actions that might increase federal contract awards to WOSBs, we conclude that further analysis is required before any particular strategies can be endorsed. There was wide consensus among contracting officials about the value of developing an analytical foundation for agency goals, even though this was not directly a strategy to increase contracting with WOSBs. They noted that more realistic goals could improve the credibility of the program and the feasibility of developing stricter accountability for achieving the goals. PEC officials report a related effort under way to develop more complex and meaningful indicators of the success of socioeconomic procurement programs. They believe such an effort, which would yield greater consensus in the federal community on the underlying purpose and intended outcome of small business contracting programs, could improve the effectiveness of these measures. The Small Business Reauthorization Act of 2000 requires SBA to conduct a study to identify industries in which WOSBs are underrepresented in federal contracting. While such a study is essential for identifying industries eligible for the newly authorized WOSB contracting program, its analysis could also be more broadly useful for improving the realism of, and then the accountability for, agency WOSB contracting goals. Without information on the representation of WOSBs in the industries in which federal agencies procure goods and services, SBA has not been able to ensure that its agency goals represent “the maximum practicable opportunity” for participation in agencies’ contracts or “realistically reflect the potential of small business to perform such contracts,” as called for in the Small Business Act, while at the same time ensuring that the cumulative goals for all agencies meet or exceed the governmentwide goal. The analysis of the representation of WOSBs in various industries required by the new legislation could provide a foundation for SBA to establish more realistic agency WOSB contracting goals and thus provide a solid basis for holding agencies accountable for achieving those goals.
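Neither the act nor this report prescribes a methodology for the underrepresentation study. As a rough illustration of the kind of analysis involved, disparity studies often compare a group's share of contract dollars in an industry (utilization) with its share of firms in that industry (availability); the sketch below computes such a ratio with wholly hypothetical figures and an arbitrary reading of the result.

```python
# One plausible form of underrepresentation analysis (illustrative only):
# compare WOSBs' utilization (share of contract dollars) with their
# availability (share of firms). All figures below are hypothetical.

def disparity_ratio(wosb_dollars: float, total_dollars: float,
                    wosb_firms: int, total_firms: int) -> float:
    utilization = wosb_dollars / total_dollars
    availability = wosb_firms / total_firms
    return utilization / availability  # values well below 1.0 suggest underrepresentation

ratio = disparity_ratio(wosb_dollars=40e6, total_dollars=2e9,
                        wosb_firms=1_200, total_firms=15_000)
print(f"{ratio:.2f}")  # 0.25 in this hypothetical industry
```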
Determining more realistic goals for DOD, in particular, would provide an objective basis for reconciling the conflicting claims of contracting officials and women business-owner organizations over whether WOSBs are unavailable in various sectors or federal contracting officials are not trying hard enough to identify and work effectively with WOSBs. Before the new contracting program for WOSBs was authorized, SBA officials had questioned their capacity to perform the breadth of analysis needed to determine the representation of WOSBs in various sectors. Given that the implementation of the new WOSB contracting program depends on a wide range of actions by SBA and that SBA has concerns about its capacity to complete the required steps, the Congress may benefit from being kept informed as SBA develops its strategy for responding to the new legislative requirements. We recommend that the Administrator, SBA, direct the new Office of Federal Contract Assistance for Women Business Owners to evaluate the benefits and effects of the suggestions for increasing federal contracting with WOSBs that surfaced in this review. These include the implementation of agency mentor-protégé programs that include WOSBs, measures to facilitate teaming arrangements and expand contract financing, and initiatives to consolidate and improve the efficiency and effectiveness of outreach efforts. Another issue worthy of further study is the extent to which pressures on the acquisition workforce may be contributing to contracting practices that reduce opportunities for small businesses. These actions are consistent with the broad authority as well as the specific direction given to SBA under the recent executive order. Any of the suggestions deemed feasible should be considered for implementation. In addition, we recommend that the Administrator, SBA, include in SBA’s mandated study of industries in which WOSBs are underrepresented sufficient analysis to establish more realistic agency-specific annual goals for prime and subcontracts with WOSBs. Given DOD’s predominance in government contracting, we believe that SBA would benefit from the active collaboration and support of DOD in performing the study. SBA should also keep the Congress informed as it develops a strategy for implementing the new provisions designed to expand federal contracting with WOSBs. In particular, SBA should notify its authorizing and appropriations committees if it determines that its capacity to implement the key provisions of the legislation will be impaired by insurmountable resource constraints. We requested comments on a draft of this report from the Administrator, SBA, and officials from OMB/OFPP, as well as from PEC and the Council of Offices of Small and Disadvantaged Business Utilization. We also requested comments from the four major contracting agencies from which we surveyed contracting officials—DOD, NASA, DOE, and GSA. Furthermore, we requested comments from representatives of the four women’s business organizations with which we had discussions during the course of our review. We received responses from both PEC and the OSDBU Council and from each of the government agencies except DOE. Each agency, except GSA, which had no general comments, said it generally agreed with our report and supported our recommendation. Each of the four women’s business organizations provided us with comments.
Three generally agreed with the report and said that it provided an accurate overview of the federal procurement environment facing WOSBs. One—NWBC—raised a number of concerns about the methodology of our study and about our recommendation in the draft report. We received written comments from the SBA Administrator. She stated that SBA appreciated our thorough investigation and the care we had taken in framing the issues and possible solutions involved in reaching higher percentages of federal contract and subcontract dollars for WOSBs. Furthermore, she stated that SBA generally agrees with the recommendation in the report that SBA complete a disciplined study of the percentages and numbers of WOSBs in different industries. She said that in accordance with Executive Order 13157 and the applicable laws, this would be done by the new Office of Federal Contract Assistance for Women Business Owners in SBA’s Office of Government Contracting. Additionally, she stated that, as suggested in our report, SBA is currently assessing its capacity to implement the WOSB provisions of the recently passed Small Business Reauthorization Act and will notify the appropriate congressional committees if additional resources are required. The head of SBA’s Office of Government Contracting said that the agency generally agreed with our recommendation for further analysis of the benefits and effects of the suggestions we received from federal contracting officials for increasing contracting with WOSBs. However, she reiterated a concern about whether resources would be available. The SBA Administrator additionally provided us with several technical comments that we incorporated into the final report as appropriate. Appendix III contains the written comments we received from the SBA Administrator. We received oral comments from the Associate Administrator for Procurement Law, Legislation and Implementation at OMB/OFPP. She said that OFPP generally concurred with the report and provided us with several technical comments, which we incorporated into the final report as appropriate. We received oral comments from the co-chair and several members of the PEC socioeconomic committee, reflecting comments collected from additional members of that committee. They indicated that the report did a very good job of reflecting the contracting environment in agencies. They strongly supported the recommendation to improve the analytical foundation for annual agency goals but noted that more complex measures of success besides contracting “share” were needed. They noted that they were forming an interagency group to better capture the success of various agencies’ efforts. PEC officials also noted that while the report captured the complexity of the diverse small business contracting programs, the draft did not reflect several required sources of government supply and services, such as Federal Prison Industries and the Committee for Purchase from People Who Are Severely Blind or Severely Disabled, that must be considered before awarding contracts to small businesses. We added a reference to these programs. PEC officials also indicated that there were opportunities to reduce the overlap and improve the efficiency of agencies’ outreach efforts. They believed agencies could collaborate and reduce the multiplicity of outreach conferences, thereby saving costs and time for both government agencies and targeted small businesses.
We incorporated these observations into the discussion of both barriers and suggestions for increasing contracting with WOSBs. Finally, PEC officials emphasized that, notwithstanding the potential noted above for some improvements in the efficiency of outreach efforts, attempts to increase contracting within various small business contracting programs are largely a zero-sum problem—that is, increased efforts in one area are likely to adversely affect other programs. They noted that the workload pressures on contracting officers are contributing further to the use of contracting practices that some have alleged limit opportunities for small businesses to compete for federal contracts. We expanded our discussion of these issues and added an observation about the value of overseeing this trend and its possible adverse effects on WOSBs and other small business groups. We also received oral comments from the co-chair of the OSDBU Council, who represented the views of a number of the Council’s members. The Council members were generally positive about the report and supported the recommendation but expressed concern that the significant efforts and successes of various agencies were not reflected in the report. They believed the draft overemphasized the achievement of goals, particularly noting that we described how no agency had met both its prime and subcontract goals over the 4-year period examined. They stated that this focus failed to recognize that some agencies set “stretch” goals and that some agencies have continually achieved or exceeded procurement levels for WOSBs above the 5-percent governmentwide goal. We retained our observations about the achievement of agency goals since individual agency goals (whether above or below 5 percent) are key to the achievement of the governmentwide goal. However, we did incorporate a reference to the one agency that has continually achieved or exceeded the 5-percent governmentwide goal. OSDBU Council members also provided technical comments that we incorporated into the report as appropriate. We received written comments from DOD and NASA. The Under Secretary of Defense for Acquisition, Technology and Logistics provided written comments stating that DOD generally agreed with the facts in the report and supports our recommendation to SBA, including that SBA officials seek the active collaboration and support of DOD officials in the study. Separately, DOD provided us with several technical comments, which we incorporated into the final report as appropriate. DOD’s letter to us appears in appendix IV. A NASA Associate Deputy Administrator provided us with written comments stating that NASA agreed with our recommendation to SBA. He noted that when establishing the 5-percent goal for WOSBs, FASA specified that the goal be no less than 5 percent of the total value of all prime and subcontract awards for each fiscal year, and that it was not until November 1999 that OFPP interpreted the requirement to mean that there were separate 5-percent goals for prime and subcontracts. He recommended that we clarify our report on this point. We agree and clarified our report accordingly. NASA’s letter to us appears in appendix V. We received oral comments from the executive director of WBC, the president of WBENC, and both the current and the immediate past president of the Dallas-Ft. Worth Chapter of NAWBO. Each of these officials had positive comments about the report and no substantive or technical disagreements with its content. We received written comments from the deputy director, NWBC.
She said that she did not believe the draft report was responsive to the mandate and did not support our recommendation to SBA. She also provided a number of general and technical comments on the draft report. She was particularly concerned about our interpretation of certain requirements of the legislative mandate for this report. She commented that our methodology of focusing on the views of federal officials to discern the obstacles to, and suggestions for, increasing contract awards to WOSBs was flawed and indiscriminately repeated unverified assertions by federal officials. She said that we should have talked with women-owned businesses in the contracting arena and incorporated those discussions into the report together with the comments we received from federal employees involved in the federal procurement system. We disagree and believe the mandate clearly specifies that we solicit views from federal employees involved in the federal procurement system about their experiences pertaining to obstacles and suggestions for increasing the number of contracts awarded to WOSBs. In addition, we did contact representatives of four women-owned business organizations and solicited their views on any obstacles to and suggestions for increasing federal contracts with WOSBs, although this effort was not called for in the mandate. The deputy director of NWBC specifically said she disagreed with our recommendation that SBA conduct a study to improve the analytical basis of agency WOSB contracting goals and believes it is inconsistent with the mandate calling for GAO recommendations to “increase contract awards” to WOSBs. We agree that this recommendation is not strictly a means to increase awards to WOSBs. However, we found the evidence persuasive that establishing an analytical foundation for the goals was relevant to the credibility and performance of the program. Accordingly, we retained this recommendation. We believe more realistic agency-specific WOSB contracting goals can improve the integrity of this program, can provide a solid basis for holding agencies accountable for achieving those goals, and over time could contribute to increases in agency contracts with WOSBs. In her view, the mandate required us to make recommendations to increase such awards. The mandate does state that we are to make recommendations we consider appropriate after taking into consideration any suggestions we received during our discussions with federal contracting officials. However, an analysis of the benefits and effects of the various suggestions put forth was not within the scope of our review, so we cannot endorse any particular strategy. Nonetheless, considering the breadth of consensus on the potential merit of various measures, we added a discussion of some of the promising suggestions for increasing awards to WOSBs and included a recommendation for SBA to further study their potential. Finally, we incorporated as appropriate some of her general and technical comments. Her complete written comments are included in appendix VI. We performed our work from May 2000 through January 2001 in accordance with generally accepted government auditing standards. For details about our scope and methodology for this study, see appendix I. We are sending copies of this report to the Honorable Ted Stevens, Chairman, and the Honorable Robert C. Byrd, Ranking Member, Senate Committee on Appropriations; the Honorable C. W. Bill Young, Chairman, and the Honorable David R.
Obey, Ranking Minority Member, House Committee on Appropriations; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Spencer Abraham, Secretary of Energy; the Honorable Fred P. Hochberg, Acting Administrator of SBA; the Honorable Thurman M. Davis, Sr., Acting Administrator of GSA; the Honorable Daniel S. Goldin, Administrator of NASA; and the Honorable Mitchell Daniels, Director of OMB. We will also make copies available to others on request. If you have any questions about this report, please call me on (202) 512-2834. Key contributors to this report are acknowledged in appendix VII. Our objectives were to (1) review the federal procurement system for the 3 preceding fiscal years (1997 through 1999) to identify any trends in federal contracting with respect to women-owned small businesses (WOSBs), (2) solicit from federal employees involved in the federal procurement system any suggestions for increasing the number of federal contracts awarded to WOSBs, (3) report to the Congress on (a) any suggestions for increasing the number of federal contracts awarded to WOSBs that we consider appropriate after considering suggestions we received from the federal employees solicited per requirement 2 above, including any such means that incorporate the concepts of teaming and partnering, and (b) any barriers to the receipt of federal contracts by WOSBs and other small businesses that are created by legal or regulatory procurement requirements or practices, and (4) identify concerns of federal contracting officials about agencies’ ability to meet their WOSB contracting goals. To meet our first objective, we interviewed staff from the Federal Procurement Data Center (FPDC) and the Small Business Administration (SBA) to determine the availability of federal procurement data pertaining specifically to WOSBs. We acquired data from FPDC on overall federal procurement expenditures and procurement expenditures on small businesses, small disadvantaged businesses (SDBs), 8(a) businesses, and WOSBs for fiscal years 1996 (the first year the 5-percent WOSB governmentwide goal was in effect) through 1999. We acquired data from SBA on federal agencies’ prime contracting and subcontracting WOSB goals and the agencies’ related annual contract awards for fiscal years 1996 through 1999. We analyzed the FPDC data to determine the overall trends of federal procurement expenditures and the trends of federal contracting with WOSBs for the 4 fiscal years. We analyzed the SBA data to determine the percentage of agencies’ contracts awarded to WOSBs each year, the number of agencies meeting their WOSB goals, and the small business programs that WOSBs used to receive their contract awards. To meet our other three objectives, we selected four agencies that accounted for over 80 percent of all federal contracting expenditures for fiscal years 1997 through 1999: the Department of Defense (DOD), the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and the General Services Administration (GSA). We contacted procurement officials, agency Small Business Office representatives, and 30 contracting officers from these agencies to discuss any obstacles (legal or regulatory) they encountered in awarding contracts to WOSBs, their suggestions for increasing the number of contracts awarded to WOSBs, and their suggestions for meeting agencies’ annual WOSB contracting goals.
We conducted focused interviews and group discussions with officials from the Procurement Executive Council (PEC) and numerous agencies’ Offices of Small and Disadvantaged Business Utilization (OSDBU). We contacted officials from the Office of Federal Procurement Policy (OFPP), Office of Management and Budget (OMB), and SBA to discuss the policy and regulations governing federal contracting with WOSBs and their suggestions for increasing contracts awarded to WOSBs and meeting WOSB contracting goals. We contacted representatives from the Offices of Small and Disadvantaged Business Utilization of 20 major agencies to determine why the agencies’ WOSB goals were or were not met for fiscal years 1996 through 1999 and to identify any obstacles to and suggestions for awarding contracts to WOSBs and meeting WOSB contracting goals. These officials were selected for the most part based on their substantial experience in the field of federal contracting. In addition, we contacted representatives of four women business-owner organizations to gain their perspective on obstacles to and suggestions for increasing federal contract awards to WOSBs. We did not validate the obstacles identified, nor did we assess the benefits and effects of the suggestions we received from the federal contracting officials and others we interviewed. In addition, we analyzed the information gathered and applied insights from a body of related work we previously performed. Our work was performed from May 2000 through January 2001 in accordance with generally accepted government auditing standards. In addition, William R. Chatlos, Elizabeth R. Eisenstadt, Colin J. Fallon, Sherrill H. Johnson, Frederick Lyles, Dorothy M. Tejada, and Adam Vodraska made key contributions to this report.
Procurement regulations implemented in 1996 established a governmentwide goal that women-owned small businesses (WOSB) receive five percent of federal contract dollars. Although contracting with WOSBs has risen more than four times as fast as overall federal contracting since 1996, the goal of awarding five percent of federal contracts to WOSBs has not been met. More agencies succeeded in meeting the WOSB subcontracting goal than the prime contracting goal. GAO found that three federal agencies--the Department of Veterans Affairs, the Department of State, and the National Aeronautics and Space Administration--met or exceeded both goals in three of the four years it studied. Because the Department of Defense, which accounted for 64 percent of federal procurement in 1999, did not come close to achieving its five-percent goal, the governmentwide goal for prime contracting with WOSBs could not have been met even if all other federal agencies had reached their prime contracting goals. Government officials cited many obstacles to increasing federal contracting with WOSBs, including a reduced contracting workforce and the absence of a targeted WOSB contracting program. These officials also offered suggestions for increasing WOSB participation in federal contracting, including strengthened outreach programs and expanded access to contract financing.
Economic growth is one of the indicators by which the well-being of the nation is typically measured, although recent discussions have focused on a broader set of indicators, such as poverty. Poverty in the United States is officially measured by the Census Bureau, which calculates the number of persons or households living below an established level of income deemed minimally adequate to support them. The federal government has a long-standing history of assisting individuals and families living in poverty by providing services and income transfers through numerous and various types of programs. Economic growth is typically defined as the increase in the value of goods and services produced by an economy; traditionally this growth has been measured by the percentage rate of increase in a country’s gross domestic product, or GDP. The growth in GDP is a key measure by which policymakers estimate how well the economy is doing. However, it provides little information about how well individuals and households are faring. Recently there has been a substantial amount of activity in the United States and elsewhere to develop a comprehensive set of key indicators for communities, states, and the nation that go beyond traditional economic measures. Many believe that such a system would better inform individuals, groups, and institutions about the condition of the nation as a whole. Poverty is one of these key indicators. Poverty, both narrowly and more broadly defined, is a characteristic of society that is frequently monitored, and it can be defined and measured in a number of ways. The Census Bureau is responsible for establishing a poverty threshold amount each year; persons or families having income below this amount are, for statistical purposes, considered to be living in poverty. The threshold reflects estimates of the amount of money individuals and families of various sizes need to purchase goods and services deemed minimally adequate based on 1960s living standards, and it is adjusted each year using the consumer price index (a simple illustration of this adjustment appears later in this section). The poverty rate is the percentage of individuals, in total or as part of various subgroups in the United States, who are living on income below the threshold amounts. Over the years, experts have debated whether or not the way in which the poverty threshold is calculated should be changed. Currently the calculation accounts only for pretax income and does not include noncash benefits and tax transfers, which, especially in recent years, have comprised larger portions of the assistance package for those who are low-income. For example, food stamps and the Earned Income Tax Credit could provide a combined amount of assistance worth an estimated $5,000 for working adults with children who earn approximately $12,000 a year. If noncash benefits were included in a calculation of the poverty threshold, the number and percentage of individuals at or below the poverty line could change. In 1995, a National Academy of Sciences (NAS) panel recommended that changes be made to the threshold to count noncash benefits, tax credits, and taxes; deduct certain expenses from income, such as child care and transportation; and adjust income levels according to an area’s cost of living. In response, the Census Bureau published an experimental poverty measure in 1999 using the NAS recommendations in addition to its traditional measure, but, to date, Census has not changed the official measure. In 2005, close to 13 percent of the total U.S.
population—about 37 million people—were counted as living below the poverty line, a number essentially unchanged from 2004. Poverty rates differ, however, by age, gender, race and ethnicity, and other factors. For example:

Children: In 2005, 12.3 million children, or 17.1 percent of children under the age of 18, were counted as living in poverty. Children of color were at least three times more likely to be in poverty than white children: 3.7 million African-American children (34.2 percent) and 4 million Hispanic children (27.7 percent) lived below the poverty line, compared to 4 million white children (9.5 percent).

Racial and ethnic minorities: African-Americans and Hispanics have significantly higher rates of poverty than whites. In 2005, 24.9 percent of African-Americans (9.2 million) and 22 percent of Hispanics (9.4 million) lived in poverty, compared to 8.3 percent of whites (16.2 million).

Elderly: The elderly have lower rates of poverty than other groups. For example, 10.1 percent of adults aged 65 or older (3.6 million) lived in poverty.

Poverty rates also differ by geographical location and between urban and nonurban areas. Poverty rates in urban areas were nearly double those in suburbs, 17 percent compared to 9.3 percent. Poverty rates in the South were the highest at 14 percent; the West had a rate of 12.6 percent, followed by the Midwest with 11.4 percent and the Northeast at 11.3 percent. The U.S. government has a long history of efforts to improve the conditions of those living with severely limited resources and income. Presidents, Congress, and other policymakers have actively sought to help citizens who were poor, beginning as early as the 1850s and continuing through the welfare reform initiatives enacted in 1996. Over the years, the policy approaches used to help low-income individuals and families have varied. For example, in the 1960s federal programs focused on increasing the education and training of those living in poverty. In the 1970s, policy reflected a more income-oriented approach with the introduction of several comprehensive federal assistance plans. More recently, welfare reform efforts have emphasized the role of individual responsibility and behaviors in areas such as family formation and work to assist people in becoming self-sufficient. Although alleviating poverty and the conditions associated with it has long been a federal priority, approaches to developing effective interventions have sometimes been controversial, as evidenced by the diversity of federal programs in existence and the ways in which they have evolved over time. Currently, the federal government, often in partnership with the states, has created an array of programs to assist low-income individuals and families. According to a recent study by the Congressional Research Service (CRS), the federal government spent over $400 billion on 84 programs in 2004 that provided cash and noncash benefits to individuals and families with limited income. These programs cover a broad array of services: examples include income supports or transfers such as the Earned Income Tax Credit and TANF; work supports such as subsidized child care and job training; health supports and insurance through programs like the State Children’s Health Insurance Program (SCHIP) and Medicaid; and other social services such as food, housing, and utility assistance. Table 1 provides a list of examples of selected programs.
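As a simple illustration of the annual consumer-price-index adjustment of the poverty threshold mentioned earlier, the sketch below scales a threshold by the change in the index. The figures are hypothetical; the Census Bureau's actual computation applies the CPI to a schedule of thresholds that vary by family size and composition.

```python
def adjust_threshold(base_threshold: float, cpi_base: float, cpi_current: float) -> float:
    """Scale a poverty threshold by the change in the consumer price index."""
    return base_threshold * (cpi_current / cpi_base)

# Hypothetical figures: a $19,000 family threshold and a 3 percent rise in the CPI.
print(round(adjust_threshold(19_000, cpi_base=100.0, cpi_current=103.0)))  # 19570
```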
Economic research suggests that individuals living in poverty face an increased risk of adverse outcomes, such as poor health, criminal activity, and low participation in the workforce. The adverse outcomes that are associated with poverty tend to limit the development of the skills and abilities individuals need to contribute productively to the economy through work, and this, in turn, results in low incomes. The relationship between poverty and outcomes for individuals is complex, in part because most variables, like health status, can be both a cause and a result of poverty. The direction of the causality can have important policy implications. To the extent that poor health causes poverty, and not the other way around, alleviating poverty may not improve health. Health outcomes are worse for individuals with low incomes than for their more affluent counterparts. Lower-income individuals experience higher rates of chronic illness, disease, and disabilities, and also die younger than those who have higher incomes. As reported by the National Center on Health Statistics, individuals living in poverty are more likely than their affluent counterparts to experience fair or poor health or to suffer from conditions that limit their everyday activities (fig. 1). They also report higher rates of chronic conditions such as hypertension (high blood pressure) and elevated serum cholesterol, which can be predictors of more acute conditions in the future. Life expectancies for individuals in poor families as compared to nonpoor families also differ significantly. One study showed that individuals with low incomes had life expectancies 25 percent lower than those with higher incomes. Other research suggests that an individual’s household wealth predicts that individual’s level of functioning in retirement. Research suggests that part of the reason that those in poverty have poor health outcomes is that they have less access to health insurance, and thus less access to health care, particularly preventive care, than others who are nonpoor. Very low-income individuals were three times as likely as those with higher incomes to lack health insurance, which may lead to reduced access to and utilization of health care (fig. 2). Data show that those who are poor and have no health insurance access the health system less often than those who are either insured or wealthier, as measured by one indicator of health care access: visits to the doctor. For example, data from the National Center on Health Statistics show that children in families with income below the poverty line who were continuously without health insurance were three to four times more likely not to have visited a doctor in the last 12 months than children in similar economic circumstances who were insured (fig. 3). Research also suggests that a link between income and health exists independent of health insurance coverage. Figure 3 also shows that while children who are uninsured but in wealthier families visit the doctor fewer times than those who are insured, they still go more often than children who are uninsured and living in poverty. Some research examining government health insurance suggests that increased health insurance availability improves health outcomes. Economists have studied the expansion of Medicaid, which provides health insurance to those with low income.
They found that Medicaid’s expansion of coverage, which occurred between 1979 and 1992, increased the availability of insurance and improved children’s health outcomes. For example, one study found that a 30 percentage point increase in eligibility for mothers aged 15-44 translated into a decrease in infant mortality of 8.5 percent. Another study looked at the impact of health insurance coverage through Medicare on the health of the elderly and also found a statistically significant though modest effect. There is some evidence that variations in health insurance coverage do not explain all the differences in health outcomes. A study done in Canada found improvements in children’s health with increases in income, even though Canada offers universal health insurance coverage for hospital services, indicating that health insurance is only part of the story. Although there is a connection among poverty, having health insurance, and health outcomes, having health insurance is often associated with other attributes of an individual, thus making it difficult to isolate the direct effect of health insurance alone. Most individuals in the United States either purchase insurance on their own or are insured through their employer. If those who are uninsured have lower levels of education, as do individuals with low income, differences in health between the insured and uninsured might be due to the level or quality of education, and not necessarily to insurance. Another reason that individuals living in poverty may have more negative health outcomes is that they live and work in areas that expose them to environmental hazards such as pollution or substandard housing. Some researchers have found that because poorer neighborhoods may be located closer to industrial areas or highways than more affluent neighborhoods, there tend to be higher levels of pollution in lower-income neighborhoods. The Institute of Medicine concluded that minority and low-income communities had disproportionately higher exposure to environmental hazards than the general population and, because of their impoverished conditions, were less able to effectively change these conditions. The link between poverty and health outcomes may also be explained by lifestyle issues associated with poverty. A sedentary lifestyle; the use of alcohol and drugs; and lower consumption of fiber, fresh fruits, and vegetables are among the behaviors that have been associated with lower socioeconomic status. Cigarette smoking is also more common among adults who live below the poverty line than among those above it, about 30 percent compared to 21 percent. Similarly, problems with being overweight and obese are common among those with low family incomes, although most prevalent in women: women with incomes below 130 percent of the poverty line were 50 percent more likely to be obese than those with incomes above this amount. Figure 4 shows that people living in poverty are less likely to engage in regular, leisure-time physical activity than others and are somewhat more likely to be obese, and children in poverty are somewhat more likely to be overweight than children living above the poverty line. In addition, there is evidence to suggest a link among poverty, stress, and adverse health outcomes, such as compromised immune systems. While evidence shows how poverty could result in poor health, the opposite could also be true.
For example, a health condition could, over time, restrict an individual's employment, resulting in lower income. Additionally, the relationship between poverty and health outcomes could vary by demographic group. Failing health, for example, can be more directly associated with household income for middle-aged and older individuals than for children, since adults are typically the ones who work.

Just as research has established a link between poverty and adverse health outcomes, evidence suggests a link between poverty and crime. Economic theory predicts that low wages or unemployment makes crime more attractive, even with the risks of arrest and incarceration, because of the lower returns to an individual from legal activities. Empirical research, while more mixed, provides support for this prediction. For example, one study shows that higher levels of unemployment are associated with higher levels of property crime, but is less conclusive in predicting violent crime. Another study has shown that both wages and unemployment affect crime, but that wages play a larger role.

Research has found that peer influence and neighborhood effects may also lead to increased criminal behavior by residents. Having many peers who engage in negative behavior may reduce the social stigma surrounding that behavior. In addition, increased crime in an area may decrease the chances that any particular criminal activity will result in an arrest. Other research suggests that the neighborhood itself, independent of the characteristics of the individuals who live in it, affects criminal behavior. One study found that arrest rates were lower among young people from low-income families who were given a voucher to live in a low-poverty neighborhood, as opposed to their peers who stayed in high-poverty neighborhoods. The most notable decrease was in arrests for violent crimes; the results for property crimes, however, were mixed, with arrest rates increasing for males and decreasing for females.

Regardless of whether poverty is a cause or an effect, the conditions associated with poverty limit the ability of low-income individuals to develop the skills, abilities, knowledge, and habits necessary to fully participate in the labor force, which, in turn, leads to lower incomes. According to 2000 Census data, people aged 20-64 with income above the poverty line in 1999 were almost twice as likely to be employed as those with incomes below it. Some of the reasons for these outcomes include educational attainment and health status. Poverty is associated with lower educational quality and attainment, both of which can affect labor market outcomes. Research has consistently demonstrated that the quality and level of education attained by lower-income children are substantially below those for children from middle- or upper-income families. Moreover, high school dropout rates in 2004 were four times higher for students from low-income families than for those in high-income families. Those with less than a high school diploma have unemployment rates almost three times greater than those with a college degree: 7.6 percent compared to 2.6 percent in 2005. And the percentage of low-income students who attend college immediately after high school is significantly lower than that for their wealthier counterparts: 49 percent compared to 78 percent.
A significant body of economic research directly links adverse health outcomes, which are also associated with low incomes, with the quality and quantity of labor that an individual is able to offer to the workforce. Many studies that have examined the relationships among individual adult health, wages, labor force participation, and job choice have documented positive empirical relationships between health and wages, earnings, and hours of work. Although there is no consensus about the exact magnitude of the effects, the empirical literature suggests that poor health reduces the capacity to work and has substantive effects on wages, labor force participation, and job choice, meaning that poor health is associated with low income.

Research also demonstrates that poor childhood health has substantial effects on children's future outcomes as adults. Some research, for example, shows that low birth weight is correlated with lower health status later in life. Research also suggests that poor childhood health is associated with reduced educational attainment and reduced cognitive development. Reduced educational attainment may in turn have a causal effect not only on future wages, as discussed above, but also on adult health, if the more educated are better able to process health information or to make more informed choices about their health care, or if education makes people more "future oriented" by helping them think about the consequences of their choices. In addition, some research shows that poor childhood health is predictive of poor adult health and poor adult economic status in middle age, even after controlling for educational attainment.

The economic literature suggests that poverty not only affects individuals but can also create larger challenges for economic growth. Traditionally, research has focused on the importance of economic growth for generating rising living standards and alleviating poverty, but more recently it has examined the reverse: the impact of poverty on economic growth. In the United States, poverty can affect economic growth through its effects on the accumulation of human capital and on rates of crime and social unrest. While the empirical research is limited, it points to a negative association between poverty and economic growth, consistent with the theoretical literature's conclusion that higher rates of poverty can result in lower rates of growth.

Research has shown that the accumulation of human capital is one of the fundamental drivers of economic growth. Human capital consists of the skills, abilities, talents, and knowledge of individuals as used in employment. The accumulation of human capital is generally held to be a function of the education level, work experience, training, and healthiness of the workforce. Schooling at the secondary and higher levels is therefore a key component in building an educated labor force that is better at learning, creating, and implementing new technologies. Health is also an important component of human capital, as it can enhance workers' productivity by increasing their physical capacities, such as strength and endurance, as well as mental capacities, such as cognitive functioning and reasoning ability. Improved health increases workforce productivity by reducing incapacity, disability, and the number of days lost to sick leave, and by increasing the opportunities to accumulate work experience. Further, good health helps improve education by increasing levels of schooling and scholastic performance.
The accumulation of human capital can be diminished when significant portions of the population have experienced long periods of poverty or have lived in poverty at a critical developmental juncture. For example, recent research has found that the distinct slowdown in some measures of human capital development is most heavily concentrated among youth from impoverished backgrounds. When individuals who have experienced poverty enter the workforce, their contributions may be restricted or minimal, while others may not enter the workforce in a significant way. Not only is the productive capability of some citizens lost, but their purchasing power and savings, which could be channeled into productive investments, are forgone as well.

In addition to the effects of poverty on human capital, some economic literature suggests that poverty can affect economic growth to the extent that it is associated with crime, violence, and social unrest. According to some theories, when citizens engage in unproductive criminal activities, they deter others from making productive investments, or their actions force others to divert resources toward defensive activities and expenditures. The increased risk due to insecurity can unfavorably affect investment decisions, and hence economic growth, in areas afflicted by concentrated poverty. Although such theories link poverty to human capital deficiencies and criminal activity, the magnitude of these effects on economic growth in an economy such as that of the United States is unclear at this time. In addition, people living in impoverished conditions generate budgetary costs for the federal government, which spends billions of dollars on programs to assist low-income individuals and families. Alleviating these conditions would allow the federal government to redirect these resources toward other purposes.

While economic theory provides a guide to understanding how poverty might compromise economic growth, empirical researchers have not studied poverty as a determinant of growth in the United States as extensively. Empirical evidence on the United States and other rich nations is quite limited, but some recent studies support a negative association between poverty and economic growth. For example, some research finds that economic growth is slower in U.S. metropolitan areas characterized by higher rates of poverty than in those with lower rates of poverty. Another study, using data from 21 wealthy countries, found a similar negative relationship between poverty and economic growth.

Maintaining and enhancing economic growth is a national priority that touches on all aspects of federal decision making. As the nation moves forward in thinking about how to address the major challenges it will face in the twenty-first century, the impact of specific policies on economic growth will factor into decisions on topics as far-ranging as taxes, support for scientific and technical innovation, retirement and disability, health care, education, and employment. To the extent that empirical research can shed light on the factors that affect economic growth, this information can guide policymakers in allocating resources, setting priorities, and planning strategically for our nation's future. Economists have long recognized the strong association between poverty and a range of adverse outcomes for individuals, and empirical research, while limited, has also begun to help us better understand the impact of poverty on a nation's economic growth.
The interrelationships between poverty and various adverse social outcomes are complex, and our understanding of these relationships can lead to vastly different conclusions regarding appropriate interventions to address each specific outcome. Furthermore, any such interventions could take years, or even a generation, to yield significant and lasting results, as the greatest impacts are likely to be seen among children. Nevertheless, whatever the underlying causes of poverty may be, economic research suggests that improvements in the health, neighborhoods, education, and skills of those living in poverty could have impacts far beyond individuals and families, potentially improving the economic well-being of the nation as a whole.

We provided the draft report to four outside reviewers with expertise in the areas of poverty and economic growth. The reviewers generally acknowledged that our report covers a substantial body of recent economic research on the topic and did not dispute the validity of the specific studies included in our review. However, they expressed some disagreement over our presentation of this research. Some reviewers felt that the evidence directly linking poverty to adverse outcomes is more robust than implied by our summary and directed us to additional research that bolsters the link between poverty and poor health and crime. We did not incorporate this additional research into our findings, but we reviewed it and found it consistent with the evidence already incorporated in our summary. Other reviewers felt that our report implied a stronger relationship between poverty and adverse outcomes than is supported by the research. They felt that the report did not provide adequate information on the causes of poverty and on external factors that could be responsible for both poverty and adverse outcomes. In response to these comments, we made several revisions to the text to ensure that the information we presented was balanced. The reviewers also provided technical comments that we incorporated as appropriate.

Copies of this report are being sent to the Departments of Commerce, Health and Human Services, Justice, and Labor; appropriate congressional committees; and other interested parties. Copies will be made available to others upon request. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about matters discussed in this report, please contact me at (202) 512-7215 or at nilsens@gao.gov. Other contact and staff acknowledgments are listed in appendix II.

Adler, Nancy E., and Katherine Newman. "Socioeconomic Disparities in Health: Pathways and Policies." Health Affairs, Vol. 21, No. 2, 2002.
Aghion, Philippe, et al. "Inequality and Economic Growth: The Perspective of the New Growth Theories." Journal of Economic Literature, Vol. XXXVII, 1999.
Barro, Robert. "Inequality and Growth in a Panel of Countries." Journal of Economic Growth, 5 (1): 2000.
Barsky, Robert B., et al. "Preference Parameters and Behavioral Heterogeneity: An Experimental Approach in the Health and Retirement Study." Review of Economics and Statistics, 1997.
Burtless, G., and C. Jencks. "American Inequality and Its Consequences." In H. Aaron et al. (eds.), Agenda for the Nation. Washington, D.C.: Brookings Institution Press, 2003.
Card, David, Carlos Dobkin, and Nicole Maestas. "The Impact of Nearly Universal Insurance Coverage on Health Care Utilization and Health: Evidence from Medicare." National Bureau of Economic Research, Working Paper No. 10365. Cambridge, Massachusetts: National Bureau of Economic Research, 2004.
Case, Anne C., and Angus Deaton. "Broken Down by Work and Sex: How Our Health Declines." National Bureau of Economic Research, Working Paper No. 9821. Cambridge, Massachusetts: National Bureau of Economic Research, 2003.
Case, Anne, Angela Fertig, and Christina Paxson. "The Lasting Impact of Childhood Health and Circumstance." Journal of Health Economics, 24 (2): 2005.
Case, Anne, Darren Lubotsky, and Christina Paxson. "Economic Status and Health in Childhood: The Origins of the Gradient." American Economic Review, Vol. 92, No. 5, Dec. 2002.
Chay, Kenneth, and Michael Greenstone. "Air Quality, Infant Mortality, and the Clean Air Act of 1970." National Bureau of Economic Research, Working Paper No. 10053. Cambridge, Massachusetts: National Bureau of Economic Research, 2003.
Chiu, W. Henry. "Income Inequality, Human Capital Accumulation and Economic Performance." Economic Journal, 108, 1998.
Currie, Janet, and Jonathan Gruber. "Saving Babies: The Efficacy and Cost of Recent Changes in the Medicaid Eligibility of Pregnant Women." Journal of Political Economy, Vol. 104, No. 6, 1996.
Currie, Janet, and Rosemary Hyson. "Is the Impact of Health Shocks Cushioned by Socioeconomic Status? The Case of Low Birthweight." American Economic Review, Papers and Proceedings of the One Hundred Eleventh Annual Meeting of the American Economic Association, 89 (2): 1999.
Currie, Janet, and Brigitte Madrian. "Health, Health Insurance and the Labor Market." In O. Ashenfelter and D. Card (eds.), Handbook of Labor Economics, Vol. 3. Elsevier Science, 1999.
Currie, Janet, and Mark Stabile. "Socioeconomic Status and Child Health: Why Is the Relationship Stronger for Older Children?" American Economic Review, Vol. 93, No. 5, Dec. 2003.
Currie, Janet, and Matthew Neidell. "Air Pollution and Infant Health: What Can We Learn from California's Recent Experience?" Quarterly Journal of Economics, 120 (3), 2005.
Cutler, David, Angus Deaton, and Adriana Lleras-Muney. "The Determinants of Mortality." Journal of Economic Perspectives, Vol. 20, No. 3, 2006.
DeCicca, Philip, Donald Kenkel, and Alan Mathios. "Racial Difference in the Determinants of Smoking Onset." Journal of Risk and Uncertainty, Vol. 21, Iss. 2/3, 2000, p. 311.
———. "Putting Out the Fires: Will Higher Taxes Reduce the Onset of Youth Smoking?" Journal of Political Economy, Vol. 110, Iss. 1, 2002, p. 144.
Deaton, Angus. "Policy Implications of the Gradient of Health and Wealth." Health Affairs, Vol. 21, No. 2, 2002.
DeLong, J., et al. "Sustaining U.S. Economic Growth." In H. Aaron et al. (eds.), Agenda for the Nation. Washington, D.C.: Brookings Institution Press, 2003.
Dev Bhatta, Saurav. "Are Inequality and Poverty Harmful for Economic Growth: Evidence from the Metropolitan Areas of the United States." Journal of Urban Affairs, 23 (3&4): 2001.
Fallah, B., and M. Partridge. "The Elusive Inequality-Economic Growth Relationship: Are There Differences between Cities and the Countryside?" University of Saskatchewan Working Paper, February 2006.
Federal Reserve Bank of New York. "Unequal Incomes, Unequal Outcomes? Economic Inequality and Measures of Well-Being: Proceedings of a Conference Sponsored by the Federal Reserve Bank of New York." Economic Policy Review, Vol. 5 (3), September 1999. http://www.ny.frb.org/research/epr/1999n3.html.
Forbes, K. "A Reassessment of the Relationship between Inequality and Growth." The American Economic Review, 90 (4): 2000.
Freeman, Richard B. "Why Do So Many Young American Men Commit Crimes and What Might We Do About It?" Journal of Economic Perspectives, Vol. 10, No. 1, 1996.
Gould, Eric D., Bruce A. Weinberg, and David B. Mustard. "Crime Rates and Local Labor Market Opportunities in the United States: 1979-1997." The Review of Economics and Statistics, 84 (1): 2002.
Grogger, Jeff. "Market Wages and Youth Crime." Journal of Labor Economics, Vol. 16, No. 4, 1998.
Heckman, J., and A. Krueger. Inequality in America: What Role for Human Capital Policies? Cambridge, Massachusetts: The MIT Press, 2003.
Ho, P. "Income Inequality and Economic Growth." Kyklos, 53 (3): 2003.
Holzer, Harry, et al. "The Economic Costs of Poverty in the United States." Unpublished working paper, 2006.
Hsing, Yu. "Economic Growth and Income Inequality: The Case of the US." International Journal of Social Economics, 32 (7): 2005.
Katz, Lawrence F., Jeffrey R. Kling, and Jeffrey B. Liebman. "Moving to Opportunity in Boston: Early Results of a Randomized Mobility Experiment." The Quarterly Journal of Economics, May 2001.
Kling, Jeffrey R., Jens Ludwig, and Lawrence F. Katz. "Neighborhood Effects on Crime for Female and Male Youth: Evidence from a Randomized Housing Voucher Experiment." Quarterly Journal of Economics, Feb. 2005.
Lochner, Lance, and Enrico Moretti. "The Effect of Education on Crime: Evidence from Prison Inmates, Arrests and Self-Reports." American Economic Review, 2004.
Ludwig, Jens, Greg J. Duncan, and Paul Hirschfield. "Urban Poverty and Juvenile Crime: Evidence from a Randomized Housing-Mobility Experiment." Quarterly Journal of Economics, May 2001.
McGarry, Kathleen. "Health and Retirement: Do Changes in Health Affect Retirement Expectations?" Journal of Human Resources, Vol. XXXIX, 2004.
Mo, P. "Income Inequality and Economic Growth." Kyklos, 53 (3): 2000.
Newberger, R., and T. Riggs. "The Impact of Poverty on Location of Financial Establishments: Evidence from Across-Country Data." Profitwise News and Views, Federal Reserve Bank of Chicago, April 2006.
Panizza, Ugo. "Income Inequality and Economic Growth: Evidence from American Data." Journal of Economic Growth, 7 (1): 2002.
Partridge, Mark. "Is Inequality Harmful for Growth? Comment." The American Economic Review, 87 (5): 1997.
Persson, T., and G. Tabellini. "Is Inequality Harmful for Growth?" The American Economic Review, 84 (3): 1994.
Rank, Mark. One Nation, Underprivileged: Why American Poverty Affects Us All. Oxford: Oxford University Press, 2004.
Raphael, Steven, and Rudolf Winter-Ebmer. "Identifying the Effect of Unemployment on Crime." Journal of Law and Economics, Vol. XLIV, 2001.
Sallis, J.F., et al. "The Association of School Environments with Youth Physical Activity." American Journal of Public Health, Vol. 91, No. 4, 2001.
Sandy, Carola. "Essays on the Macroeconomic Impact of Poverty." Columbia University Libraries, http://digitalcommons.libraries.columbia.edu/dissertations/AAI9970273, 2000.
Sherman, Arloc. Wasting America's Future: The Children's Defense Fund Report on the Costs of Child Poverty. Boston, Massachusetts: Beacon Press Books, 1994.
Siegel, Michele J. "Measuring the Effect of Husband's Health on Wife's Labor Supply." Health Economics, 15 (6): 2006.
Smith, James P. "Healthy Bodies and Thick Wallets: The Dual Relation between Health and Economic Status." Journal of Economic Perspectives, Vol. 13, No. 2, 1999.
———. "The Impact of SES on Health over the Life-Course." RAND Working Paper Series. RAND Labor and Population, 2005.
Smith, James, and Raynard Kington. "Demographic and Economic Correlates of Health in Old Age." Demography, Vol. 34, No. 1, 1997.
Teles, Vladimir. "The Role of Human Capital in Economic Growth." Applied Economics Letters, 12, 2005.
U.S. Census Bureau. Income, Poverty, and Health Insurance Coverage in the United States: 2005. Washington, D.C.: 2006.
U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Health, United States, 2006. Washington, D.C.: 2006.
———. Health, United States, 1998. Washington, D.C.: 1998.
U.S. Department of Housing and Urban Development. Moving to Opportunity Demonstration Data. Washington, D.C.: May 2004.
———. Moving to Opportunity for Fair Housing. Washington, D.C.: Dec. 2000. http://www.hud.gov/progdesc/mto.cfm.
Voitchovsky, S. "Does the Profile of Income Inequality Matter for Economic Growth? Distinguishing Between the Effects of Inequality in Different Parts of the Income Distribution." Journal of Economic Growth, Vol. 10 (3): 2005.

Kathy Larin, Assistant Director, and Janet Mascia, Analyst-in-Charge, managed this assignment. Lawrance Evans, Ben Bolitzer, Ken Bombara, Amanda Seese, and Rhiannon Patterson made significant contributions throughout the assignment. Charles Willson, Susannah Compton, and Patrick DiBattista helped develop the report's message. In addition, Doug Besharov, Dr. Maria Cancian, Dr. Sheldon Danziger, and Dr. Lawrence Mead reviewed and provided comments on the report.
In 2005, 37 million people, approximately 13 percent of the total population, lived below the poverty line, as defined by the Census Bureau. Poverty imposes costs on the nation in terms of both programmatic outlays and productivity losses that can affect the economy as a whole. To better understand the potential range of effects of poverty, GAO was asked to examine (1) what the economic research tells us about the relationship between poverty and adverse social conditions, such as poor health outcomes, crime, and labor force attachment, and (2) what links economic research has found between poverty and economic growth. To answer these questions, GAO reviewed the economic literature by academic experts, think tanks, and government agencies, and reviewed additional literature by searching various databases for peer-reviewed economic journals, specialty journals, and books. We also provided our draft report for review by experts on this topic.

Economic research suggests that individuals living in poverty face an increased risk of adverse outcomes, such as poor health and criminal activity, both of which may lead to reduced participation in the labor market. While the mechanisms by which poverty affects health are complex, some research suggests that adverse health outcomes can be due, in part, to limited access to health care as well as to greater exposure to environmental hazards and engaging in risky behaviors. For example, some research has shown that increased availability of health insurance such as Medicaid for low-income mothers led to a decrease in infant mortality. Additionally, exposure to higher levels of air pollution from living in urban areas close to highways can lead to acute health conditions. Data suggest that engaging in risky behaviors, such as tobacco and alcohol use, a sedentary lifestyle, and low consumption of nutritional foods, can account for some health disparities between lower- and upper-income groups. The economic research we reviewed also points to links between poverty and crime. For example, one study indicated that higher levels of unemployment are associated with higher levels of property crime. The relationship between poverty and adverse outcomes for individuals is complex, in part because most variables, like health status, can be both a cause and a result of poverty. These adverse outcomes affect individuals in many ways, including limiting their development of the skills, abilities, knowledge, and habits necessary to fully participate in the labor force.

Research shows that poverty can negatively affect economic growth by affecting the accumulation of human capital and rates of crime and social unrest. Economic theory has long suggested that human capital (that is, the education, work experience, training, and health of the workforce) is one of the fundamental drivers of economic growth. The conditions associated with poverty can work against this human capital development by limiting individuals' ability to remain healthy and develop skills, in turn decreasing the potential to contribute talents, ideas, and even labor to the economy. An educated labor force, for example, is better at learning, creating, and implementing new technologies. Economic theory suggests that when poverty affects a significant portion of the population, these effects can extend to the society at large and produce slower rates of growth.
Although historically research has focused mainly on the extent to which economic growth alleviates poverty, some recent empirical studies have begun to demonstrate that higher rates of poverty are associated with lower rates of growth in the economy as a whole. For example, areas with higher poverty rates experience, on average, slower per capita income growth rates than low-poverty areas.
The A-10 is a single-seat, fixed-wing platform specifically designed for close air support and defeating enemy armor. According to the Air Force, this fourth-generation fighter achieved its initial operational capability in 1977, but the aircraft has received many upgrades since that time, including a major modernization in 2007. The Air Force describes the A-10 as a highly accurate and survivable weapons-delivery platform with excellent maneuverability at low air speeds and altitude, a wide combat radius, and extended loiter times. Figure 1 shows a picture of an A-10. As of April 2016, the Air Force A-10 inventory includes 283 aircraft stationed across the United States and also in South Korea, as shown in figure 2. The Air Force assigns three primary missions and two secondary missions to the A-10, which are described in table 1.

The A-10 is one of a number of DOD aircraft, both manned and unmanned, that conduct the CAS mission. Besides the A-10, the Air Force currently has two other fighter aircraft that conduct the CAS mission (F-16 and F-15E) and plans to use the F-35 for this mission in the future. The Air Force also uses bombers (B-1, B-52), special operations aircraft (AC-130), and remotely piloted aircraft (MQ-1, MQ-9) to conduct CAS. Other DOD assets used for CAS include the F/A-18 (Navy/Marine Corps), AV-8 (Marine Corps), AH-1 (Marine Corps), and AH-64 (Army). Figure 3 includes examples of CAS-capable aircraft in the Air Force and other services.

Joint Terminal Attack Controllers provide ground commanders with recommendations on the use of CAS and its integration with ground operations. According to joint doctrine, Joint Terminal Attack Controllers are qualified (certified) servicemembers who, from a forward position, direct the action of combat aircraft engaged in CAS and other offensive air operations. Forward Air Controllers (Airborne), or FAC(A)s, are also qualified to exercise control of aircraft engaged in CAS, but FAC(A)s exercise control from the air while Joint Terminal Attack Controllers typically exercise control from ground positions. In short, both are responsible for ensuring that aircraft strike the target accurately without hitting friendly troops. DOD and partner nations have Memorandums of Agreement that standardize Joint Terminal Attack Controller and FAC(A) certification and qualification requirements, including identifying minimum training and performance standards. Joint Terminal Attack Controllers and FAC(A)s are the only personnel authorized to control the maneuver of, or grant weapons release clearance to, attacking aircraft.

The Air Force and DOD do not have the information they need on the full implications of A-10 divestment, including the gaps that could be created by divestment and options for mitigating any potential gaps. Divestment decisions can have far-reaching consequences and should be based on quality information. The Air Force's recent proposal to postpone full A-10 divestment until 2022 mitigates some near-term capacity gaps, but divestment may still create capacity gaps and gaps in the service's ability to conduct missions currently carried out by the A-10. Moreover, the Air Force has not yet clearly identified the gaps and resulting risks that could be created by A-10 divestment, so it is not well positioned to determine appropriate mitigation strategies.
Further, DOD may face similar decisions to divest other weapon systems before the end of their service lives in the future, and it does not have guidance to ensure that the services and the department overall are collecting quality information to inform these decisions.

Because they can have far-reaching cost and operational consequences, major divestment decisions, like the original decisions to invest in platforms, should be based on quality information. With regard to DOD's divestment actions that would affect military capabilities, this quality information should, among other things, clearly identify any gaps created by the action and strategies for mitigating them. The Air Force has numerous policy documents to guide investment decisions; by contrast, it does not have guidance identifying the factors it must consider before choosing to divest a major weapon system before the end of its expected service life. Although the Air Force lacks guidance specific to divestment, it has guidance recognizing that divestment decisions, like investment decisions, can have major financial and nonfinancial consequences for an organization and so should be carefully considered. Similarly, we were not able to find DOD guidance specifically identifying such factors. However, DOD guidance and GAO knowledge-based criteria identify key factors that, while developed for investment decisions, are applicable to making divestment decisions. One key factor is having clear requirements, which (1) provide a baseline to identify gaps and associated risks and (2) inform decisions on how best to address the gaps. The Navy has also recognized the similarities between investment and divestment decisions, and it has issued guidance requiring that senior Navy leaders and Congress be provided specific information to support proposals to divest a vessel before the end of its expected service life. Specifically, these proposals must describe the reason for the divestment, identify any resulting capability gaps, and recommend strategies for mitigating those gaps.

The Air Force's current A-10 divestment proposal delays the loss in fighter capacity that would have occurred under prior proposals. If implemented, the current proposal would result in the complete divestment of the A-10 by 2022, 3 years later than proposed in the fiscal years 2015 and 2016 budget requests. The Air Force's fiscal year 2014 budget request anticipated retaining all 283 A-10s through at least 2035. However, Air Force leaders have recently testified that the service must start divesting the A-10 fleet after fiscal year 2017 because, without an increase in personnel and associated funding, the Air Force does not have the manpower needed to support both the A-10 and F-35 fleets. Figure 4 provides a comparison of the 2015, 2016, and 2017 divestment proposals.

Changes in the current operational environment, specifically the rise of the Islamic State of Iraq and the Levant (ISIL) and Russia's provocations, led to increased demands for fighter aircraft and also affected the decision to temporarily defer A-10 divestment, according to the Air Force. This decision was made in consultation with the combatant commanders, according to Air Force testimony. Since the Air Force originally proposed divesting its A-10s, units have deployed to U.S. European Command (EUCOM), U.S. Central Command (CENTCOM), and U.S. Pacific Command.
The A-10 brings useful and unique capabilities to the battlefield, according to officials from the commands. The Secretary of Defense noted that the A-10 has been devastating ISIL. Figure 5 shows A-10s returning from a deployment to EUCOM.

Although the 2017 A-10 divestment proposal provides more near-term fighter capacity than the two prior proposals, implementation of this latest proposal could still lead to near-term capacity gaps. According to a DOD summary of its fiscal year 2017 budget proposal, the Air Force plans to replace A-10 squadrons one for one with F-35 squadrons in order to mitigate the drop in fighter capacity projected under the original A-10 divestment proposal. However, Air Force documentation reveals that the loss of A-10 squadrons will outpace the F-35 squadron gain, with eight A-10 squadrons divested by the end of the 5-year budget plan but only six F-35 squadrons stood up.

North Korea remains one of the most challenging security problems for the United States and its allies and partners in the region, according to DOD. DOD reports that North Korea's large, forward-positioned military can initiate an attack against South Korea with little or no warning. In April 2015, the U.S. Forces Korea commander testified that having very little warning of a provocation was the command's top concern. In response to questions, the commander also stated that loss of the A-10 would create a gap, primarily in the ability to defeat the North Korean armor threat. He also testified that he had been assured that, should the A-10 unit based in South Korea be divested, it would be replaced by another squadron in South Korea. However, the current Air Force proposal would divest the A-10 squadron in South Korea in fiscal year 2019 without replacement.

We found that the full extent to which the divestment proposals create capacity gaps and increase risk is difficult to determine, because DOD does not have a clearly established Air Force fighter aircraft capacity requirement. However, all three A-10 divestment proposals would contribute to a decline in Air Force fighter capacity when compared to the Air Force's fiscal year 2014 budget plans, which called for the Air Force to maintain its A-10s through 2035. In March 2016, the Air Force began a major force structure review that will include an examination of its fighter capacity requirements, according to Air Force officials. Until it has such a baseline, the Air Force cannot determine the full extent of the capacity gaps and associated risks it will incur under its current A-10 divestment proposal or the effectiveness or necessity of any mitigation strategies. Figure 6 shows the Air Force's planned fighter and bomber inventories from 2017 through 2046.

The Air Force has not comprehensively assessed the potential mission capability gaps caused by A-10 divestment or the effects of divestment on its ability to support Joint Terminal Attack Controller training. Though the Air Force and DOD are taking steps to mitigate potential gaps, they have not established clear requirements for the missions that the A-10 performs and, in the absence of these requirements, have not fully identified the capability gaps and risks that could result from A-10 divestment or the effectiveness or necessity of the Air Force's and the department's mitigation strategies.
The following sections provide summary information, based on our analysis, about the mission capabilities the A-10 and its pilots currently provide; about efforts to mitigate potential gaps that could result from A-10 divestment; and about the uncertainty of the effectiveness of mitigation efforts due to a lack of quality information, such as specific mission requirements. The missions and A-10 contributions are discussed more expansively in appendix IV.

Over the last 12 years, ground commanders have relied primarily on air support rather than artillery or other ground-based systems for their combat fire support, according to the Joint Staff. CAS provides ground commanders with flexible and responsive support and, under some circumstances (including airborne assaults, counterinsurgency operations, and special operations), may be the only fire support available. Though many Air Force platforms have performed CAS in the past decade, A-10 pilots are considered the Air Force's CAS experts due to the amount and depth of the CAS training they accumulate over their careers. The A-10 CAS focus, which begins at initial qualification training and extends to yearly training and advanced training, far exceeds the CAS training of other Air Force pilots. According to Air Force and combatant command officials, the CAS expertise that resides in the A-10 community is particularly important in contested environments, such as Korea, where a wider skillset is needed to effectively provide CAS. Table 2 summarizes the CAS training flight (sortie) requirements for pilots of Air Force CAS-capable fighters, along with the mission priority of CAS for each aircraft type.

The A-10 aircraft also has unique capabilities not replicated in other Air Force fighters such as the F-16 and F-35. CAS experts convened by the Air Force in 2015 concluded that A-10 divestiture creates a gap, because the Air Force is losing a high-capacity and cost-efficient ability to kill armored, moving, and close-proximity targets in poor weather conditions. However, CAS needs can vary considerably according to circumstances, and in certain cases different platforms have advantages over the A-10, according to Air Force officials. For example, a B-1 bomber has a longer loiter time and larger bomb capacity than the A-10, which is advantageous in some circumstances.

Forward Air Controller (Airborne) (FAC(A)) pilots are CAS experts who help efficiently manage air-to-ground operations. Although largely not used during operations in Iraq and Afghanistan, FAC(A)s are invaluable during contested CAS operations because they perform reconnaissance and develop battlefield awareness under conditions where intelligence and communications will be much more limited, according to Air Force officials. FAC(A)s also play an important role in cases where there are not enough qualified Joint Terminal Attack Controllers authorized to control coalition and allied aircraft, according to Air Force officials. Though all DOD FAC(A)s are required to meet minimum training requirements for certification and qualification retention, as established in a memorandum of agreement, Air Force FAC(A) training requirements are higher for A-10 pilots than for those of other Air Force aircraft. A-10 FAC(A)s are required to attain mission proficiency, while F-16 FAC(A)s and future F-35 FAC(A)s are only required to have familiarity with the mission. Further, the A-10 community spends significantly more effort developing and retaining FAC(A) expertise.
For example, A-10 FAC(A)s are required to conduct four times the yearly training sorties of F-16 FAC(A)s and almost triple those of future F-35 FAC(A)s. Moreover, A-10 pilots currently constitute approximately half of the Air Force's FAC(A)s.

According to Air Force officials, combat search and rescue (CSAR) is an unpredictable mission, distinct from other rescue missions in that it is often conducted with little warning, deep in hostile territory, and requires searching for the survivor's location. CSAR-Sandy is an important part of the overall CSAR mission, requiring pilots specifically trained to coordinate rescue missions, escort helicopters, and suppress enemy forces. According to Air Force and combatant command officials, there is an enduring requirement for CSAR, including CSAR-Sandy. The A-10 is currently the only DOD platform assigned to this mission, and every combat-coded squadron has CSAR-Sandy qualified pilots. Training requirements for CSAR-Sandy qualification are very high due to the complexity of the mission. Gaining and retaining CSAR-Sandy qualification is also resource intensive because it requires many aircraft, according to Air Force and combatant command officials. According to Air Force officials, the A-10 platform has certain capabilities that make it well suited for the CSAR-Sandy mission, including long loiter time, communications capabilities, survivability, forward-firing munitions, and the ability to fly low and slow. The Air Force assessed the feasibility of using F-16s or F-15Es for the CSAR-Sandy mission and concluded that aircrews could not conduct both the training necessary for this mission and the training required for their existing missions. The assessment, completed in September 2015, recommended that F-15Es or F-16s not be tasked with the Sandy role without adequate training and also noted that the aircraft would require a number of upgrades for the CSAR-Sandy mission. The Air Force has not formally determined what aircraft, if any, will replace the A-10 for this mission, according to Air Force officials. Figure 7 illustrates the CSAR-Sandy roles, and a further description can be found in appendix IV, which discusses missions conducted by the A-10.

Counter Fast Attack/Fast Inshore Attack Craft (CFF) is a secondary mission for a number of Air Force fighters, including the A-10, but we found it is an important mission in several theaters. Potential adversaries could use groups of small boats employing swarming tactics to attack maritime assets. In June 2015, we reported that Air Force analysis indicated that the A-10 is the best single Air Force platform for the CFF mission. Further, an Air Force analysis that looked at future risks concluded that divestment of the A-10 was a risk driver in one of the scenarios studied due to the loss of its CFF capability.

Air Interdiction is a very broad mission category and a secondary mission for A-10s. However, A-10s provide important Air Interdiction capabilities, according to combatant command officials. According to the officials, the A-10's long loiter time, large weapons load, and diverse set of weapons make it a critical asset. Further, focused low-altitude pilot training, combined with the A-10's flight characteristics, enables A-10s to operate effectively at low altitude in adverse weather conditions, which is critical in locations where the weather is often unfavorable, according to the officials.
According to Air Force officials, Joint Terminal Attack Controllers provide a vital link between the Army and the Air Force, directly calling in air support as well as advising and providing expertise to ground commanders on air support. Demand for Joint Terminal Attack Controllers has grown significantly since 2003 and exceeds supply. The Air Force has the largest number of Joint Terminal Attack Controllers in DOD, followed by Special Operations Command, according to the Joint Staff.

The A-10 community provides significant support for Air Force Joint Terminal Attack Controller certification and qualification training, and A-10 divestment could exacerbate existing training challenges. From March 2010 to March 2016, A-10s provided 44 percent of aircraft support for Air Force Joint Terminal Attack Controller certification training, according to Air Force data. The Air Force does not centrally track qualification training, but Air Force officials said that the level of A-10 support has been similar to certification training support. The quality of Joint Terminal Attack Controller training support provided by the A-10 community is also better than the support provided by other Air Force platform communities, according to DOD officials. The A-10's wide variety of ordnance gives Joint Terminal Attack Controllers more options and allows them to deal with a larger variety of situations than they would using other aircraft. DOD officials involved with Joint Terminal Attack Controller training told us that the A-10 community generally provides better quality training opportunities because of its high level of CAS expertise and knowledge of the standards, as well as its deeper understanding of how ground forces operate. The A-10 community is also highly sought after by partner nations for their own Joint Terminal Attack Controller training, which is an important component of theater cooperation efforts, according to officials from EUCOM and U.S. Pacific Command.

The Air Force recognizes that A-10 divestment could affect the missions currently performed by the A-10 and is taking a number of mitigation steps, including establishing an Air Force group focused on CAS, developing new weapons, and addressing the needs of Joint Terminal Attack Controllers. Although the Air Force will begin divesting its A-10 units in fiscal year 2018 under the current proposal, mitigation efforts are still being developed. Additionally, the Air Force has not yet determined the extent to which it will change or reprioritize training requirements for aircrew of other aircraft as a result of A-10 divestment, a decision that could significantly affect a range of missions. Examples of planned mitigation steps are described in table 3.

Another step the Air Force could take to mitigate the loss in expertise associated with A-10 divestment would be to change or reprioritize training requirements for aircrew of other aircraft. However, the Air Force has no concrete plans to do so, and the delay in A-10 divestment has removed some of the urgency to develop such plans, according to Air Force officials. Changing training requirements comes with a cost, however. Air Force officials cautioned that aircrews have limited time in which to conduct their training, and in recent years aircrews have struggled to complete their expected training. Units have had low completion rates for their secondary mission training and, in many cases, for their primary mission training requirements as well.
If pilots who fly multi-role aircraft were required to increase their training in CAS, FAC(A), CSAR, CFF, and/or Air Interdiction, they would have less time available to train for other missions, and completion rates for training on these other missions would likely fall even lower than they are today. Since A-10 pilots train more on CAS than pilots of any other platform and have a higher training requirement to gain proficiency, transferring those responsibilities to another platform or platforms would represent a substantial addition to existing training requirements for those platforms. Moreover, CAS is a lower-priority mission for the Air Force compared to others, making it less likely that the Air Force would increase CAS training for multi-role fighters.

The Air Force's ability to determine the effectiveness and necessity of its mitigation strategies is currently limited, because it does not have clear requirements for CAS and the other missions performed by the A-10, though it has recently begun examining them. One of the difficulties in establishing a CAS requirement is that CAS is a fluid mission that can vary considerably according to circumstances. Unlike some missions where there are defined targets in known locations, CAS depends on the actions and interactions of enemy and friendly ground forces, making it more difficult to analyze, according to Air Force and combatant command officials. The Army, the Air Force's prime CAS customer, also has not defined its CAS needs, according to Air Force officials. However, Army officials stated that the CAS requirements developed by the Army in collaboration with the Air Force in the 1980s continue to apply, even as the Army works with the Air Force on several efforts to further define future CAS requirements. Further, the Air Force has not defined its FAC(A) requirements or CSAR requirements. The Air Force, in consultation with the combatant commands, manages current requirements by assigning missions, such as CAS, FAC(A), and CSAR, and mission priorities to its current force, according to Air Force officials. However, the Air Force has not clearly defined its future needs in these mission areas. As discussed earlier, in March 2016 the Air Force initiated a comprehensive force structure study that will include examining its requirements for CAS and the other missions performed by the A-10, according to Air Force officials.

Clear requirements are an example of the type of quality information the Air Force would need to fully identify the capacity or capability gaps and risks that could result from A-10 divestment and to determine appropriate mitigation strategies. Though Air Force officials stated that A-10 divestment was the best option available under its budget circumstances, the absence of clear requirements hinders the ability of the Air Force to analyze its gaps and prioritize its decisions. The Air Force has identified potential challenges associated with A-10 divestment. For example, the Air Force has identified a need for preserving CAS culture and developing a light attack CAS aircraft. The CAS experts convened by the Air Force in 2015 stated there will be a CAS capability and capacity gap following the divestment of the A-10. However, the Air Force has been hampered in its ability to determine the significance of any reductions in CAS capabilities that result from A-10 divestment, because it does not have a requirement to assess against. This, in turn, limits the Air Force's ability to weigh risks and choose appropriate mitigation strategies.
For example, an examination of CAS requirements could shed light on the relative importance of the capability to destroy moving and armored targets, something the A-10 does well. Should DOD determine that this is not an important capability, the Air Force could focus its limited resources on developing higher-priority capabilities. The Air Force also has not made decisions regarding the extent to which limited training resources from other fighter aircraft need to be shifted to missions currently performed by the A-10. Such decisions are difficult to weigh without understanding the reductions in capabilities and the potential gaps and risks created in these mission areas by A-10 divestment. The lack of clarity on the risks posed by A-10 divestment is evidenced by the fact that the decision was made without fully understanding the near-term impact on combatant command missions and before key issues, including the feasibility of CSAR-Sandy replacements, were studied. Without clearly understanding the capability gaps and risks that could result from A-10 divestment before again proposing to divest the A-10, it is unclear how effective or necessary the Air Force's mitigation strategies will be.

DOD may be faced with similar divestment decisions as it seeks to best balance current capacity and capability demands with future needs. The A-10 divestment proposal is a case study of this kind of difficult decision. The Navy faced a similar situation in 2012. In June 2014, we found that the Navy, although it has a policy to guide divestment decisions, had not followed that policy when it decided in 2012 to decommission seven cruisers and two dock-landing ships well prior to the end of their service lives. The Navy's policy requires a decision memorandum in such circumstances to address why decommissioning the ships is in the best interest of the Navy and to identify mitigation strategies for any resulting capability gaps. Navy officials told us that they did not prepare the decision memorandum because they were under time pressure to identify budget savings. As with the A-10, Congress did not support the Navy's decision. We also found in June 2014 that the Navy policy does not require the Navy to evaluate risks associated with shortfalls in the number of ships (i.e., capacity) in making decommissioning decisions. In this case, the Navy recommended decommissioning large surface combatants and amphibious ships when it was simultaneously reporting shortfalls in those same ship types to support its shipbuilding plans.

Overall, DOD does not have guidance to help ensure that the services are collecting the quality information needed to inform decisions to divest major weapon systems before the end of their service lives. As the Air Force and Navy examples indicate, the services have made divestment proposals to emphasize modernization efforts without fully understanding and documenting the potential operational effects of those proposals. Without quality information that fully identifies the capability and capacity gaps and associated risks resulting from divestment, the services and DOD will lack information they need to develop effective mitigation strategies, and DOD may not be well positioned to balance current demands and future needs.

Overall, the Air Force did not meet all best practices in estimating cost savings from A-10 divestment, which affected its ability to determine comparable alternatives.
We found that the cost estimates supporting the Air Force's fiscal year 2015 divestment proposal partially met best practices for being comprehensive, minimally met best practices for being well-documented and accurate, and did not meet best practices for being credible. Because the Air Force's cost estimate did not meet best practices in these areas, the 2015 proposal potentially overstated or understated the actual savings from A-10 divestment. Additionally, Air Force officials stated they used similar practices to estimate cost savings when developing budget requests for fiscal years 2016 and 2017, thereby continuing to potentially overstate or understate the actual savings from A-10 divestment.

As we reported in June 2015, the Air Force did not fully assess the cost savings and implications associated with the A-10 divestment or its alternatives. In its fiscal year 2015 budget request, the Air Force estimated that divesting the A-10 would allow it to save $4.2 billion over its 5-year budget plan. However, we found the Air Force did not include certain costs related to the A-10 divestment. For example, A-10 divestment could increase the operational tempo of remaining CAS-capable aircraft, which could increase costs related to extending the service lives of those remaining aircraft. To the extent that this occurs, it would reduce the actual savings from the A-10 divestiture below the estimated $4.2 billion. Alternatively, we found that savings could be greater than $4.2 billion, because the Air Force estimate did not include potentially significant costs for things such as software upgrades or structural enhancements that it could incur if it were to keep the A-10. In addition, we found in June 2015 that, in presenting its budget to Congress, the Air Force provided a number of alternatives to A-10 divestment that it said would also result in approximately $4.2 billion in cost savings. However, these alternatives were rough estimates that were illustrative only and not fully considered as alternatives to A-10 divestment, according to Air Force officials.

When we compared the Air Force's estimate to best practices, we found it did not meet all best practices when estimating savings from the A-10 divestment for its fiscal year 2015 budget. The GAO Cost Estimating and Assessment Guide lists 20 best practices for a reliable cost estimate. We collapsed these best practices into four general characteristics for sound cost estimating, specifically that a sound cost estimate be (1) comprehensive, (2) well-documented, (3) accurate, and (4) credible. While the cost guide is typically used across the federal government to support decisions for investments in capital programs, the best practices in this guide also apply to cost estimates for other purposes, including decisions to fund one program over another. Since the Air Force used estimated cost savings as part of its justification for retiring the A-10 among other divestment alternatives, we believe these best practices are applicable for assessing the reliability of the Air Force's A-10 cost savings estimate. Table 4 provides a summary of our assessment of the Air Force's A-10 cost estimate against these four characteristics. The Air Force used cost estimation practices similar to those used for the fiscal year 2015 budget process to estimate A-10 cost savings for the fiscal years 2016 and 2017 budgets, according to Air Force officials.
In its fiscal year 2016 budget request, the Air Force estimated that A-10 divestment would amount to $4.7 billion in savings over its 5-year budget plan. In its fiscal year 2017 budget request, the Air Force estimated that retaining the A-10 under its revised divestment plan would cost $3.4 billion over 5 years. By applying similar cost estimation practices from its fiscal year 2015 budget process, the Air Force's fiscal year 2016 and 2017 A-10 divestment cost estimates may continue to overstate or understate the actual figure and, like the 2015 estimate, may not be reliable. As we reported in June 2015, the A-10 divestment proposal emerged from the Air Force's budget development process for fiscal year 2015, which was driven by DOD and Air Force guidance to reduce top-line funding. Following DOD strategic and budget guidance, the Air Force sought to prioritize, among other things, fifth-generation aircraft like the F-35, readiness, and multi-role aircraft, while placing a lower priority on single-role aircraft like the A-10. According to Air Force officials, significant research, operational analysis, and strategic planning are combined during the budget development process to give senior leadership the correct information to make major force structure decisions, such as divesting aircraft (see app. I for details of the budget development process that led to the fiscal year 2015 A-10 divestment proposal). Although the A-10 divestment cost savings estimate follows some cost estimating best practices, it largely was developed using budget guidance. Air Force and DOD budget guidance documents do not require cost estimates for divestments, and therefore the A-10 cost savings estimate did not follow best practices or include certain elements, such as all life-cycle costs or a sensitivity analysis that identifies a range of possible costs based on varying assumptions. According to Air Force cost estimation guidance, it is understandable that decision makers need point estimates and not a range of possible costs when preparing and managing a budget. However, by making a major divestment decision within the constraints of its budget development process, the Air Force and DOD based the proposal to retire the A-10 on a point estimate, without insight into the probability of achieving those savings (the sketch below illustrates the difference between a point estimate and a sensitivity-based range). Overall, since the A-10 divestment estimate did not meet all best practices, the Air Force cannot ensure that it has a reliable estimate of the cost savings it could generate by divesting the A-10. Without developing a reliable cost estimate based on best practices, the Air Force is at risk of continuing to make decisions regarding the A-10 without full knowledge of the cost implications. As we reported in June 2015, the Air Force presented a number of alternative options that would result in similar savings as A-10 divestment, with the highest risk option being deferring some F-35 procurement. By developing a high-quality, reliable cost estimate of savings from A-10 divestment, the Air Force would have a sound basis from which to develop and compare alternatives and their associated risks that achieve similar savings or make adjustments to other fighter-attack programs or mission areas like air superiority or global strike. In addition, we did not find DOD-wide budget guidance requiring cost estimates for divestment decisions on other major weapon systems.
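To illustrate the distinction drawn above between a point estimate and a sensitivity analysis, the following is a minimal sketch in Python. The dollar swings for the two assumptions (service-life extension costs for remaining CAS-capable aircraft and avoided A-10 upgrade costs) are hypothetical placeholders, not Air Force data; only the $4.2 billion point estimate comes from the fiscal year 2015 budget request.

    import itertools

    POINT_ESTIMATE = 4.2  # $ billions; the single figure the budget process produced

    # Hypothetical swings (in $ billions) for two assumptions discussed in this report.
    service_life_extension_cost = [0.0, 0.3, 0.6]  # extending remaining CAS-capable aircraft
    avoided_upgrade_cost = [0.0, 0.4, 0.8]         # A-10 upgrades avoided by divesting

    # A sensitivity analysis reports the spread of outcomes across varying
    # assumptions, not just a single point estimate.
    outcomes = [POINT_ESTIMATE - extension + avoided
                for extension, avoided in itertools.product(
                    service_life_extension_cost, avoided_upgrade_cost)]

    print(f"point estimate: ${POINT_ESTIMATE:.1f} billion")
    print(f"sensitivity range: ${min(outcomes):.1f} to ${max(outcomes):.1f} billion")

Under these illustrative assumptions, the estimated 5-year savings would range from $3.6 billion to $5.0 billion rather than being the single $4.2 billion figure, the kind of insight into uncertainty that a point estimate alone cannot provide.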
Without such DOD-wide guidance, DOD may be unable to develop high-quality, reliable cost estimates of savings when divesting other major weapon systems in the future and may have difficulty identifying alternatives for achieving similar cost savings. As late as fiscal year 2014, the Air Force had planned to keep its A-10 fleet through at least 2035, but faced with an increasingly constrained fiscal environment, it determined that divesting the aircraft was a necessary step to balance its current and future needs. However, it made this decision as part of its fiscal year 2015 budget deliberations without fully examining the implications of this course of action. The upcoming DOD evaluation of F-35 CAS capabilities and the Air Force efforts under way to evaluate its force structure requirements are positive steps forward that should provide a better basis from which the Air Force can evaluate the implications of A-10 divestment and determine the appropriate path forward, which may or may not include early divestment. However, the fiscal year 2017 budget request marks the third consecutive year that the Air Force proposed divesting the A-10 without having determined its requirements for the A-10's missions and the gaps and risks resulting from divestment. As a result, it is unclear how effective or necessary its mitigation strategies will be. A recent example illustrates this lack of clarity. In its fiscal year 2017 budget proposal, the Air Force deferred some F-35 procurement—an option the Air Force originally identified as the highest risk alternative to A-10 divestment. Should it continue to pursue the early divestment of the A-10 fleet as a way to balance current demands and future needs, the Air Force would benefit from quality information that fully identifies capacity and capability gaps and associated risks resulting from divestment, and it could use that information to develop mitigation strategies. Additionally, a high-quality, reliable cost estimate would provide the Air Force with a sound basis from which to develop and consider alternatives to achieve its budget targets. More broadly, the lack of quality information to support A-10 divestment reveals a weakness in how DOD may make future decisions to divest major weapon systems, because the department lacks guidance on how to approach such decisions. Department officials could find themselves in the position where they must again consider divesting legacy platforms as a means to achieve savings that can then be applied to their modernization plans. Should that happen, the department will need guidance to ensure that DOD is collecting the quality information it needs to fully consider the consequences of such divestments—consequences that can be both operational and financial. Such guidance could help to ensure that DOD's examination of divestment options includes the quality information needed to fully identify gaps and associated risks resulting from divestment that can then be used to develop effective mitigation strategies. Further, it could help to ensure that DOD uses high-quality, reliable cost estimates that better position the department to identify alternatives for achieving similar cost savings in the future. Without this guidance, DOD may continue to face congressional challenges to future divestment proposals and take unnecessary risks as it continues to balance current demands and future needs.
To make a well-informed decision about the future of its A-10 aircraft, we recommend that before again recommending divestment of the A-10, the Secretary of the Air Force:
Develop quality information that fully identifies gaps in capacity or capability that would result from A-10 divestment, including the timing and duration of any identified gaps, and the risks associated with those gaps; and
Use that information to develop strategies to mitigate any identified gaps.
In addition, to further inform decisions about the future of the A-10, we recommend the Secretary of the Air Force, in considering divestment, develop a high-quality, reliable cost estimate utilizing best practices. As DOD faces future decisions on how to balance its existing capabilities and capacities against future modernization requirements, it will need quality information to help inform such decisions. To ensure that senior leaders have the quality information on which to base future force structure decisions, we recommend the Secretary of Defense develop and promulgate department-wide guidance that establishes specific informational requirements to be met before proposing divestment of major weapon systems that have not reached the end of their expected service lives. This guidance should require identifying gaps in capacity or capability that will occur for the proposing service and any other service if the divestment proposal is approved; recommending strategies for mitigating any identified gaps; and developing a high-quality, reliable cost estimate of the major weapon system proposed for divestment that can be used to identify alternatives for achieving similar savings. In written comments on a draft of the July 2016 classified report, the Secretary of the Air Force, on behalf of DOD, non-concurred with all three of our recommendations. The department subsequently provided an unclassified version of those comments, which are included in appendix V of this report. The complete classified response and our evaluation of those comments are in the classified report (GAO-16-525C). DOD also provided technical comments, which we have incorporated as appropriate. The Air Force, on behalf of DOD, non-concurred with our recommendation that the Secretary of the Air Force should, before again recommending A-10 divestment, develop quality information that fully identifies gaps in capacity or capability that would result from A-10 divestment, and use that information to develop strategies to mitigate any identified gaps. In its comments, the Air Force stated that it took exception to GAO's assertion that the Air Force made the decision to divest the A-10 without knowledge or understanding of the associated risk and capability gaps. Both in this report and our classified preliminary observations report (GAO-15-600RC), we detail the process that led to the divestment proposal and explain how fiscal constraints and strategic priorities, including prioritizing fifth generation fighters like the F-35, drove the Air Force decision. We also recognize that the Air Force conducted some analysis on the effects of A-10 divestment and is taking some mitigation steps. However, since divestments, like investments, can have far-reaching cost and operational consequences, such decisions should be based on quality information that would include, among other things, clearly identifying the gaps created by the action and strategies for mitigating those gaps.
In our report, we identify numerous areas where significant gaps in knowledge persist years after the Air Force decided to pursue A-10 divestment. For example, we found that the full extent to which the divestment proposals create capacity gaps and increase risk is difficult to determine, because DOD does not have a clearly established Air Force fighter aircraft capacity requirement. Further, we found that the Air Force has not comprehensively assessed potential mission capability gaps caused by A-10 divestment or the effects of divestment on its ability to support Joint Terminal Attack Controller training. As we describe in our report, though the Air Force and DOD are taking steps to mitigate potential gaps, they have not established clear requirements for the missions that the A-10 performs, including CAS, FAC(A), and CSAR-Sandy, and in the absence of these requirements, have not fully identified the capability gaps and risks that could result from A-10 divestment and the effectiveness or necessity of the Air Force's and the department's mitigation strategies. We recognize that the upcoming DOD evaluation of F-35 CAS capabilities and the Air Force efforts under way to evaluate its force structure requirements are positive steps forward that should provide a better basis from which the Air Force can evaluate the implications of A-10 divestment and determine the appropriate path forward. However, the Air Force does not yet have the quality information it needs to make a well-informed decision about the future of its A-10 aircraft. In its response, the Air Force also stated that we failed to highlight Air Force analysis that indicated the A-10 divestment was the most acceptable strategy, specifically citing two classified documents as evidence that it had the necessary information to support its divestment decision. The Air Force's classified response included a third document. However, these three documents have significant limitations. Both the Air Force summary of these documents and our analysis of their limitations are classified and therefore are not included in this report. They can be found in GAO-16-525C. The Air Force's response that it had the necessary information to make an informed divestment decision is not consistent with the actions it took subsequent to the analyses it cited. For example, a year after proposing to divest the A-10, the Air Force convened a group of CAS experts to, among other things, examine the state of CAS affairs and identify gaps. We also reported that in March 2016 the Air Force initiated a comprehensive force structure study that will include examining its requirements for CAS and other missions performed by the A-10. It is also studying the requirements for a future weapon system to provide CAS in a permissive environment. Our report also notes that a September 2015 Air Force study identified challenges to replacing the A-10 in the CSAR-Sandy role and that the service has not yet settled on a replacement. While the analysis identified by the Air Force in its comments may have been sufficient at the time to help inform much of the fiscal year 2015 budget deliberations, we believe that, because of their far-reaching cost and operational consequences, divestment decisions, like investment decisions, should be based on a higher standard of information. The findings of our report show that significant information gaps remain despite the initial and subsequent Air Force analyses, and therefore we believe our recommendation remains valid.
In addition, the Air Force did not concur with our recommendation to develop a high-quality, reliable cost estimate utilizing best practices to further inform decisions about the future of the A-10, though it offered little explanation for its position. In its response, the Air Force disagreed with our characterization that such criteria were not used in the A-10 divestment considerations and stated that high-quality internal data were used to develop accurate cost estimates based on existing best practices. In our report, we recognized that the Air Force used programming and sustainment data, such as weapons system sustainment, flying hours, and military personnel, to inform its cost estimate. In addition, we do not state that the Air Force did not use criteria in its A-10 divestment consideration but rather describe, in detail, the aspects of the A-10 cost estimate that did and did not meet best practices. Specifically, we describe the estimate as partially meeting best practices for being comprehensive, minimally meeting best practices for being well-documented and accurate, and not meeting best practices for being credible. Further, and as summarized in the scope and methodology section of this report, we sent our analysis to the Air Force for feedback prior to publication, and it agreed with our assessment. A high-quality, reliable cost estimate would provide the Air Force with a sound basis from which to consider alternatives to achieve its budget targets. We therefore continue to recommend that the Air Force enhance the quality and reliability of its A-10 cost estimate by utilizing these best practices. Finally, the Air Force, on behalf of DOD, did not concur with our recommendation to provide senior leaders with quality information by developing and promulgating department-wide guidance that establishes specific informational requirements to be met before proposing divestment of major weapon systems that have not reached the end of their expected service lives. The response stated that the department already has guidelines and robust procedures in place to provide senior leaders with quality information with which to make divestment decisions, including through budgeting and acquisition processes. As we reported, the A-10 divestment proposal came out of the fiscal year 2015 budget development process. We cited key information gaps that remain despite the department proposing to divest the A-10 in three consecutive budget proposals. The response also stated that in cases where the department is considering developing a new weapon system to replace existing capabilities, it conducts a thorough Analysis of Alternatives that examines the factors identified in the GAO recommendation in order to provide senior leaders with quality information. As our report shows, this was not the case for the A-10 divestment and has not been the case for other divestment proposals in the past. Proposals like the A-10 divestment and the Navy's 2012 proposal to decommission seven cruisers and two dock-landing ships well prior to the end of their service lives were made in the context of the budget process, not as part of a proposal to develop new systems. As such, the Analysis of Alternatives described by DOD in its response is not applicable.
Therefore, in order to ensure senior leaders have the quality information DOD agrees they need, we continue to believe that DOD needs to develop and promulgate guidance to help ensure that the department and services are collecting the quality information necessary to inform decisions for divesting major weapon systems before the end of their service lives. Without this guidance, DOD may continue to divest weapon systems and overlook the kinds of capability, capacity, and cost issues we point out in this report, which ultimately hinders DOD's ability to best balance current demands and future needs. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Chairman of the Joint Chiefs of Staff; and the Secretaries of the Air Force, Army, and Navy. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The Air Force proposal to divest the A-10 was the result of fiscal constraints and a strategy-based, portfolio-wide review of alternatives. Air Force budget guidance for fiscal year 2015 stated that it needed to reduce its previously planned spending by 11.5 percent over the 5-year budget plan. In developing its fiscal year 2015 budget request at lower-than-anticipated levels, the Air Force examined its entire portfolio and concluded, among other things, that the benefits of divesting the A-10 outweighed the cost of retaining it. Department of Defense (DOD) and Air Force strategic priorities guiding the Air Force proposal included fifth generation aircraft, such as the F-35; high-end intelligence, surveillance, and reconnaissance capabilities; and multi-role aircraft over single-role aircraft. With a smaller total force, multi-role fighters provide commanders with greater operational flexibility. For example, F-16s and F-15Es not only perform close air support (CAS) missions but can also conduct air-to-air missions, which the A-10 generally cannot. DOD reviewed and approved the Air Force A-10 divestment decision and submitted it as part of its fiscal year 2015 budget request. Figure 8 describes the fiscal year 2015 Air Force budget development process. According to Air Force officials, the Air Force did not re-examine this decision or conduct additional analysis for the fiscal year 2016 budget request, which also proposed divesting the A-10 by the end of fiscal year 2019. Citing rising demands caused by operations against the Islamic State of Iraq and the Levant (ISIL) and growing concerns about Russia, the Air Force fiscal year 2017 budget request temporarily reversed its decision to divest the A-10 fleet by fiscal year 2019. DOD has not recently evaluated the distribution of CAS responsibilities and capabilities among the services, but officials believe DOD would likely incur significant costs and operational challenges if it were to transfer the A-10 from the Air Force to the Army or Marine Corps.
For example, Air Force officials said the Air Force owns and distributes its targeting and jamming pods across several fleets, including the A-10; therefore, the Army or Marine Corps would need to purchase targeting and jamming pods for the A-10 fleet if the Air Force transferred its A-10s to them. In addition, existing Army and Marine Corps facilities and runways may need to be enhanced to support the A-10s. Army and Marine Corps officials also cited several cost-related issues. According to Army officials, Army Aviation already consumes a large portion of the Army's budget and the A-10 fleet transfer would not likely be accompanied by increased funding. This would force the Army to sacrifice resources from other aviation priorities. Similarly, the Marine Corps does not want to operate and maintain an aging fleet of A-10s, because it would divert resources away from current modernization efforts. The Marine Corps also prefers aircraft with "from the sea" capabilities and the A-10 does not operate from Navy ships. Service officials stated that the services have different perspectives on the tactical application of CAS that could affect training if the A-10 fleet was transferred from the Air Force. Air Force officials see the A-10 as a theater-wide air asset and believe that the Army would tie A-10s to the division or brigade level, thereby generating situations where an Army ground commander could be reluctant to use the A-10 outside of his battle area. Air Force officials also noted that transferring the A-10 to another service would create an overlap of responsibilities with Air Force CAS-capable platforms, such as the F-16, and would require years to redefine joint fires doctrine and train on new tactics, techniques, and procedures. Marine Corps officials stated that the primary purpose for Marine Aviation—the Air Combat Element specifically—is to provide support for the Ground Combat Element as part of an integrated campaign. Typically, Marine Aviation is not made available for joint tasking, unless there is excess capacity. The distribution of CAS responsibilities and capabilities among the services has been discussed since World War II but has not seen significant debate since 1989. Table 5 provides a chronological summary of key CAS events set within the context of ongoing wars or operations (purple rows) and procurement actions (blue rows). It also shows how similar CAS issues have remained over the years. To assess the extent to which the Air Force and the Department of Defense (DOD) have the quality information needed to understand the implications of A-10 divestment, we assessed strategic guidance, memorandums, aircraft inventory, training syllabi, and other documentation against DOD guidance on economic analysis for decision-making, Air Force guidance on business case analysis procedures, and GAO knowledge-based criteria. DOD guidance and GAO knowledge-based criteria identify key factors that, while developed for investment decisions, are applicable to making divestment decisions. These key factors include, among other things, having clearly defined and understood requirements that provide a baseline from which to identify gaps and their associated risks and inform decisions on how to best address the gaps.
Specifically, we reviewed documents—such as the DOD Global Force Management Implementation Guidance and DOD Directive 8260.05 on the Support for Strategic Analysis—that describe how the combatant commands are to identify force requirements and request resources for current operations and how the services are to explore potential future force structure requirements. We met with officials to understand the extent to which the Air Force used these processes to specifically assess current and future force structure requirements and gaps for the range of missions conducted by the A-10 and develop corresponding mitigation options. To assess the reliability of Air Force A-10 squadron divestment data, we reviewed Air Force briefings that describe the divestment phasing of A-10 squadrons by Air Force base and fiscal year and confirmed our interpretation of the data in these briefings with Air Force officials. To assess the reliability of Air Force close air support (CAS)-capable inventory data, we compared Air Force data with an inventory graphic from the Air Force's fiscal year 2017 budget briefing to Congress and discussed it with Air Force officials. We found both sources of data sufficiently reliable for our purposes of providing a general comparison of the three recent A-10 divestment proposals and showing a general trend in Air Force-projected inventory. We also reviewed training requirements in Air Force Ready Aircrew Program Tasking Memorandums as well as initial qualification and advanced course syllabi for the A-10, F-15E, F-16, and F-35 to compare the levels of CAS knowledge taught to the pilots of each aircraft. We met with officials to determine whether the Air Force used these requirements to assess training expertise that could be lost by divesting the A-10 and develop mitigation options. We also reviewed classified reports describing the assumptions and scenarios used to analyze risk levels associated with several Air Force divestment options to determine whether the Air Force specifically assessed the effect that A-10 divestment would have on conducting CAS and several other A-10 missions. We did not, however, assess the reasonableness of the scenarios or assumptions, because they were derived from DOD guidance to all services and were outside the scope of this review. To assess the Air Force's estimate of A-10 cost savings, we analyzed the Air Force's cost estimating approach against best practices in the 2009 GAO Cost Estimating and Assessment Guide. GAO designed the cost guide to be used by federal agencies to assist them in developing reliable cost estimates and also as an evaluation tool for existing cost estimates. To develop the cost guide, GAO cost experts assessed measures applied by cost-estimating organizations throughout the federal government and industry and considered best practices for the development of reliable cost estimates. We analyzed the cost-estimating practices used by the Air Force against these best practices. For our reporting needs, we collapsed these best practices into four general categories representing practices that help ensure that a cost estimate is reliable: specifically, that it is (1) accurate, (2) well-documented, (3) comprehensive, and (4) credible.
After a review of all source data, all supporting documentation, interviews with cognizant officials, and independent research, we assessed the extent to which the Air Force met these best practices on a five-point scale:
Not Met—the Air Force provided no evidence that satisfies any of the criteria.
Minimally Met—the Air Force provided evidence that satisfies a small portion of the criteria.
Partially Met—the Air Force provided evidence that satisfies about half of the criteria.
Substantially Met—the Air Force provided evidence that satisfies a large portion of the criteria.
Met—the Air Force provided complete evidence that satisfies all of the criteria.
We determined the overall assessment rating by assigning each individual best practice a number: Not Met = 1; Minimally Met = 2; Partially Met = 3; Substantially Met = 4; and Met = 5. For the purposes of this assessment we also included a Not Applicable (N/A) assessment category. Then, we took the average of the individual best practice assessment ratings to determine the overall rating for each of the four characteristics. The resulting average becomes the overall assessment as follows: Not Met = 1 to 1.4; Minimally Met = 1.5 to 2.4; Partially Met = 2.5 to 3.4; Substantially Met = 3.5 to 4.4; and Met = 4.5 to 5.0. (A brief computational sketch of this rollup appears below.) We had an analyst independently rate each individual best practice and then had a supervisor verify the analyst's rating against Air Force documentation. Finally, we sent our detailed analysis to the Air Force for feedback, gave it an opportunity to provide additional documentation if it disagreed with our scores, and the Air Force agreed with our assessment. We reviewed DOD and Air Force documentation and met with knowledgeable officials to understand the process leading to the fiscal year 2015 A-10 divestment proposal and how DOD has evaluated options for CAS over the years. To describe the process and priorities that led to the Air Force's A-10 divestment proposal, including any consideration of alternatives, we reviewed Air Force briefing slides and classified reports summarizing the priorities, assumptions, and scenarios used to assess several fiscal year 2015 budget options. To describe how DOD has evaluated options for redistributing CAS responsibilities, including the feasibility of transferring the A-10 fleet to the Army or Marine Corps, we reviewed historic documents—such as the Key West agreement of 1948—and interviewed knowledgeable Air Force, Army, and Marine Corps officials. Due to the potentially large number of proposals for redistributing CAS force structure and service responsibilities over the years, we limited our scope to a selection of proposals that originated from DOD and were reviewed by the senior-most levels of the department. In addition, we vetted our time line of key CAS events with historians from the Naval History and Heritage Command and the Air Force Historical Support Division. We did not have representatives from the Army Center of Military History and the Marine Corps History Division review the time line but believe our analysis of historic documents, input from other service historians, and interviews with officials from the Army and Marine Corps were sufficiently reliable for our purposes of describing a select history of CAS from World War II to the present day.
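To make the rating rollup above concrete, the following is a minimal sketch in Python; the function name and example ratings are hypothetical illustrations rather than GAO tooling, and it assumes averages are rounded to one decimal place before being mapped to a category, consistent with the ranges listed above.

    # Five-point scale from the methodology above.
    SCALE = {"Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
             "Substantially Met": 4, "Met": 5}

    def overall_rating(practice_ratings):
        """Average the individual best practice ratings for one characteristic
        and map the result back to an overall assessment category.
        Ratings of "N/A" are excluded from the average."""
        scores = [SCALE[r] for r in practice_ratings if r != "N/A"]
        avg = round(sum(scores) / len(scores), 1)  # assumed one-decimal rounding
        if avg <= 1.4:
            return avg, "Not Met"            # 1 to 1.4
        if avg <= 2.4:
            return avg, "Minimally Met"      # 1.5 to 2.4
        if avg <= 3.4:
            return avg, "Partially Met"      # 2.5 to 3.4
        if avg <= 4.4:
            return avg, "Substantially Met"  # 3.5 to 4.4
        return avg, "Met"                    # 4.5 to 5.0

    # Example: one characteristic whose underlying practices were rated unevenly.
    print(overall_rating(["Partially Met", "Minimally Met", "Met", "N/A"]))
    # (3.3, 'Partially Met')

For example, a characteristic whose underlying practices were rated Partially Met, Minimally Met, and Met, with a fourth practice not applicable, averages 3.3 and is therefore assessed as Partially Met overall.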
We interviewed officials across DOD and the services to determine whether our assessment of DOD information was factually accurate and obtained input, as appropriate, from the following organizations:
Office of the Secretary of Defense, Cost Assessment and Program Evaluation;
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics;
Office of the Director, Operational Test and Evaluation;
U.S. Central Command, U.S. European Command, U.S. Pacific Command, U.S. Forces Korea, and U.S. Special Operations Command; and
U.S. Air Force, Army, Navy, and Marine Corps.
To better understand training and operational issues relevant to the A-10, we met with units at Davis-Monthan, Nellis, and Osan Air Force bases, as well as the 175th Wing of the Maryland Air National Guard. We chose these locations based on factors such as the training and operational expertise resident in some of these locations and discussions with Air Force officials. We conducted this performance audit from June 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix contains information on the three primary missions currently assigned to the A-10 as well as the role it plays supporting the training of Joint Terminal Attack Controllers. Each section begins with a definition of the mission, the mission's relevance, and the A-10's role in the mission, including potential impacts of A-10 divestment. The A-10 is required to be proficient in its primary missions of Close Air Support (CAS), Forward Air Controller (Airborne) (FAC(A)), and Combat Search and Rescue-Sandy (CSAR-Sandy), and familiar with its secondary missions of Counter Fast Attack Craft/Fast Inshore Attack Craft (CFF) and Air Interdiction (AI). We excluded further discussion of the secondary missions in this appendix because they are classified. The additional details can be found in the classified version of this report (GAO-16-525C).
Close Air Support Mission
Air action by fixed-wing and rotary-wing aircraft against hostile targets that are in close proximity to friendly forces and that require detailed integration of each air mission with the fire and movement of those forces. Ground commanders have relied on CAS to supply the majority of their fire support in combat operations over the last 12 years, according to the Joint Staff. CAS provides ground commanders with flexible and responsive support and, under some circumstances—including airborne assaults, counter-insurgency operations, and special operations—may be the only fire support available. The Air Force is the primary supplier of CAS to the Army. Unlike some missions where there are defined targets in known locations, CAS is a dynamic mission whose needs change depending on the actions and interactions of enemy and friendly ground forces, making it more difficult to model, according to Air Force officials. A-10 divestment could result in a reduction in Air Force CAS expertise. Department of Defense (DOD) doctrine and officials across DOD identify training as a key condition for effective CAS. DOD doctrine states that maintaining proficiency through training allows aircrews to adapt to rapidly changing conditions in the operational environment.
Although many platforms have performed CAS in the past decade, A-10 pilots are considered the Air Force's CAS experts due to the amount and depth of their CAS training. The A-10 pilots' CAS focus begins at initial qualification training, where they spend significantly more time focused on CAS in their lectures, simulator training, and sorties than pilots of other Air Force CAS-capable fighters. During initial qualification training, pilots of multi-role platforms, such as the F-16 and F-15E, receive a comparatively smaller fraction of CAS training because of the many other missions on which they must focus. This differential in CAS focus extends to yearly training requirements and through the advanced-level Weapons Instructor Course, which is the graduate-level training for elite Air Force pilots. Fewer sorties are required to retain CAS proficiency in the F-15E, F-16, and F-35A than in the A-10. In the advanced-level Weapons Instructor Course, A-10 pilots fly more CAS sorties and train against far more complex CAS scenarios than other Air Force fighter pilots. Table 6 summarizes the training sortie requirements for pilots of Air Force CAS-capable fighters along with the mission priority of CAS for each aircraft type. CAS expertise becomes more important as conditions become more complex, according to Air Force officials. However, much of the CAS provided over the last decade in Afghanistan and Iraq has been in environments where threats to the aircraft were low, where CAS often consisted of dropping bombs on coordinates, and where squadrons had months to prepare for their CAS-focused deployments, according to Air Force officials. The CAS experts convened by the Air Force in 2015 found that a broad range of aircraft have become good at providing CAS in these permissive environments. The advantages of A-10 CAS expertise may not be as significant under these circumstances but become more pronounced in contested environments when a wider CAS skillset is needed, according to Air Force and combatant command officials, and DOD plans to conduct CAS in contested environments in the future. Loss of the A-10 airframe will also cause a decrease in Air Force CAS capability. Senior DOD leaders have stated that the A-10 is the Air Force's best CAS aircraft. The CAS experts convened by the Air Force in 2015 concluded that A-10 divestiture creates a gap because the Air Force is losing a high-capacity and cost-efficient ability to kill armor, moving, and close-proximity targets in low weather conditions. Table 7 provides a summary of some A-10 CAS advantages. Although the A-10 has a number of advantages that are highlighted in table 7, the dynamic nature of CAS means that other aircraft also have some advantages. For example, although the A-10 has a relatively long loiter time and large weapons capacity, a B-1 bomber far exceeds both. While acknowledging the capabilities of other aircraft, officials from the Air Force and combatant commands emphasized that A-10 capabilities stand out in circumstances where enemy forces are close to friendly forces, there are moving and armored targets, and the weather is bad.
Forward Air Controller (Airborne) Mission
A specifically trained and qualified aviation officer who exercises control from the air of aircraft engaged in CAS of ground troops. The FAC(A) also provides coordination and terminal attack control for CAS missions, as well as locating, marking, and attacking ground targets using other fire support assets. FAC(A)s are CAS experts who help to efficiently manage air-to-ground operations.
This role is challenging because FAC(A)s must first understand a dynamic situation on the ground and then determine the best way to support the ground commander utilizing available air (e.g., F-15E, MQ-1, A-10) and ground-based assets (e.g., artillery) that each have unique capabilities and limitations. According to Air Force officials, the Air Force generally chose not to use FAC(A)s during operations in Iraq and Afghanistan. However, according to Air Force officials, FAC(A)s would be invaluable during contested CAS operations, because they would perform reconnaissance and develop battlefield awareness under conditions where intelligence and communications would be much more limited than they have been in Iraq and Afghanistan. FAC(A)s are also important in cases where there are not enough qualified Joint Terminal Attack Controllers authorized to control coalition and allied aircraft, according to Air Force officials. FAC(A)s can also help coordinate actions in a very crowded airspace. In addition, FAC(A)s have a much broader view of the battlespace than Joint Terminal Attack Controllers, which is important in a major conflict, according to combatant command officials. FAC(A)s can also be a significant force multiplier and risk mitigation tool to compensate for an inevitable decline in Air Force CAS proficiency associated with the transition to a multi-role fighter force, according to Air Force officials. FAC(A)s could do so by providing training expertise to pilots in their home squadrons and by managing the CAS fight when operationally deployed. A-10 divestment could result in a reduction in Air Force FAC(A) expertise. All DOD FAC(A)s are required to meet minimum training requirements for certification and qualification retention as established in a memorandum of agreement. However, Air Force FAC(A) training requirements are higher for A-10 pilots than for those of other Air Force aircraft. A-10 FAC(A)s are required by the Air Force to be mission proficient, whereas F-16 FAC(A)s and future F-35 FAC(A)s are only required to be familiar with the mission. A-10 FAC(A)s are required to conduct four times as many yearly training sorties as F-16 FAC(A)s and almost three times as many as future F-35 FAC(A)s. In addition, the A-10 program is the only Weapons Instructor Course that requires all entering students to be FAC(A) qualified and has a training phase specifically dedicated to FAC(A). Moreover, Air Force officials told us that the skills needed for the FAC(A) mission build upon CAS skills. As a result, A-10 pilots have a more robust foundation upon which to build their FAC(A) expertise. The Air Force has not determined the significance of any lost FAC(A) expertise that may be associated with A-10 divestment. A-10 divestment could also result in a decrease in the number of Air Force FAC(A)s. All A-10 fighter squadrons and some F-16 fighter squadrons are assigned a minimum number of FAC(A) pilots on a squadron-by-squadron basis. Although the F-35's advanced networking and sensor capabilities could make it well suited for the FAC(A) role, according to Air Force and Joint Staff officials, the Air Force has not yet determined how many FAC(A)s its F-35 squadrons will be required to have. Currently, approximately half of the Air Force FAC(A) needs are filled by A-10 pilots. The Air Force does not centrally track the number of FAC(A) pilots it has and has not established a requirement for the number of FAC(A)s it will need in the future.
Combat Search and Rescue-Sandy Mission
Tactics, techniques, and procedures performed by forces to recover isolated personnel from hostile or uncertain operational environments. The Sandy mission involves aircraft and pilots specifically trained to coordinate rescue action, escort helicopters on combat rescue missions, and suppress enemy forces. CSAR is a highly dynamic and unpredictable mission, unique from other rescue missions in that it is done with little warning, deep in hostile territory, and requires searching for the survivor's location, according to Air Force and combatant command officials. CSAR-Sandy is a subset of the CSAR mission that requires pilots who are specifically trained to coordinate rescue missions, escort helicopters, and suppress enemy forces. According to Air Force and combatant command officials, there is an enduring requirement for CSAR, including CSAR-Sandy. It is not a mission whose value is easily quantified, but these officials noted that it is part of the ethos of the U.S. military that no servicemember will be left behind. The CSAR-Sandy mission is one way the military fulfills that promise, according to the officials. Moreover, it helps morale and encourages pilots to remain aggressive when conducting their missions. Officials from three combatant commands indicated that their commands have a requirement for CSAR-Sandy forces. Further, CSAR capabilities are very important for assuring potential partner nations and facilitating their participation in operations. According to officials from one command, partner nations often want U.S. CSAR capabilities to be available before agreeing to join in operations. The A-10 is currently the only DOD platform assigned this mission, and every combat-coded A-10 squadron has CSAR-Sandy qualified pilots. A-10s typically conduct the CSAR-Sandy mission using four aircraft designated Sandy 1 through Sandy 4. Sandy 1, the Rescue Mission Commander, controls recovery efforts and provides protection of the isolated personnel from ground threats. This is a complex task that includes responsibility for planning and directing the actions of all ground forces, air forces, and supporting forces involved in the rescue, including the HH-60 rescue helicopters, aircraft suppressing enemy air defenses, and tankers. Sandy 2 assists the Sandy 1 and acts as the FAC(A), clearing the rescue area of potential threats. Sandy 3 and 4 conduct rescue escort with responsibilities that include conducting reconnaissance, escorting rescue vehicles and helping them navigate the safest possible route, providing communications relay, and finding and neutralizing threats. Helicopters are very vulnerable to small arms fire, so there are many potential threats. According to rescue group officials, qualified Sandy-trained pilots are vital for combat search and rescue capabilities. They also said that a drop in Sandy-trained pilots would restrict the ability of rescue groups to conduct CSAR in volatile environments. Figure 9 provides an example of a CSAR mission and the Sandy roles. Developing CSAR-Sandy qualified pilots requires extensive training due to the complexity of the mission, and the training builds upon skills developed during CAS and FAC(A) training, according to Air Force officials. A-10 pilots who become Sandy-qualified start at Sandy 4 and then work up to Sandy 1 (Rescue Mission Commander), which can take 5 to 10 years, according to Air Force officials. Sandy 1 and 2 pilots are required to fly a minimum of 12 CSAR training sorties per year in addition to their CAS sorties.
A-10 pilots must be Sandy 1-qualified to participate in the A-10 Weapons Instructor Course, which officials described as the graduate-level training. During the program, students fly five CSAR-related sorties spanning 10 hours, attend five lectures on CSAR, and participate in a 30-hour practicum that focuses on CAS and CSAR. Gaining and retaining CSAR-Sandy qualification is resource intensive because it requires many aircraft, according to Air Force and combatant command officials. The A-10 platform has certain capabilities that make it well suited for the CSAR-Sandy mission. A-10s are well suited for the Sandy 1 (Rescue Mission Commander) role because of their long loiter time and large communications suite. The A-10 is currently the only Air Force fighter with a radio designed to locate and communicate with DOD's hand-held emergency radio. A-10 platform characteristics are also useful for the Sandy 3 and 4 roles, where rescue escort aircraft must respond quickly. A-10s are survivable and can fly low and slow, and are able to stay close to the rescue helicopters so they can quickly identify and respond to threats. The A-10's forward-firing munitions (the 30 mm gun, missiles, and rockets) and tight turning radius allow it to quickly engage and re-engage a variety of targets. A rescue aircraft pilot gave an illustrative example of how, when he is flying at 300 feet and identifies a possible threat ahead, rescue escort A-10s quickly come beside his aircraft, locate the potential target, and take care of it. Other jets fly higher and faster and rely on their targeting pods. The pilot said that he is often over or beyond the potential threat by the time other jets are able to locate it. The Air Force has not formally determined what aircraft, if any, will replace the A-10 for the CSAR-Sandy mission. Should the Air Force remain committed to this mission it will need to identify another platform to take on this responsibility, but, according to Air Force officials, there is no obvious replacement for the A-10. The Air Force assessed the feasibility of using F-16s or F-15Es for the CSAR-Sandy 1 role and concluded that aircrews for both aircraft would require extensive training and that their existing missions would prevent such training. Combatant command officials echoed the finding that other aircraft could not be prepared to conduct the CSAR-Sandy mission along with their current missions. The Air Force assessment, completed in September 2015, recommended that F-15E and F-16 aircrews not be tasked with the Sandy 1 role without adequate training, and noted that the aircraft required communications gear, survivability systems, and weapons upgrades. The Air Force has not taken formal actions on these findings, according to Air Force officials.
Joint Terminal Attack Controllers Mission
Joint Terminal Attack Controllers are qualified (certified) servicemembers who, from a forward position, direct the action of combat aircraft engaged in CAS and other offensive air operations.
Joint Terminal Attack Controller Significance
Demand for Joint Terminal Attack Controllers has grown significantly over the last decade and exceeds supply, according to DOD data. The Air Force has the largest number of Joint Terminal Attack Controllers in DOD, and according to Air Force officials, Air Force Joint Terminal Attack Controllers provide a vital link between the Army and the Air Force. Air Force Joint Terminal Attack Controllers serve in Army units, advising ground commanders and directly calling in air support.
Army officials said they do not anticipate a decrease in the Army's requirement for Joint Terminal Attack Controllers.
A-10 Role in Supporting Joint Terminal Attack Controller Training
A-10 divestment could negatively affect the Air Force's ability to train Joint Terminal Attack Controllers. Joint Terminal Attack Controllers must conduct a minimum number of CAS "controls"—calling in of airstrikes—to be certified or to maintain their qualification. Getting aircraft to support Joint Terminal Attack Controller training has been increasingly difficult, especially as the number of Joint Terminal Attack Controllers has risen and the aircraft inventory has declined. According to the Joint Staff, the A-10 divestment will compound training shortfalls already being felt. The loss of A-10 training support is disproportionate to the number of aircraft being divested because the A-10 provides a significant portion of Joint Terminal Attack Controller certification training and qualification training. From March 2010 to March 2016, A-10s provided 44 percent of aircraft support for Air Force Joint Terminal Attack Controller certification training, according to Air Force data. Air Force officials said they do not centrally track qualification training but A-10 support levels are similar to certification training. Officials from several combatant commands also stated that A-10s provide significant support for Joint Terminal Attack Controller training. The F-35's ability to make up for some of this capacity loss is limited by its inability to use inexpensive and light training munitions that allow aircraft to support more training CAS controls. It also currently lacks video downlink and infrared pointer capabilities, which are often used in CAS and are therefore also important for training. The Air Force also has not yet determined the extent to which it will be able to link F-35 and Joint Terminal Attack Controller simulators, according to officials from the Air Force and Joint Staff. Further, the F-35 has a large number of missions, and the extent to which limited flight hours will be made available to support Joint Terminal Attack Controller training is unknown at this point. The quality of Joint Terminal Attack Controller training support provided by the A-10 community is better than that provided by other Air Force aircraft, according to DOD officials. The A-10's wide variety of ordnance gives Joint Terminal Attack Controllers more options and allows them to deal with a larger variety of situations. DOD officials involved with Joint Terminal Attack Controller training told us that A-10 pilots generally provide better training because of their CAS expertise, knowledge of the standards, and an understanding of how ground forces operate. Officials provided an illustrative example comparing Joint Terminal Attack Controller qualification training support provided by A-10 pilots and pilots from a different Air Force fighter community. A-10 pilots often use notes, maps, and data in detailed debriefs that can last several hours after Joint Terminal Attack Controller training. In the counter-example, the training debrief provided by the pilots from a different fighter community lasted several minutes and involved no notes. The officials ascribed the difference to a difference in culture, where A-10s are closely tied to ground forces and other fighters generally are not.
A-10s are also better positioned to support Joint Terminal Attack Controller training going forward as Joint Terminal Attack Controllers expand their training focus to once again include CAS in contested environments, according to Air Force officials. In addition, officials from U.S. European Command and U.S. Pacific Command said partner nations often request A-10 support for their Joint Terminal Attack Controller training, and this support is an important component of their theater cooperation efforts. John Pendleton, (202) 512-3489 or pendletonj@gao.gov. In addition to the contact named above, Michael Ferren, Assistant Director; Tracy Barnes, Laurie Choi, Nicolaas Cornelisse, Travis Masters, Amie Lesser, Karen Richey, Michael Silver, Matthew Spiers, Erik Wilkins-McKee, and Edward Yuen made key contributions to this report.
DOD faces difficult decisions on how to best balance current demands and future needs within fiscal constraints. Decisions regarding the A-10 aircraft exemplify the difficulty. In the fiscal year 2015 budget request, DOD and the Air Force prioritized modern multi-role aircraft and proposed divesting the A-10 fleet, but Congress prohibited this action. DOD and the Air Force have continued to propose divesting the A-10 in two subsequent budget requests. The National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to review the A-10 divestment proposal. This report reviews the extent to which (1) the Air Force and DOD have quality information needed to understand the implications of A-10 divestment; and (2) the Air Force followed best practices when estimating cost savings from A-10 divestment and evaluating alternatives. GAO analyzed agency documents and interviewed knowledgeable officials for this review. The Department of Defense (DOD) and Air Force do not have quality information on the full implications of A-10 divestment, including gaps that could be created by A-10 divestment and mitigation options. While A-10 pilots are recognized as the Air Force experts in providing close air support (CAS) to friendly forces, the A-10 and its pilots also perform other missions that are important to ongoing operations or to combatant commander operational plans, and divestment will result in reduced capacity and capability in these other areas. The Air Force is taking a number of steps to try to mitigate any potential negative impacts from its proposed A-10 divestments. However, the Air Force has not established clear requirements for the missions the A-10 performs, and in the absence of these requirements, has not fully identified the capacity or capability gaps that could result from the A-10 divestment. Without a clear understanding of the capability or capacity gaps and risks that could result from A-10 divestment, it is also unclear how effective or necessary the Air Force's and the department's mitigation strategies will be. For example, although the Air Force has several efforts underway to generally mitigate the loss of capabilities that would result from A-10 divestment, it has not identified how or whether it will replace the A-10's role in combat search and rescue missions. Depending on the specific mitigation strategy chosen, the Air Force may have to address a number of different secondary impacts that could affect its ability to execute existing missions. The A-10 is one example of a challenge DOD could continue to face as it balances current needs against investing in the future force to replace aging systems. For example, in June 2014, GAO reported on a Navy challenge in balancing current capability and capacity with future modernization needs. Overall, the department does not have guidance to ensure that the services and DOD are collecting quality information to inform divestment decisions on major weapon systems before the end of their service lives. Without quality information that fully identifies gaps and associated risks resulting from divestment that can be used to develop mitigation strategies, DOD and the Air Force may not be well-positioned to best balance current demands and future needs. According to the GAO Cost Estimating and Assessment Guide, a high-quality, reliable cost estimate is comprehensive, well-documented, accurate, and credible.
GAO's analysis found that the Air Force's cost estimate for its fiscal year 2015 divestment proposal partially met best practices for being comprehensive, minimally met best practices for being well-documented and accurate, and did not meet best practices for being credible. Additionally, Air Force officials stated they used similar practices when developing fiscal years 2016 and 2017 budget requests that included A-10 divestment. As a result, the Air Force cannot ensure that it has a reliable estimate of the cost savings it would generate by divesting the A-10. Further, without developing a reliable estimate, the Air Force does not have a sound basis from which to develop and consider alternatives to achieve budget targets, such as making adjustments to other fighter-attack programs or mission areas like air superiority or global strike. This is a public version of a classified report GAO issued previously. It excludes classified information that described specific intelligence assessments, scenarios, and operational details. With regard to the A-10, GAO recommends that the Air Force fully identify mission gaps, risks, and mitigation strategies, and also develop high-quality, reliable cost estimates of the savings from divestment before again proposing to divest its A-10 fleet, and that DOD establish quality information requirements to guide major weapon system divestments. DOD non-concurred with the recommendations, but GAO continues to believe that they remain valid, as discussed in the report.
DOD’s Real Property Management Program is governed by statute and by DOD guidance documents that establish accountability for real property and requirements for financial reporting. These laws and guidance documents require DOD and the military departments to maintain certain data elements about their facilities to ensure efficient property management. Three DOD guidance documents—DOD Directive 4165.06, DOD Instruction 4165.14, and DOD Instruction 4165.70—assign responsibilities for managing DOD’s real property inventory to a number of organizations, including the Under Secretary of Defense (Acquisition, Technology and Logistics) and the military departments. DOD Directive 5110.4 assigns WHS responsibility for managing the DOD leased facilities within the National Capital Region that are not managed by the military departments. For real property accountability, DOD Instruction 4165.70 provides WHS with the same responsibilities as the military departments. DOD Directive 4165.06 assigns overall responsibility for DOD’s real property, including its leased assets, to the Under Secretary of Defense (Acquisition, Technology and Logistics) and specific responsibilities to the three military departments. DOD leases are categorized by four real property types: (1) land; (2) buildings (roofed and floored facilities enclosed by exterior walls and consisting of one or more levels that are suitable for single or multiple functions); (3) linear structures (facilities whose function requires that they traverse land [e.g., runway, road, rail line, pipeline, fence, pavement, electrical distribution line] and are reported by a linear unit of measure); and (4) structures (facilities other than buildings or linear structures that are constructed on or in the land, e.g., tower, storage tank, wharf, pier). DOD manages its real property lease data by collecting and compiling designated asset-level data into RPAD, which is the single authoritative source for all data on DOD’s real property inventory. RPAD includes real property records for owned and leased assets that are directly managed by the military departments and WHS. DOD Instruction 4165.70 requires the military departments and WHS to keep accurate records of the real property assets, including leased facilities, under their jurisdiction, custody, and control. It also makes DOD real property administrators accountable for maintaining a current inventory count and up-to-date information about the cost, functional use, status, condition, and utilization of each real property unit in the department’s real property inventory, among other things. DOD Instruction 4165.14 requires that the annual real property inventory submissions from the military departments and WHS comply with DOD’s Real Property Information Model, which provides the framework for all real property data and any associated business rules. The model contains nearly 240 data elements that are to be maintained in RPAD and the data dictionary for using these elements. Each of the military departments maintains its own real property inventory system to track owned and leased assets that it manages. WHS uses a spreadsheet based on DOD’s Real Property Inventory Requirements to manage DOD leased facilities in the National Capital Region that are not managed by the military departments. 
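To ground the data-quality discussion that follows, the sketch below models a lease record as a small Python structure. This is a minimal, hypothetical subset: the field names are illustrative stand-ins, not the actual element names in DOD’s Real Property Information Model, which defines nearly 240 data elements.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LeaseRecord:
    """Illustrative subset of a real property lease record (hypothetical field names)."""
    instrument_number: str                    # unique lease identifier
    asset_type: str                           # "land", "building", "linear structure", or "structure"
    reporting_component: str                  # e.g., "Army", "Navy", "Air Force", "WHS"
    status: str                               # e.g., "active", "hold", "terminated"
    annual_rent: Optional[float]              # base annual dollar amount (excludes utilities, parking, etc.)
    annual_rent_plus_costs: Optional[float]   # annual rent plus other costs defined in the lease
    termination_date: Optional[date] = None   # recorded only when the lease has ended
```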
At the end of each fiscal year, the military departments and WHS are to transmit data from their real property inventory systems to DOD for consolidation in RPAD, and the data are to be certified by the military departments’ and WHS’s real property officers as being as accurate and complete as possible. DOD has a verification and validation process to determine whether each data element has an entry that is in the correct format and complies with established business rules. When anomalies are discovered in the data provided by the military departments and WHS, DOD provides the data back to the submitting organization for review and correction as necessary. GSA provides DOD use of facilities that GSA either owns or acquires under a lease on DOD’s behalf. GSA’s real property inventory system, Real Estate Across the United States, is a real-time database that includes GSA-owned space and GSA-leased space that it manages and furnishes to DOD for use through occupancy agreements. The lease data from the military departments’ and WHS’s real property inventory systems that are included in RPAD, together with their GSA occupancy agreements, should provide a complete picture of DOD’s leased real property assets. The data for the leased assets that are directly managed by the military departments and WHS are reported annually in DOD’s internal reports, such as its Base Structure Report, and in its submission to the Federal Real Property Profile. Similarly, GSA reports its own assets that DOD uses, or assets that it acquires under a lease on DOD’s behalf, in its annual submission to the Federal Real Property Profile. According to RPAD managers, to avoid duplication of assets in its annual submission to the Federal Real Property Profile, DOD does not report any assets that are leased from other federal agencies, including GSA. Figure 1 shows the real property inventory systems that provide the data for reporting on DOD’s real property assets that are directly leased by the military departments and WHS and leased assets that DOD uses through GSA occupancy agreements. The Department of Homeland Security, in coordination with GSA, has responsibility for the security of federal facilities. The Federal Protective Service, a component of the Department of Homeland Security’s National Protection and Programs Directorate, protects buildings, grounds, property, and the persons on the property under the control and custody of GSA. Although the Federal Protective Service is the primary agency responsible for protecting these facilities, the Department of Homeland Security may delegate the protection of buildings to tenant agencies such as DOD. The Pentagon Force Protection Agency, a defense agency and a component of DOD, provides force protection, security, and law enforcement to safeguard personnel, facilities, infrastructure, and other resources for the Pentagon Reservation and 16 DOD-leased facilities within the National Capital Region that are managed by WHS. However, the military departments provide security for the leased facilities they manage, including those facilities in the National Capital Region. Facility security assessments are conducted by the Pentagon Force Protection Agency and the Federal Protective Service, using standards set by the Interagency Security Committee. 
The Interagency Security Committee, which consists of over 100 senior-level executives from 54 federal agencies and departments, develops and evaluates security standards and oversees the implementation of appropriate security measures in nonmilitary federal facilities in the United States. The Interagency Security Committee was established by Executive Order 12977; its primary members represent 21 federal departments and agencies, and its associate members represent 33 federal departments and agencies. DOD, the Department of Homeland Security, and GSA are primary members, and the Federal Protective Service is an associate member. The Interagency Security Committee defines the criteria and processes that those responsible for the security of a nonmilitary federal facility should use to determine its Interagency Security Committee baseline facility security level. A facility security level ranges from level I (lowest risk) to level V (highest risk) and is based on several factors, including the size of the facility, the number of occupants, the perceived threat to tenant agencies, the criticality of the tenants’ missions, and the facility’s symbolic value. While DOD is taking some steps to address data issues, it cannot fully determine the number, size, and costs of its leases because RPAD contains some inaccurate and incomplete data. The RPAD data show that DOD had 5,965 lease records in fiscal year 2011 and 5,538 lease records in fiscal year 2013 that were within the scope of our review. The majority of the lease records in both fiscal years were reported by the Army. These RPAD records include interests in real property that DOD obtains from private organizations, GSA, and state organizations. Based on our review of selected data elements in RPAD leasing records, we found that RPAD contained inaccurate data due to at least one violation of established business rules in 900 (15 percent) of the 5,965 fiscal year 2011 lease records and in 541 (10 percent) of the 5,538 fiscal year 2013 lease records. Most of these errors were in the Army’s lease records; however, the Army reported to us that it is aware of these issues and is taking steps to correct future data. We also found that about 5 percent of the Army’s lease records were not included in RPAD for fiscal year 2011 and fiscal year 2013. Furthermore, we examined a statistical random sample of RPAD lease records for fiscal year 2013 and found that there were some inconsistencies in the lease data between RPAD and the military departments’ and WHS’s lease records. Specifically, for one of the data elements we reviewed involving lease costs, we found that 13 percent of the Army sample records in RPAD were inconsistent with the source records in the Army’s real property systems. We also performed a more in-depth review of the Army’s records for multiple assets on a single lease and found that the Army was not following DOD’s guidance for reporting on these types of leases. Lastly, although WHS is following DOD’s guidance for reporting the square footage of buildings, our review of the WHS lease records found that the square footage of buildings that have multiple tenants under separate leases was overstated for each lease recorded in RPAD. Cumulatively, these inaccurate and incomplete data are indicators of the unreliability of certain RPAD data on the number, size, and cost of DOD’s leased assets. 
In our review of select RPAD data elements used to determine the costs, size, and status of DOD’s leased assets, we found that RPAD contained inaccurate data due to at least one violation of established business rules in 900 (15 percent) of the 5,965 fiscal year 2011 lease records and in 541 (10 percent) of the 5,538 fiscal year 2013 lease records. These rules are identified in DOD’s Real Property Inventory Data Element Dictionary. While we assessed all DOD RPAD lease records for fiscal year 2011 and fiscal year 2013, the majority of errors were in the Army’s lease records. For example, we found that for some lease records, the lease base annual dollar amount (hereafter referred to as “annual rent,” which is the amount DOD pays annually for the use of a real property asset, excluding additional costs such as utilities and parking, among other things) was greater than the lease annual cost amount (hereafter referred to as “annual rent plus other costs,” which is the annual rent plus any additional costs defined in the lease, such as utilities and parking, among other things). The business rule requires that the annual rent be less than the annual rent plus other costs. We found that 545 of the 5,965 lease records for fiscal year 2011 and 449 of the 5,538 lease records for fiscal year 2013 had cost data that did not meet this rule. We also found that cost data were missing from other lease records. Another DOD business rule states that for every leased asset there must be an annual rent and an annual rent plus other costs recorded and that the amount in each data element must be greater than or equal to zero; the rule does not allow the annual rent or annual rent plus other costs to be empty or null. We found that 250 lease records for fiscal year 2011 and a small number (9 records) for fiscal year 2013 had data missing for the annual rent. In addition, according to one DOD business rule, the status of the lease must not be recorded as “active” or “hold” when a termination date for the lease has also been recorded. However, we found that 139 of the 5,965 lease records for fiscal year 2011 and 113 of the 5,538 lease records for fiscal year 2013 showed a lease status of “active” or “hold” even though a termination date was recorded in the system. Therefore, the actual status of these fiscal year 2011 and 2013 leases in RPAD is uncertain. Cumulatively, the lack of lease data that meet the business rules for the lease status and cost data elements hampers the department’s ability to accurately report on the number of leased assets that are still being used (i.e., active and hold leases) and the overall cost of its leases. According to the 2013 Real Property Inventory (RPI) Reporting Guidance, RPAD will accept all submitted data regardless of the outcome of verification and validation, except in certain instances, such as substantially incomplete records that render identification of the asset highly improbable. The RPAD manager told us that errors or warnings identified in the verification and validation process are submitted to the military departments and WHS for the opportunity to review and correct, since their systems are considered to be the source of the data. 
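The business rules just described are simple enough to express as mechanical checks. The sketch below is a minimal illustration, assuming the hypothetical LeaseRecord fields from the earlier sketch; it is not DOD’s actual verification and validation logic.

```python
def rule_violations(rec: LeaseRecord) -> list[str]:
    """Return the business-rule violations for one lease record.

    Encodes the three rules described in the text, using the hypothetical
    field names from the LeaseRecord sketch above.
    """
    violations = []
    # Rule: both cost elements must be recorded (not null) and be >= 0.
    if rec.annual_rent is None or rec.annual_rent_plus_costs is None:
        violations.append("missing annual rent or annual rent plus other costs")
    elif rec.annual_rent < 0 or rec.annual_rent_plus_costs < 0:
        violations.append("negative cost amount")
    # Rule: the annual rent must be less than the annual rent plus other costs.
    elif rec.annual_rent >= rec.annual_rent_plus_costs:
        violations.append("annual rent not less than annual rent plus other costs")
    # Rule: a lease with a recorded termination date must not be active or hold.
    if rec.termination_date is not None and rec.status in ("active", "hold"):
        violations.append("status active/hold despite recorded termination date")
    return violations

def violation_rate(records: list[LeaseRecord]) -> float:
    """Share of records with at least one violation (e.g., 900/5,965 = 15 percent)."""
    return sum(1 for r in records if rule_violations(r)) / len(records)
```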
However, our analysis of the lease records in RPAD found that the errors and warnings identified by the verification and validation process are not always corrected by the military departments and WHS in a timely manner. For example, we found that 341 (63 percent) of the 545 fiscal year 2011 lease records that did not meet the established business rule requiring the annual rent to be less than the annual rent plus other costs had still not been corrected in the fiscal year 2013 RPAD records. In our discussions with U.S. Army Corps of Engineers officials, who manage the Army’s rental facilities database, the officials stated that they were aware of many of the data anomalies we found in their records and are taking steps to improve the Army’s real property data. These officials told us they began a data quality-management initiative in fiscal year 2011 to improve the quality of data entries in the Army’s rental facilities database and to capture lease records that should be accounted for in their system. One of the primary purposes of this initiative was to update records with missing data elements. Because Army officials are aware of these issues and are taking steps to improve the data quality, we are not making a recommendation on this issue at this time. Although the military departments and WHS maintain their own real property management systems and submit data on their leased assets to DOD, we found that the lease records in RPAD do not always include all of the data submitted. Our analysis of fiscal years 2011 and 2013 data submitted by the U.S. Army Corps of Engineers officials who manage the Army Rental Facilities Management Information System (hereafter referred to as the Army’s rental facilities database) shows that some lease records that the officials submitted to Army headquarters are not in RPAD. We compared the Army’s lease records in RPAD to the lease records maintained in the Army’s rental facilities database to determine the completeness of the Army’s data in RPAD and found records in the Army’s database that were not in RPAD for fiscal years 2011 and 2013. Army officials who manage the Army’s Headquarters Installation Information System (hereafter referred to as the Army’s headquarters reporting system)—the system that the Army uses to submit data to RPAD—provided us documentation showing that 237 (5.1 percent) of the 4,615 lease records from fiscal year 2011 and 197 (4.9 percent) of the 4,027 lease records from fiscal year 2013 were for assets that had been disposed of. According to the Army officials, these records for disposed assets should have been recorded in RPAD. The officials could not provide an explanation for why records submitted by the Army for these fiscal years were not in RPAD. According to the RPAD managers, these records may have been omitted because of errors in transmitting the data. Nevertheless, because these disposal records had been omitted from RPAD for fiscal years 2011 and 2013, DOD was not in a position to accurately report on the number of disposed leased assets in its Federal Real Property Profile submission. In addition, we found Army lease records for land parcels in the Army’s rental facilities database that were not included in RPAD. Specifically, we found that 703 (15.2 percent) of the 4,615 lease records from fiscal year 2011 and 370 (9.2 percent) of the 4,027 lease records from fiscal year 2013 were not included in RPAD. 
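A completeness comparison of this kind amounts to an anti-join between a component’s source system and RPAD. A minimal sketch, again assuming the hypothetical LeaseRecord structure and that instrument numbers uniquely identify leases:

```python
def missing_from_rpad(source_records: list[LeaseRecord],
                      rpad_records: list[LeaseRecord]) -> list[LeaseRecord]:
    """Return source-system records whose instrument numbers never reached RPAD."""
    rpad_ids = {r.instrument_number for r in rpad_records}
    return [r for r in source_records if r.instrument_number not in rpad_ids]
```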
Army officials told us that they did not include Army land parcel records in their submissions to RPAD because the accuracy of these records had not been verified. After we discussed these issues with Army officials, they noted that the Army has an ongoing effort to review land parcel data and update its records so that these data can be included in the Army’s future RPAD submissions. Because of this ongoing effort, we are not making a recommendation on this issue at this time. Based on the results of our statistical random sample of the fiscal year 2013 RPAD lease data, we found inconsistencies between RPAD and the military departments’ (almost entirely the Army’s) and WHS’s lease records for some data elements related to the cost and size of leased assets. We analyzed a statistical random sample of 132 lease records that had been submitted by the military departments and WHS to RPAD for fiscal year 2013. Based on a 95 percent threshold for determining whether the RPAD data matched the data the military departments and WHS provided to us as their RPAD submissions for fiscal year 2013, the results of our sample showed that all but one of the data elements we reviewed had over a 95 percent matching rate DOD-wide. Therefore, we concluded that the RPAD data were sufficiently reliable for the data elements related to identifying information about the leases, such as instrument number, real property asset type, or service reporting component. However, the annual rent plus other costs data element that is required to calculate the cost of DOD’s leases had a match rate of about 90 percent, which is significantly lower than our 95 percent threshold. In the sample data we reviewed, the Army, which had the largest number of RPAD lease records, is the only DOD component showing inaccuracies for this data element. We found that 11 (13 percent) of the 84 Army sample records had data for the annual rent plus other costs data element that were inconsistent with the source data contained in the Army’s real property systems for instances in which there are multiple assets on a single lease. Given the relatively low match rate for the annual rent plus other costs data element, we determined that we could not reliably report on the cost of DOD leases. In addition to our analysis of the sample RPAD records, we performed additional steps to determine why some inaccuracies were occurring in the data. We found that the Army is not following guidance for reporting data when multiple assets are included in a single lease. In addition, we found that the square footage for some leased space is overstated in RPAD. Details of these problems and the reasons they occurred are discussed in the following sections. Based on the results of our sample, we performed a more in-depth review of the Army’s RPAD records and found that the Army is not following DOD’s guidance for reporting the annual rent plus other costs for multiple assets on a single lease. The 2013 DOD Real Property Inventory (RPI) Reporting Guidance requires that the military departments and WHS provide a breakout of the annual rent plus other costs for each asset on the same lease. However, we found that the managers of the Army’s rental facilities database entered the total annual rent plus other costs for all assets on a single lease, rather than a breakout of the individual annual rent plus other costs for each asset, thereby overstating the annual rent plus other costs for each asset in its fiscal year 2013 submission to RPAD. 
For example, in fiscal year 2013, the annual rent plus other costs for a general administrative office space building was $90,885 and the annual rent plus other costs for a parking garage facility, which was included on the same lease, was $27,388. However, the Army’s real property systems showed the total annual rent plus other costs of $118,273 for each asset. The 2013 Real Property Inventory (RPI) Reporting Guidance further states that if the lease cost per asset is not computed prior to submission, the cost must be recalculated by the RPAD managers prior to putting data into their system. We found that the Army had 456 records (about 11 percent of its 4,210 records, representing 208 unique leases) for multiple assets associated with a single lease. Additionally, according to the managers of the Army’s rental facilities system, a cost per asset is captured under a different data element in their system; however, those costs still do not match the per-asset cost computed by DOD. Furthermore, the managers of the rental facilities database were unaware of this requirement. They stated that they have consistently been instructed by U.S. Army Corps of Engineers officials to enter the total cost of the lease for the annual rent plus other costs data element. The manager of the Army’s headquarters reporting system stated that the manager’s office was unaware of these occurrences prior to our discussions and that the DOD guidance is clear on how these costs should be calculated. The total cost of the Army’s leased assets will continue to be overstated in its RPAD submission until the Army consistently follows the DOD real property inventory reporting guidance for multiple assets associated with a single lease. In our review of the various data elements used to record information related to DOD leased assets, we found that DOD’s Real Property Information Model does not include a data element that captures the square footage associated with a given lease record. As a result, the 2013 DOD Real Property Inventory (RPI) Reporting Guidance does not address how the square footage should be documented for each lease. Rather, only the total square footage of a real property asset (which may include more than one lease) can be reported in RPAD. The lack of a data element capturing the square footage for each lease of space in a single building, and the absence of any related guidance, results in DOD not having visibility over the actual square footage associated with each lease. This is problematic for cases in which there is more than one DOD tenant in a building, because the lease record for each tenant shows the total square footage of the building rather than the space that each tenant actually occupies. As a result, the data from RPAD that identify the complete real property record for DOD leased assets (where there is more than one lease for that asset) overstate the square footage associated with each lease. For example, for a building in which one tenant occupies 27,975 square feet of space and another tenant occupies 2,246 square feet, RPAD shows 30,221 square feet for each lease rather than the space that each tenant occupies. As a result, the RPAD data would indicate that 60,442 square feet are being leased, rather than the actual 30,221 square feet. 
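Both findings are instances of the same aggregation error: recording a lease-level or building-level total on every member record, so that any sum across records multiplies the total. The sketch below illustrates this with the figures from the two examples above; the record layout is hypothetical.

```python
# Two assets share one lease. Recording the lease total ($118,273) on every
# asset record, instead of each asset's share, double-counts on aggregation.
assets_as_reported = [
    {"asset": "office building", "annual_rent_plus_costs": 118_273},
    {"asset": "parking garage",  "annual_rent_plus_costs": 118_273},
]
assets_broken_out = [
    {"asset": "office building", "annual_rent_plus_costs": 90_885},
    {"asset": "parking garage",  "annual_rent_plus_costs": 27_388},
]
print(sum(a["annual_rent_plus_costs"] for a in assets_as_reported))  # 236546 (overstated)
print(sum(a["annual_rent_plus_costs"] for a in assets_broken_out))   # 118273 (correct)

# The same error inflates square footage in a multi-tenant building when each
# lease record carries the building total (30,221 sq ft) instead of the
# tenant's own space (27,975 and 2,246 sq ft).
leases_as_recorded = [30_221, 30_221]
print(sum(leases_as_recorded))  # 60442 -- double the building's actual 30,221
```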
Based on our review of the WHS sample records that included some leases of buildings with multiple tenants, we found that the square footage for 4 (33 percent) of 12 sample lease records was overstated and that the correct amount could not be determined from the data included in RPAD. Additionally, our review of the entire WHS data for fiscal year 2013 shows that WHS was managing leased space in 88 buildings within the National Capital Region, and 18 buildings (about 20 percent) had multiple leases and showed the total square footage of the building rather than the individual square footage associated with a specific lease. While RPAD is the single authoritative source for all data on DOD’s real property inventory, RPAD data cannot be used to determine the amount of square footage associated with a given lease when there are multiple tenants occupying space in the same building. Instead, this can only be determined by the WHS officials who keep track of the square footage for each lease separately in their leased facility records under the data element identified as “WHS Re-bill.” Standards for Internal Control in the Federal Government emphasizes the need for federal agencies to establish plans to help ensure that goals and objectives can be met, including compliance with applicable laws and regulations. Still, DOD does not have a plan in place to address the omission of a data element that captures the square footage of each lease separately. Until DOD includes a data element to capture the actual square footage occupied by each tenant and revises the related reporting guidance, RPAD will continue to overstate the square footage for buildings with multiple tenants. DOD is currently implementing a presidential memorandum and a series of OMB memorandums instructing federal agencies to maintain or reduce both owned and leased space; however, DOD is not projecting any significant reductions in its leased space. Additionally, while DOD has vacated some costly leased space with the implementation of the 2005 BRAC recommendations, we found some instances in which DOD has subsequently reoccupied the previously vacated space, potentially offsetting any savings attributable to implementation of the relevant BRAC recommendations with new lease and security costs. Furthermore, our work shows that potential future force structure reductions may offer DOD and the military services an opportunity to further reduce reliance on leased space. While DOD has taken some actions to reduce its leased space, we found that DOD has projected minimal change in its overall lease activities. Specifically, in its October 2013 report, Revised Real Property Cost Savings and Innovation Plan for FY13-15 (commonly referred to in DOD as its Freeze the Footprint report), DOD stated that most of the military departments did not anticipate significant year-to-year changes in their current leasing activities. According to OMB Management Procedures Memorandum No. 2013-02, which clarified the implementation of OMB’s Freeze the Footprint policy, federal agencies were not to increase the total square footage of their domestic office and warehouse space beyond their fiscal year 2012 baseline numbers, which were calculated based on fiscal year 2012 Federal Real Property Profile data, fiscal year 2012 GSA occupancy agreements, and fiscal year 2012 agency leasing agreements (for each agency that has independent leasing authority). 
In its October 2013 Freeze the Footprint report, DOD stated that many long-standing leases already had built-in options for renewal, and that in a climate of stringent funding for the purchase or lease of new real property, and limited options for relocation, renewal was often the most cost-effective option. The following highlights some of the concluding comments from the services included in their Freeze the Footprint reports.

Army: The Army reported that in fiscal year 2013 it was not below the fiscal year 2012 Freeze the Footprint baseline threshold, but it expected to be below the threshold by the end of fiscal year 2015. The Army reported that its leased office and warehouse space (about 1.9 million square feet) represented 41 percent of the Army’s growth in its leased footprint for fiscal year 2013 and 81 percent of the projected offsets in fiscal years 2014 and 2015 (about 5.2 million square feet). The Army reported that it intended to achieve its goal of reducing office and warehouse space to fiscal year 2012 levels through a program focused on eliminating new lease growth, significantly reducing existing leases, and minimizing new construction of office and warehouse space.

Navy: The Navy reported that approximately 55 leases had an option to renew during fiscal years 2013 through 2015 and that when these leases expire, the requirement for each lease would have been revalidated by the occupying activity, with a goal of reducing the overall square footage where practicable. However, the Navy projected that it would acquire leased space under GSA occupancy agreements totaling approximately 144,000 square feet, at a cost of about $37 million, during fiscal year 2013. The Navy also reported that these additions represented no change in the square footage for its occupancy agreements, since the square footage associated with these leases was within the fiscal year 2012 baseline.

Air Force: The Air Force projected a decrease of about 112,000 square feet for five leased-space offices during fiscal years 2013 through 2015, for a total cost reduction of $1.3 million a year. The Air Force also reported that it expected several leases to be terminated early due to completion of construction projects and changes in mission requirements, but stated that the exact number of leases was not yet known at the time its report was issued.

WHS: The WHS report stated that there were no significant changes to WHS’s office and warehouse footprint from fiscal year 2013 to fiscal year 2015. WHS’s footprint consisted of space occupied by DOD in facilities in the National Capital Region that was leased by GSA or the U.S. Army Corps of Engineers and was accounted for within the footprints of these two organizations. WHS reported that it had 88 buildings in its inventory with a total of approximately 6.1 million square feet of leased space. WHS stated that its facility-management strategy focused on establishing a policy to monitor growth, reducing property and facility leases where possible, and reducing and consolidating underutilized buildings, among other things. According to WHS’s Freeze the Footprint report, all requests for new space were reviewed for compliance with leased space standards and, when possible, vacant space within the WHS footprint was used to satisfy requests for new space. If new requests for space could not be met within the current footprint, then WHS inquired as to the availability of space on military installations in the National Capital Region. 
DOD officials stated that they initially relocated DOD activities from leased space, particularly within the National Capital Region, to government-owned space (in some cases to newly constructed facilities), as outlined in the 31 recommendations approved by the 2005 BRAC Commission. However, we found that DOD subsequently reoccupied some of the same leased space after implementing the BRAC recommendations, thereby offsetting some of the reductions achieved through the BRAC process. DOD’s justification to the 2005 BRAC Commission for some of these recommendations was that leased space is more costly than government-owned space and that the existing leased facilities did not meet antiterrorism/force protection standards. In a March 2013 report, we stated that although DOD reported to the BRAC Commission that it would vacate about 12 million square feet of leased space, it did not track the extent to which it had vacated this space. During this review, we found 12 buildings managed by WHS within the National Capital Region that have 27 tenants in a total of approximately 1.1 million square feet of leased administrative office space previously vacated by other DOD organizations as a result of implementing the 2005 BRAC recommendations involving leased space. WHS officials cited a variety of reasons why this space was subsequently reoccupied. For example, according to these officials, some of the space vacated as a result of BRAC was subsequently reoccupied because of new space requirements for organizations such as the Office of the Special Inspector General for Iraq Reconstruction and the Joint Improvised Explosive Device Defeat Organization. Additionally, the WHS officials told us that the Defense Intelligence Agency, Defense Advanced Research Projects Agency, and the Defense Health Agency needed additional space and facilities due to changes in mission requirements and consolidation of satellite locations. Furthermore, these officials stated that the new risk-based Interagency Security Committee standards provide a more flexible antiterrorism force-protection standard, which allowed some of the leased space that was previously vacated to be reoccupied and meet the new standards. In March 2013, we reported that Army officials did not track leases that the Army had vacated as a result of BRAC because those leases were typically long term and could not be terminated at the time BRAC was being implemented. Rather, the Army simply filled such space with other service functions not included in BRAC. We also reported that some leased space may have been vacated as a result of ongoing DOD initiatives other than BRAC. Therefore, according to DOD, it was difficult to measure any net reduction in leased space or to identify what proportion of any reduction was directly due to BRAC actions. During the course of this review, we found that DOD has not assessed the effects of future force reductions on existing leased facilities and, as a result, DOD may miss opportunities to reduce its leased space. In December 2013, we reported that the Army planned to inactivate 10 Brigade Combat Teams on some of its installations, which likely would result in available administrative office space once these force structure reductions occur in fiscal year 2017. For this review, we conducted an analysis of the fiscal year 2013 RPAD lease records and found six Army leases for general administrative space that are within 50 miles of the installations with projected force structure reductions. 
Five of the leases are Army mission-support leases that are managed by the U.S. Army Corps of Engineers, and the remaining lease, near Fort Hood, Texas, is managed by the Army Reserve. According to the RPAD data for fiscal year 2013, the annual rent plus other costs for these six leases of general administrative space totaled approximately $4.1 million for about 249,000 square feet of leased space. See table 1 for details of our analysis on the leases that are in close proximity to the Army installations with projected unutilized or underutilized space. DOD Instruction 4165.70 directs the Secretaries of the military departments to maintain a program that monitors the use of real property to ensure that it is being used to the maximum extent possible, consistent with both peacetime and mobilization requirements. It is important that DOD plan ahead when it anticipates force reductions, in order to properly assess its future infrastructure requirements. However, when we shared our analysis with Army officials, they stated that they had not yet conducted such an assessment. According to U.S. Army Corps of Engineers officials, it would take approximately 2 years to conduct an assessment that would determine whether DOD-owned property, other federally owned property, or leased property is the best resource to accommodate the requirements of the DOD entity that needs space. Based on the analysis we shared, Army officials stated that they planned to take actions to review some of their leases due to the force reductions at Army installations with Brigade Combat Teams. Each of the leases we identified represents an opportunity for DOD to determine what effects future force reductions will have on unutilized or underutilized facilities on its installations that could potentially be made available to accommodate DOD tenants currently occupying leased space off the installation. Subsequently, in commenting on a draft of this report, DOD noted that the Army had reviewed the individual asset records for the six Army leases that we identified as being in close proximity to Army installations. Army officials, though, told us that further review would be required to determine whether relocation of the organizations in that leased space to Army-owned installations would be possible. Also, DOD noted in its comments that the Army had published a new execution order in March 2015 that requires commanders to plan and implement footprint reductions, giving priority to installations expected to see force reductions, and specifically emphasizing moving Army activities out of leased space, where fiscally prudent. Army officials told us that the implementation of this new execution order, once complete, should be expected to find and review assets such as those we found in our analysis. Although DOD Instruction 4165.70 directs the Secretaries of the military departments to maintain a program that monitors the use of real property to ensure that it is being used to the maximum extent possible, consistent with both peacetime and mobilization requirements, we found that officials do not share information on available unutilized or underutilized space that could potentially be used when there is a new lease requirement or when a lease is up for renewal. While each of the military departments told us that it has a process for requesting leased space, we found that officials managing leased space did not always have information on unutilized or underutilized space. 
We conducted an analysis of the 5,566 lease records in RPAD for fiscal year 2013 (the most recent year for which data were available) and found that there were 407 records for general administrative space. The total annual rent plus other costs for these leases was approximately $326 million for about 17.6 million square feet of leased space. According to military department officials, the process of requesting leased space takes several steps to ensure that leased space is used efficiently, including assessing whether DOD-owned or government-owned space is available within a 50-mile radius of a lease location. For example, Navy officials told us that the Navy pursues leasing space only when it has determined that suitable government-owned space does not exist. Additionally, Air Force and Army officials provided us with informational checklists that are to be used when acquiring or renewing leases, including surveying the availability of government-controlled or DOD-owned space within a 50-mile radius of the lease location. However, as we reported in June 2015, officials at the Office of the Secretary of Defense, service, and installation levels told us that actively pursuing potential tenants would be an administrative burden on the installations, especially if there is not a significant amount of available space on the installation. In our discussions with Army officials, we found that the U.S. Army Corps of Engineers officials who manage the Army’s rental facilities database had not been contacted by the installation officials with projected unutilized or underutilized space due to the inactivation of the Brigade Combat Teams on their installations. Army officials started their review of these specific leases only after we provided them the findings of our analysis. Furthermore, in June 2015, we reported that DOD officials at the Office of the Secretary of Defense, service, and installation levels said that they do not conduct outreach to communicate information regarding unutilized and underutilized space on military installations, in part because the installations primarily focus on supporting missions within DOD. Additionally, in our discussions with DOD officials about potential consolidation opportunities, they stated that there are many other factors to be considered before an actual decision can be made to move an activity from leased space onto an installation. For example, in some cases, the installation’s infrastructure would need to be evaluated to determine whether it could accommodate additional personnel or whether the installation’s mission would be affected if space is provided to non-mission-related tenants. DOD officials also stated that the unutilized or underutilized space on an installation would have to be assessed to determine whether the space is actually usable or is in poor condition, rendering it unusable. In addition, the costs to move out of existing leased space and reconfigure unutilized or underutilized space to meet new tenants’ needs must be determined, which in some cases could be costly. While we recognize that each of these factors is important when making a decision to vacate leased space in favor of DOD-owned space, our analysis demonstrates that some opportunities to reduce reliance on leased space may be forthcoming to the extent that force structure reductions or other indicators of potentially available space occur in the future. 
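The 50-mile screen described above can be reproduced from lease and installation coordinates. The sketch below is a minimal version, assuming coordinates are available for each record; it uses the haversine great-circle formula, and the tuple layout is hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def distance_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def leases_near_installations(leases, installations, radius_miles=50):
    """Pair each lease with the first installation within the given radius.

    Both inputs are sequences of (name, latitude, longitude) tuples; the
    50-mile default mirrors the screening radius described in the text.
    """
    flagged = []
    for lease_name, llat, llon in leases:
        for inst_name, ilat, ilon in installations:
            if distance_miles(llat, llon, ilat, ilon) <= radius_miles:
                flagged.append((lease_name, inst_name))
                break
    return flagged
```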
Without installation officials routinely sharing information on unutilized and underutilized space, DOD leasing agents will not know whether government-owned space is available, leaving DOD at risk of relying on more costly leased space when government-owned space may be available. DOD does not have oversight of information about the facility security assessments for all of its leased facilities acquired through GSA. Facility security assessments are conducted by the Pentagon Force Protection Agency and the Federal Protective Service, using standards set by the Interagency Security Committee. Interagency Security Committee standards state that facility security assessments are the process of evaluating credible threats, identifying vulnerabilities, and assessing the consequences of undesirable events. Interagency Security Committee standards require that a facility security assessment be conducted at least once every 5 years for security level I and II facilities and at least once every 3 years for security level III, IV, and V facilities. Our analysis of data on the scheduling and completion of facility security assessments by the Federal Protective Service identified late assessments and incomplete and inaccurate data. We found that the Pentagon Force Protection Agency had completed the facility security assessments for the leased facilities for which it is responsible between August 8, 2013, and January 31, 2014. Prior to December 2012, DOD leased facilities were assessed according to standards set in DOD’s Unified Facilities Criteria (UFC) 4-010-01. On December 7, 2012, the Deputy Secretary of Defense issued a memorandum incorporating the Interagency Security Committee standards into the Unified Facilities Criteria for all off-installation facility space leased by DOD and for space occupied by DOD tenants in buildings owned, operated, or leased by GSA. Current tenants as of December 7, 2012, were instructed to apply the Interagency Security Committee standards in accordance with existing or renewed lease agreements to the extent practicable. In August 2013, the Interagency Security Committee’s standards were updated in The Risk Management Process for Federal Facilities: An Interagency Security Committee Standard. The latest version of the Interagency Security Committee standards provides an integrated, single source of physical security countermeasures, or actions to take, such as installing vehicle barriers, to mitigate risks identified through a facility security assessment. According to Federal Protective Service officials, risk acceptance is an allowable outcome of the Interagency Security Committee’s risk management process for federal facilities standard if it is documented and the project documentation clearly reflects the reason why the necessary level of protection cannot be achieved. Our analysis of fiscal years 2011 and 2013 data from the database the Federal Protective Service used to track scheduled and completed facility security assessments for facilities leased by DOD through GSA identified three issues: (1) some assessments were not scheduled within the required time frames; (2) the number of assessments completed as required is unknown; and (3) dates for completed and next scheduled assessments were not always recorded. The Federal Protective Service’s schedule for fiscal year 2011 includes 500 leased facilities, and its fiscal year 2013 schedule includes 484 leased facilities. 
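The Interagency Security Committee intervals just described lend themselves to a simple due-date check. A minimal sketch, assuming each facility record carries its security level and the date of its last completed assessment (the field layout is hypothetical):

```python
from datetime import date
from typing import Optional

# Interagency Security Committee reassessment intervals, in years:
# at least every 5 years for levels I-II, at least every 3 years for levels III-V.
INTERVAL_YEARS = {1: 5, 2: 5, 3: 3, 4: 3, 5: 3}

def next_due(last_assessed: Optional[date], level: int) -> Optional[date]:
    """Date the next assessment is due, or None if no assessment is on record."""
    if last_assessed is None:
        return None
    years = INTERVAL_YEARS[level]
    try:
        return last_assessed.replace(year=last_assessed.year + years)
    except ValueError:  # last assessment fell on Feb 29
        return last_assessed.replace(year=last_assessed.year + years, day=28)

def assessment_status(level: int, last_assessed: Optional[date], today: date) -> str:
    """Classify a facility as unknown, overdue, or due on a future date."""
    due = next_due(last_assessed, level)
    if due is None:
        return "unknown: no assessment date recorded"
    return "overdue" if due < today else f"due {due.isoformat()}"
```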
However, we found that DOD does not have oversight of the Federal Protective Service’s facility security assessment data and related results for the leased facilities occupied by DOD tenants, for the following reasons:

Some assessments were not scheduled within required time frames. For fiscal years 2011 and 2013, we found a number of instances in which the Federal Protective Service did not complete facility security assessments within the time frames required by the Interagency Security Committee standards. The Federal Protective Service’s schedule of assessments included some facilities for which the facility security assessments had been scheduled beyond the 3-year or 5-year requirements. For example, there were 12 out of 500 facilities in fiscal year 2011 and the same 12 out of 484 facilities in fiscal year 2013 that showed scheduled next assessment dates beyond the required time frame. Federal Protective Service officials told us that these late assessment dates are most often the result of a backlog in completing facility security assessments and that they planned to complete past due assessments for level III and IV facilities by the end of fiscal year 2014 and level I and II facilities by the end of fiscal year 2019. In June 2015, these officials revised their timetable for completing the backlog of assessments and told us that they are now scheduled to complete the assessments for all facilities within the next 5 years, by 2020. Until assessments are completed, DOD tenants could be exposed to unknown risk, because current facility security assessments have not been conducted.

The number of facility security assessments completed as required by the Federal Protective Service is unknown. The Federal Protective Service’s database did not maintain complete and accurate records for scheduled and completed facility security assessments as of the time of our review; therefore, the exact number of assessments that had previously been completed as required, as well as whether the completed assessments were conducted in the required time frames, was unknown. The Federal Protective Service’s facility security database overwrote previously recorded assessment dates when new information was entered into the database. Given the lack of historical data, we calculated that there were 113 out of 500 facility security assessments that should have been scheduled and completed prior to the end of fiscal year 2011. However, the Federal Protective Service’s data show that 3 assessments were completed in fiscal year 2012, 109 assessments in fiscal year 2013, and 1 assessment in fiscal year 2014. Similarly, we found that 9 of 484 facility security assessments should have been scheduled and completed prior to the end of fiscal year 2013; however, 8 were completed in fiscal year 2014 and 1 in fiscal year 2015. In follow-up discussions with Federal Protective Service officials, they told us that their database has recently been updated to perform ad hoc queries that will now identify historical facility information for which no dates have been recorded for either the last completed assessment or the next scheduled assessment of a facility.

Dates for completed and next-scheduled assessments were not always recorded. We found instances in which no dates had been recorded for either the last completed assessment or the next scheduled assessment of a facility. Specifically, 133 of 500 facilities for fiscal year 2011 were missing these dates. 
Furthermore, these dates were still missing in fiscal year 2013 for the same 133 facilities. This means that there could have been at least a 3-year period during which there were no data recorded on the scheduling or completion of assessments for these 133 facilities. Federal Protective Service officials told us that these data were not recorded either because the assessments had not been completed as scheduled or because the information in the schedule had not been updated by the Federal Protective Service region responsible for completing the assessments. As a result, we were unable to determine whether the assessments had been scheduled or completed within the required time frames or whether the Federal Protective Service knows the date of the next scheduled assessment. Standards for Internal Control in the Federal Government states that internal controls should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. DOD has taken actions to establish policies and procedures, such as incorporating the Interagency Security Committee standards into its Unified Facilities Criteria, to help protect and secure personnel in its off-installation leased facilities. However, it has not provided oversight of these processes to ensure that they are followed as intended. According to Interagency Security Committee standards, the results of facility security assessments are briefed by the organization responsible for physical security of the facility to the Facility Security Committee, which consists of representatives of the DOD tenants and any other federal tenants in the facility, the security organization, and the owning or leasing department or agency. During the Facility Security Committee’s deliberation process for determining the security level for its facility, the tenants’ organizations may consult with their headquarters security representatives about the implementation of the countermeasures recommended to mitigate identified risks or with their headquarters financial offices about the cost of implementing the countermeasures. According to DOD Directive 5143.01, the Under Secretary of Defense (Intelligence) is responsible for developing physical security policy and guidance and overseeing the DOD physical security program, among other things. However, an Office of the Under Secretary of Defense (Intelligence) official stated that there is no single entity within DOD that is responsible for ensuring that all DOD leased facilities are properly secured, including ensuring that facility assessments are completed in the required time frames according to the prescribed standards. Furthermore, the Federal Protective Service is not required to report to levels of DOD higher than the tenant on whether it has completed required facility security assessments of DOD’s leased facilities. Moreover, DOD does not have and has not requested access to Federal Protective Service data on the scheduling and completion of the assessments. In response to a draft of this report, the Department of Homeland Security noted in its comments that officials from the Federal Protective Service stated that their agency’s database and software have been updated to consolidate all data into a single system for automated tracking and scheduling of assessments and queries. 
According to Federal Protective Service officials, at the time of our audit, their agency had not yet deployed its new system—Modified Infrastructure Survey Tool version 2.0—which includes the functionality needed to address the various issues we found. However, because DOD offices at levels higher than the tenant do not periodically request and obtain information from the Federal Protective Service, DOD is not in a position to know whether security assessments are scheduled and conducted as required. Without this oversight information, DOD does not have assurance that its leased facilities are secure. Additionally, better DOD oversight could prompt the Federal Protective Service to improve the facility security assessment data that it maintains. Our review of the Pentagon Force Protection Agency’s schedule for completing facility security assessments showed that the assessments for the limited number of DOD-leased facilities in the National Capital Region for which it has responsibility for security and law enforcement were completed within 1 year after DOD had adopted the Interagency Security Committee standards. Pentagon Force Protection Agency officials stated that all of these facilities are being assessed at least annually based on the Interagency Security Committee criteria for baseline facility security level determination. Furthermore, according to the officials, of the 16 DOD-leased facilities within the National Capital Region that are managed by WHS and for which they have security responsibility, the results of the facility security assessments show that only 2 of the facilities are compliant with the Interagency Security Committee standards without accepting additional risk. Additionally, these officials stated that the remaining 14 facilities meet multiple requirements identified in the Interagency Security Committee standards, but not all of the requirements. According to a Pentagon Force Protection Agency official, if the remaining 14 facilities are not able to meet all requirements, the appropriate official for the primary tenant in the facility or a selected designee must determine whether the reported risk for these facilities is acceptable. DOD Instruction 4165.70 requires the military departments and WHS to keep accurate records of the DOD real property, including leased facilities, under their jurisdiction, custody, and control to help ensure efficient management of real property assets. However, some of the lease data in RPAD, drawn from military department and WHS records, are incomplete and inaccurate. As a result, the RPAD data cannot be fully relied upon to determine the total number, size, and costs of DOD’s leased assets. Without complete and accurate data on its leases, DOD’s oversight of its leased assets and the quality of its external reports are weakened. Additionally, while the military departments have reported that they have initiatives under way to reduce leased space, greater opportunities are possible because planned force structure reductions will lead to increasing vacancies in on-installation facilities. We found examples of tenants leasing off-installation space near installations identified for force structure reductions by fiscal year 2017. 
If DOD does not require that the military departments evaluate these likely-to-be-vacated facilities in conjunction with leases being renewed—or before entering into new leases—DOD will not have reasonable assurance that it will be able to fully identify opportunities to vacate more costly leased space when appropriate and to move into DOD-owned space. Furthermore, although Pentagon Force Protection Agency officials have stated that they have obtained facility security assessment documentation from the Federal Protective Service for leased facilities within the National Capital Region, DOD is not requesting information on the status of the facility security assessments completed by the Federal Protective Service for all DOD-leased locations. Without periodically obtaining information on whether facility security assessments for its leased facilities are being completed in accordance with required standards and without access to the results of the assessments, DOD is not in a position to ensure that its tenants in leased space are secure. To improve DOD’s ability to oversee its inventory of leased real property, we recommend that the Secretary of Defense take the following two actions aimed at improving the accuracy and completeness of data in RPAD:

Direct the Secretary of the Army to enforce DOD’s Real Property Inventory (RPI) Reporting Guidance, which states that for multiple assets associated with a single lease, the military departments and WHS must provide a breakout of the annual rent plus other costs for each asset on the same lease, to avoid overstating costs associated with such leases.

Direct the Assistant Secretary of Defense (Energy, Installations and Environment) to modify the office’s Real Property Information Model to include a data element to capture the square footage for each lease of space in a single building and also make a corresponding change to its Real Property Inventory (RPI) Reporting Guidance to require that the square footage for each individual lease be reported when multiple leases exist for a single building, to avoid overstating the total square footage assigned to each lease in RPAD.

To help reduce facility costs and reliance on leased space, we recommend that the Secretary of Defense direct the Secretaries of the military departments to require that their departments look for opportunities to relocate DOD organizations in leased space to installations that may have underutilized space due to force structure reductions or other indicators of potentially available space, where such relocation is cost-effective and does not interfere with the installation’s ongoing military mission. To improve DOD’s ability to ensure that its leased facilities are secure, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Intelligence) to request reports from the Federal Protective Service for all leased facilities on a periodic basis, as determined necessary for oversight. At a minimum, the Under Secretary should request the results of the assessments; the date on which the last assessment was completed for each facility and the date on which the next assessment is scheduled; and information on whether these dates meet the time frames established by Interagency Security Committee standards. We provided a draft of this report to DOD, GSA, and the Department of Homeland Security for review and comment. DOD’s written comments are reprinted in appendix II of this report. 
In its comments, DOD concurred with our first recommendation that the Secretary of the Army enforce DOD's Real Property Inventory (RPI) Reporting Guidance to break out the annual rent plus other costs for each asset on the same lease to avoid overstating the costs associated with such leases. DOD also concurred with our fourth recommendation that DOD improve its ability to ensure that its leased facilities are secure and stated that it would collaborate with the Federal Protective Service to obtain the listing of the leased facilities the agency supports, monitor and provide oversight of the scheduling of the assessments, and review the results of the assessments. DOD did not concur with our remaining two recommendations, which are discussed in detail below. DOD also provided technical comments that have been incorporated, as appropriate. The Department of Homeland Security provided technical comments, which have been incorporated, as appropriate; GSA had no comments in response to our draft report.

In its written comments, DOD stated that we did not give full consideration to the department's efforts regarding leased space and that our draft leaves a misperception of the department's commitment to efficient real property management. DOD also stated that our report mischaracterizes how DOD used the BRAC process to achieve space reductions and overstates the extent to which DOD reoccupied leased space vacated through the 2005 BRAC process. We disagree and believe that our report neither leaves such a misperception nor presents such a mischaracterization. As our report states, DOD proposed to the BRAC Commission 31 recommendations that involved relocating certain DOD activities from leased space to government-owned space and justified some of these recommendations by stating that leased space historically has higher overall costs than government-owned space and generally does not meet anti-terrorism force-protection standards. Our report also states that DOD reoccupied about 1.1 million square feet of leased space previously vacated through BRAC 2005, which DOD's letter confirmed.

DOD stated in its comments that prudent management includes consideration of available leased space to accommodate changing demands and new missions when adequate DOD-owned space is not available. We recognize that leasing is appropriate at times. Nevertheless, in its March 2015 testimony before Congress, DOD asserted that it had about 24 percent excess capacity prior to BRAC 2005 and that the department subsequently disposed of about 3 percent of this excess capacity through BRAC. In its comments, DOD did not indicate the extent to which the department reviewed remaining excess capacity, if at all, for use by the organizations that subsequently reoccupied the 1.1 million square feet of leased space. DOD also disagreed with our conclusion that reoccupying space that had been vacated through BRAC in order to achieve cost savings (i.e., vacating leased space and occupying less costly government-owned space) offsets savings attributed to these BRAC recommendations. While DOD's letter referenced new missions and lease consolidation opportunities as the rationale for reoccupying vacated space, it did not explain how incurring these new lease costs was not, in fact, an offset to savings attributable to BRAC, given that the space had been vacated for the purpose of, among other things, saving money.
This is particularly significant because DOD expended appropriated funds through BRAC to construct or lease facilities to accommodate the DOD organizations that vacated the leased space, only to later expend additional appropriated funds to reoccupy some of the same leased space previously vacated.

DOD did not concur with our second recommendation that the Assistant Secretary of Defense (Energy, Installations and Environment) modify the office's Real Property Information Model to include a new data element to capture the total square footage assigned to each individual lease when multiple leases exist for a single building and make a corresponding change to its guidance to avoid overstating the total square footage assigned to each lease in RPAD. In its comments, DOD agreed that the issue we identified does exist for multiple leases that are assigned to the same building (leases managed by WHS in the National Capital Region) and that the inclusion of an additional data element may well serve as an indicator to help resolve this issue. However, DOD believes that the underlying cause of the overstated total square footage for these records in RPAD is a data aggregation issue and has chosen an alternative approach to address the issue we raised. Specifically, DOD stated that the department is in the final stages of developing a platform for transmitting data into RPAD that will include the capability to capture square footage for multiple leases in a single building. DOD stated that this new Data Analytics and Integration Support platform will serve as the near-real-time data warehouse of the DOD real property inventory; will perform the data collection, verification, and validation of the real property inventory data submitted by each military department and WHS; and is expected to be fully deployed by fiscal year 2017. If implemented effectively, we believe DOD's planned new approach for transmitting data into RPAD should meet the intent of our recommendation, which is to accurately capture the square footage assigned to each lease when multiple leases exist for a single building, thereby improving the accuracy and completeness of the data in RPAD. In the meantime, until DOD's new interface is fully implemented, DOD will not have reasonable assurance that the total square footage for multiple leases in a single building is accurate rather than overstated, as is currently the case.

DOD also did not concur with our third recommendation that the military departments look for opportunities to relocate DOD organizations in leased space onto installations that may have underutilized space. In its comments, DOD stated that its existing policy requires the effective and efficient use of DOD real property and that current initiatives undertaken by each of the military departments and WHS reflect adherence to this policy. DOD issued its new Real Property Efficiency Plan in October 2015 that highlights the department's progress in this area. DOD further stated that—given that each of the military departments and WHS have implemented initiatives to reduce their dependence on leased space, especially where existing DOD assets may be available—an additional directive from the Secretary of Defense is not required.
In our report, we note that DOD guidance directs the Secretaries of the military departments to maintain a program that monitors the use of real property to ensure that it is being used to the maximum extent possible consistent with both peacetime and mobilization requirements. While we understand that DOD sees no requirement for additional action, we found during the course of our review that DOD—existing guidance notwithstanding—had not yet assessed the likely effects of future force reductions on its use of leased space. Therefore, we believe this recommendation remains valid.

In commenting on a draft of this report, DOD stated that the Army had issued new guidance to ensure optimal allocation of the best available facilities to support Army missions, citing a new execution order published by the Army during the time of our review. While this order was published in March 2015, it was not referenced by the Army or DOD until we received the comments at the end of December 2015. Upon reviewing the order, which the Army provided at our request, we learned that the Army intends to execute it in two phases: Phase One, during which the Army will accurately document existing facility utilization and update its real property master plan; and Phase Two, during which the Army will implement the updated plan by consolidating its footprint to the minimum appropriate space and disposing of, or identifying for disposal, unneeded leases and facilities. According to Army officials we spoke with in January 2016, the Army extended the completion date for Phase One from the end of June 2015 to the end of August 2015 to allow time to fully account for some planned changes in Army size and force structure. These officials also told us that, while they have identified some leases for elimination, they would not have inventory data for this effort until the end of 2016. According to the execution order, Phase Two is to be completed by October 2021, and status reports will be submitted to Army headquarters annually depicting the progress installations have made in meeting facility footprint reduction timelines and goals. Although we have not had the opportunity to review the implementation of the execution order in detail because Army data have not been available, it appears to us that, if the process laid out in the execution order is effectively and fully implemented, it may meet the intent of our recommendation. However, until the Army effort is completed—given that the Army holds the majority of DOD leases—we remain concerned that DOD may be at risk of missing opportunities to reduce its leased space at a DOD-wide level.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense (Energy, Installations and Environment); the Under Secretary of Defense (Intelligence); the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Director of Washington Headquarters Services (WHS); the Administrator, General Services Administration (GSA); the Director, Office of Management and Budget (OMB); and the Secretary of Homeland Security.

If you or your staff have any questions about this report, please contact me at (202) 512-4523 or LeporeB@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
To determine the extent to which the Department of Defense (DOD) has accurate and complete data on the number, size, and costs of its leases, we obtained and analyzed selected data elements from the Real Property Assets Database (RPAD) for fiscal years 2011 and 2013, as well as data from the military departments' and Washington Headquarters Services' (WHS) real property inventory systems. We assessed the reliability of DOD's real property lease data by (1) interviewing agency officials knowledgeable about the data, (2) performing electronic testing for obvious errors in accuracy and completeness, (3) reviewing documentation for the various real property data systems covered in this review and taking steps to corroborate certain records and data elements against the source data provided by the military departments and WHS, and (4) selecting a statistical random sample of the most current data and analyzing it by comparing records in the sample to the data submitted by the military departments and WHS. We chose to analyze DOD's leases in fiscal year 2011 because that was the final year of a 6-year period to implement the 2005 base closure and realignment (BRAC) recommendations for DOD activities occupying leased space, and the lease records in fiscal year 2013 because those were the most recent data available at the time we initiated this review. We also obtained real property data from the General Services Administration (GSA) for the leased assets it manages on behalf of DOD for fiscal years 2011 and 2013. However, GSA's real property management system does not retain historical information; as a result, GSA researched old files and compiled the information that was available in an attempt to satisfy our data requests. Because of the lack of historical information for fiscal years 2011 and 2013, there were a number of inconsistencies in the data provided. For example, lease numbers were not available for fiscal year 2011, and the lease start and expiration dates were not available for fiscal year 2013. The lack of available data for fiscal years 2011 and 2013 prevented us from conducting a year-to-year comparison of the GSA data and from producing any meaningful results about the number, size, and cost of the leased space DOD occupies through GSA occupancy agreements. We were able to use the data to help us determine the process DOD uses to track its leases and to make some comparisons of the GSA data with data contained in RPAD to determine, among other things, what type of data are collected for management purposes and whether duplicate records existed.

The scope of this review included records for real property that DOD acquires from private organizations, GSA, and state organizations. While we obtained data on transactions in which a military service or defense agency acquired real property from another federal agency, military service, or defense agency—or from another organization within the same military service or defense agency—we excluded those records because they are typically permits, licenses, or use agreements rather than leases, with minimal costs, if any. We used standard statistical software to link the grant, asset, site, and disposal tables included in RPAD so that we could analyze the complete records for each DOD-leased asset and determine whether these data were sufficiently reliable to report on the number, size, and cost of DOD-leased assets.
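To make the table-linking step concrete, the following is a minimal sketch of how such a join could be performed with the pandas library in Python. The file names, table structure, and key columns (asset_id, site_id) are hypothetical placeholders, since RPAD's actual schema is not reproduced in this report.

```python
# Minimal sketch of linking RPAD extract tables so each leased asset has a
# complete record. File, table, and key names are hypothetical placeholders.
import pandas as pd

# Hypothetical RPAD extracts, one CSV per table.
grants = pd.read_csv("rpad_grant.csv")        # lease/grant-level records
assets = pd.read_csv("rpad_asset.csv")        # one row per real property asset
sites = pd.read_csv("rpad_site.csv")          # site (location) attributes
disposals = pd.read_csv("rpad_disposal.csv")  # disposal actions, if any

# Link asset -> grant (lease) -> site, keeping every leased asset even when
# no matching disposal record exists (left joins for optional tables).
leased = (
    assets
    .merge(grants, on="asset_id", how="inner")      # keep only leased assets
    .merge(sites, on="site_id", how="left")
    .merge(disposals, on="asset_id", how="left", suffixes=("", "_disposal"))
)

# A complete linked record is needed before size and cost totals can be trusted.
print(f"{len(leased):,} linked lease records")
print(leased[["asset_id", "site_id", "annual_rent"]].head())
```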
We performed three types of analyses to determine the accuracy and completeness of 12 specific data elements in RPAD that are used to provide identifying information on DOD's leased assets, as well as the type, size, date, status, and cost of the leased assets. First, we performed electronic testing for obvious errors in accuracy and completeness of the 5,965 lease records for fiscal year 2011 and the 5,538 lease records for fiscal year 2013. Specifically, we conducted a series of tests to determine whether each data element contained data, as required, and whether the data satisfied certain business rules established by the managers of RPAD, such as requiring that the annual rent amount be equal to or less than the annual rent plus other costs amount and that the value of these cost data elements be greater than or equal to zero. Second, because the Army had the largest number of lease records in RPAD—4,695 (approximately 79 percent) of 5,965 records in fiscal year 2011 and 4,210 (approximately 76 percent) of the 5,538 records in fiscal year 2013—we compared the records representing the Army's lease assets to the lease records maintained in the Army Rental Facilities Management Information system to determine whether the data for each of the data elements submitted by the Army matched the data in RPAD for those same 2 years. Third, based on a universe of 5,566 lease records from RPAD for fiscal year 2013 (the most current data available when we initiated our review), we took a random sample of 132 records and compared this statistical random sample to source data to examine the extent of data accuracy. Specifically, we compared the RPAD data for the specific data elements identified earlier to the data the military departments and WHS submitted from their real property databases to the Office of the Assistant Secretary of Defense (Energy, Installations and Environment), Business Enterprise Integration Directorate, to determine whether there were any discrepancies, errors, or omissions in RPAD. The results of our analysis are generalizable across all lease records for fiscal year 2013, with a 95 percent chance that the difference between the estimated and the true population percentage is within 10 percentage points.

We also gathered and analyzed documentation, such as DOD directives and instructions and military department regulations reflecting DOD's and the military departments' management of real property and how DOD uses the data in RPAD. We also interviewed officials from the following real property management offices and agencies: Office of the Deputy Assistant Secretary of the Army (Installations, Housing, and Partnerships); Office of the Assistant Secretary of Defense (Energy, Installations and Environment), Business Enterprise Integration Directorate; WHS (Facilities Services Directorate), Space Portfolio Management Division; Department of the Army, Chief of Staff for Installation Management (Operations Directorate), Operations Division; U.S. Army Corps of Engineers (Real Estate); Department of the Navy, Naval Facilities Engineering Command; Department of the Air Force, Director of Civil Engineers (Installation Operations Branch); and GSA to obtain information about the management of their real property management systems.
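As an illustration of the two kinds of checks described above, the following sketch automates the record-level business-rule tests and computes the sampling precision implied by a 132-record sample drawn from a universe of 5,566 records. The column names are hypothetical placeholders; the business rules themselves (annual rent no greater than annual rent plus other costs, and nonnegative cost values) are those stated above.

```python
# Sketch of the electronic tests and the sampling precision check described
# above. Column names (annual_rent, annual_rent_plus_other) are hypothetical.
import math
import pandas as pd

def test_business_rules(df: pd.DataFrame) -> pd.DataFrame:
    """Flag records that violate the RPAD business rules cited in the text."""
    flags = pd.DataFrame(index=df.index)
    # Required fields must contain data.
    flags["missing_rent"] = df["annual_rent"].isna()
    # Annual rent must be equal to or less than annual rent plus other costs.
    flags["rent_exceeds_total"] = df["annual_rent"] > df["annual_rent_plus_other"]
    # Cost values must be greater than or equal to zero.
    flags["negative_cost"] = (df["annual_rent"] < 0) | (df["annual_rent_plus_other"] < 0)
    return df[flags.any(axis=1)]  # records failing at least one rule

def margin_of_error(n: int, pop: int, z: float = 1.96, p: float = 0.5) -> float:
    """95% margin of error for a proportion, with finite-population correction."""
    return z * math.sqrt(p * (1 - p) / n) * math.sqrt((pop - n) / (pop - 1))

print(f"{margin_of_error(132, 5566):.1%}")  # prints about 8.4%
```

With these sample and universe sizes, the computed margin of error is roughly 8.4 percent, consistent with the within-10-percentage-points precision reported above.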
Based on the results of our analysis, we determined that data from RPAD were neither accurate nor complete and, as such, were not sufficiently reliable for determining the number of leases and the size and cost of all of DOD's leased assets for fiscal years 2011 and 2013.

To determine the extent to which DOD has taken actions to reduce its reliance on leased space since 2011, we obtained and reviewed the 2005 BRAC Commission report to identify recommendations for realigning and closing some DOD leased facilities that had to be implemented by September 15, 2011. We also reviewed DOD's 2013 Freeze the Footprint report, submitted to the Office of Management and Budget (OMB), to identify DOD's planned initiatives to reduce its domestic office and warehouse space (including both leased and owned space). We also interviewed DOD and Army real property officials to discuss their planned initiatives for leased space in order to meet the Freeze the Footprint requirements. Specifically, we obtained documentation and interviewed officials from the Office of the Assistant Secretary of Defense (Energy, Installations and Environment), Business Enterprise Integration Directorate; Department of the Army, Chief of Staff for Installation Management (Operations Directorate), Operations Division; and U.S. Army Corps of Engineers (Real Estate). We focused our work for this objective on the Army because, in its role as executive agent for joint service programs and some defense agencies, as well as for its own mission needs, it occupies and manages the majority of the leases in DOD's real property inventory reporting system. We also gathered documentation and interviewed officials within the WHS (Facilities Services Directorate), Space Portfolio Management Division to obtain examples of DOD reoccupying leased space previously vacated in the National Capital Region as a result of the 2005 BRAC recommendations. The National Capital Region was the primary focus of the 2005 BRAC recommendations that involved moving DOD activities from leased space to government-owned space.

We obtained DOD reports on the number and location of its leases and interviewed officials who maintain the related lease data. We also reviewed our December 2013 report that identifies DOD installations that may have available administrative office space based on the inactivations of 10 Army Brigade Combat Teams that are expected to begin in fiscal year 2017. We then analyzed some of the lease data from RPAD for fiscal year 2013 to determine whether any opportunities exist for DOD to reduce its leased space in geographic locations that are in close proximity to the DOD installations that may have unutilized or underutilized facilities based on these planned force structure reductions. We chose fiscal year 2013 data because those were the most recent data available at the time we initiated this review. We did not use DOD's current facility utilization data because we reported in September 2014 that utilization data continued to be incomplete and inaccurate, although the data had improved since we previously reported on them in 2011.

To determine the extent to which DOD has oversight of the status of security assessments of leased facilities obtained through GSA, we collected information regarding facility security assessments for DOD-leased space for fiscal years 2011 and 2013. We selected these years to match our review of DOD lease records.
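A simplified version of such a proximity analysis can be sketched in code. The example below uses the haversine formula to flag leases within a chosen radius of installations expected to gain vacant space; the identifiers, coordinates, and the 50-mile threshold are illustrative assumptions, not values taken from our analysis.

```python
# Sketch of a lease-to-installation proximity check using the haversine
# formula. All names, coordinates, and the distance threshold are illustrative.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical inputs: leased locations and installations with coordinates.
leases = [
    {"lease_id": "L-001", "lat": 31.14, "lon": -85.70},
    {"lease_id": "L-002", "lat": 38.90, "lon": -77.04},
]
installations = [
    {"name": "Installation A (planned reductions)", "lat": 31.35, "lon": -85.72},
]

RADIUS_MILES = 50  # illustrative threshold for "close proximity"

for lease in leases:
    for inst in installations:
        d = haversine_miles(lease["lat"], lease["lon"], inst["lat"], inst["lon"])
        if d <= RADIUS_MILES:
            print(f'{lease["lease_id"]} is {d:.0f} miles from {inst["name"]}')
```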
We reviewed and analyzed the Federal Protective Service tracking schedule for the facility security assessments it performs for DOD's leased facilities and found that the information contained numerous data issues, such as assessments scheduled and conducted outside of required time frames and missing assessment dates. We examined the facility assessment schedules for 500 leased facilities for fiscal year 2011 and 484 leased facilities for fiscal year 2013. Many of these facilities had multiple leases or occupancy agreements. The data we reviewed included 1,043 lease numbers or occupancy agreements for fiscal year 2011 and 1,051 lease numbers or occupancy agreements for fiscal year 2013. We held several meetings with Federal Protective Service officials and gathered follow-up documentation regarding inconsistencies and missing facility security assessment data. We also reviewed prior GAO reports on this issue.

We reviewed DOD directives and instructions, as well as other related documentation, such as the Interagency Security Committee standards, to determine which DOD organization has oversight responsibility for facility security assessments and physical security, as well as the scope of this responsibility. We interviewed officials from the Office of the Assistant Secretary of Defense (Energy, Installations and Environment), Facility Investment Management; Office of the Assistant Secretary of Defense (Energy, Installations and Environment), Business Enterprise Integration Directorate; Office of the Under Secretary of Defense (Intelligence); U.S. Army Corps of Engineers (Real Estate); U.S. Army Corps of Engineers (Operational Protection Division), Directorate of Contingency Operations; and the Pentagon Force Protection Agency to determine whether DOD receives information on the status of facility security assessments. In addition, we interviewed officials from WHS (Facilities Services Directorate), Space Portfolio Management Division; Office of the Deputy Assistant Secretary of the Army (Installations, Housing and Partnerships); Department of the Army, Assistant Chief of Staff for Installation Management (Operations Directorate), Operations Division; Department of the Navy, Naval Facilities Engineering Command; Department of the Air Force, Director of Civil Engineers (Installation Operations Branch); the Federal Protective Service; and the Pentagon Force Protection Agency to obtain general information on the status of their facilities meeting security requirements.

We examined the reliability of the facility security assessment data obtained from the Federal Protective Service by determining whether (1) facility security levels had been determined for each facility and (2) facility security assessments had been completed or planned within the required periods. Because of the incomplete and inaccurate data, we determined that the Federal Protective Service facility security assessment tracking data were not sufficiently reliable for determining whether the facility security assessments had been completed as required. Additionally, in August 2012, we reported that the Federal Protective Service's facility security assessment data for fiscal year 2011 contained a number of missing and incorrect values that made the data unreliable for determining the extent of the agency's backlog of assessments that needed to be completed.
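The reliability checks described above can be illustrated with a short sketch that flags missing assessment dates and assessment intervals outside the 3-to-5-year cycle required by the Interagency Security Committee standards. The record layout here is a hypothetical simplification of the tracking schedule, not its actual format.

```python
# Sketch of the two reliability checks described above: missing assessment
# dates and assessment intervals outside the required 3-to-5-year window.
# The record layout is a hypothetical simplification of the tracking schedule.
from datetime import date

MIN_YEARS, MAX_YEARS = 3, 5  # required assessment cycle per ISC standards

records = [
    {"facility": "FAC-01", "last": date(2008, 6, 1), "next": date(2012, 5, 15)},
    {"facility": "FAC-02", "last": None,             "next": date(2013, 1, 10)},
    {"facility": "FAC-03", "last": date(2005, 3, 1), "next": date(2013, 9, 30)},
]

for rec in records:
    if rec["last"] is None or rec["next"] is None:
        print(f'{rec["facility"]}: missing assessment date')
        continue
    years = (rec["next"] - rec["last"]).days / 365.25
    if not MIN_YEARS <= years <= MAX_YEARS:
        print(f'{rec["facility"]}: {years:.1f}-year interval is outside 3-5 years')
```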
We also reviewed the Pentagon Force Protection Agency's tracking schedule for the facility security assessments it performs for the DOD-leased facilities for which it is responsible to determine whether required assessments had been completed. We found that the Pentagon Force Protection Agency's data were sufficiently reliable for our purposes.

We conducted this performance audit from July 2013 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Harold Reich, Assistant Director (retired); Maria Storts, Assistant Director; Ronald Bergman; Virginia Chanley; Tammy Conquest; Linda Keefer; Joanne Landesman; Jacqueline McColl; Dae Park; David Sausville; James Ungvarsky (retired); and Michael Willems made key contributions to this report.

Underutilized Facilities: DOD and GSA Information Sharing May Enhance Opportunities to Use Space at Military Installations. GAO-15-346. Washington, D.C.: June 18, 2015.

High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.

Defense Infrastructure: DOD Needs to Improve Its Efforts to Identify Unutilized and Underutilized Facilities. GAO-14-538. Washington, D.C.: September 8, 2014.

Federal Protective Service: Protecting Federal Facilities Remains a Challenge. GAO-14-623T. Washington, D.C.: May 21, 2014.

Federal Facility Security: Additional Actions Needed to Help Agencies Comply with Risk Assessment Methodology Standards. GAO-14-86. Washington, D.C.: March 5, 2014.

Homeland Security: Federal Protective Service Continues to Face Challenges with Contract Guards and Risk Assessments at Federal Facilities. GAO-14-235T. Washington, D.C.: December 17, 2013.

Defense Infrastructure: Army Brigade Combat Team Inactivations Informed by Analyses, but Actions Needed to Improve Stationing Process. GAO-14-76. Washington, D.C.: December 11, 2013.

Homeland Security: Challenges Associated with Federal Protective Service's Contract Guards and Risk Assessments at Federal Facilities. GAO-14-128T. Washington, D.C.: October 30, 2013.

Federal Real Property: Greater Transparency and Strategic Focus Needed for High-Value GSA Leases. GAO-13-744. Washington, D.C.: September 19, 2013.

Federal Protective Service: Challenges with Oversight of Contract Guard Program Still Exist, and Additional Management Controls Are Needed. GAO-13-694. Washington, D.C.: September 17, 2013.

Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.

Facility Security: Greater Outreach by DHS on Standards and Management Practices Could Benefit Federal Agencies. GAO-13-222. Washington, D.C.: January 24, 2013.

Federal Protective Service: Actions Needed to Assess Risk and Better Manage Contract Guards at Federal Facilities. GAO-12-739. Washington, D.C.: August 10, 2012.

Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.

Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Federal Real Property: Overreliance on Leasing Contributed to High-Risk Designation. GAO-11-879T. Washington, D.C.: August 4, 2011.

Federal Protective Service: Actions Needed to Resolve Delays and Inadequate Oversight Issues with FPS's Risk Assessment and Management Program. GAO-11-705R. Washington, D.C.: July 15, 2011.

Federal Protective Service: Progress Made but Improved Schedule and Cost Estimate Needed to Complete Transition. GAO-11-554. Washington, D.C.: July 15, 2011.

Homeland Security: Protecting Federal Facilities Remains a Challenge for the Department of Homeland Security's Federal Protective Service. GAO-11-813T. Washington, D.C.: July 13, 2011.

Federal Facility Security: Staffing Approaches Used by Selected Agencies. GAO-11-601. Washington, D.C.: June 30, 2011.

Federal Real Property: Progress Made on Planning and Data, but Unneeded Owned and Leased Facilities Remain. GAO-11-520T. Washington, D.C.: April 6, 2011.

Building Security: New Federal Standards Hold Promise, But Could Be Strengthened to Better Protect Leased Space. GAO-10-873. Washington, D.C.: September 22, 2010.
Overreliance on costly leasing is one of the major reasons that federal real property management remains on GAO's high-risk list. GAO's prior work has shown that owning buildings often costs less than operating leases, especially where there are long-term needs for space. House Report 113-102 included a provision that GAO review DOD's management of leased space. For fiscal years 2011 and 2013, this report evaluates the extent to which DOD (1) has accurate and complete data on the number, size, and costs of its leases; (2) has taken actions to reduce its reliance on leased space; and (3) has oversight of the status of security assessments conducted for leased facilities contracted through GSA. GAO analyzed lease data from the real property systems kept by DOD, the military departments, WHS, and GSA, as well as facility security assessment data from FPS and the Pentagon Force Protection Agency; reviewed guidance; and interviewed cognizant officials.

While the Department of Defense (DOD) is taking some steps to address data issues, it cannot fully determine the number, size, and costs of its leases for real property because its Real Property Assets Database (RPAD), the real property inventory system that DOD uses to report on its leased assets, contains some inaccurate and incomplete data. GAO found that about 15 percent of the RPAD lease records for fiscal year 2011 and 10 percent of the records for fiscal year 2013 were inaccurate. Most of these errors were in the lease records for the Army (the manager of about 80 percent of the leased asset records in RPAD); however, the Army is aware of these issues and is taking steps to correct future data. GAO also found that RPAD did not include about 5 percent of the Army's lease records for fiscal years 2011 and 2013. GAO analyzed a random sample of the fiscal year 2013 RPAD data and found that the data element required to calculate costs was unreliable for 11 of the 84 Army sample records. GAO found that the Army was not following DOD's guidance for reporting costs on leases that have multiple assets associated with them. Furthermore, GAO found that RPAD does not contain a data element for the square footage of leases in which multiple tenants occupy space in the same building, as is the case for some Washington Headquarters Services (WHS) leases.

DOD is implementing a presidential memorandum and a series of Office of Management and Budget memorandums to maintain or reduce owned and leased space but has projected minimal change to its leasing activities. DOD has had opportunities in the past to reduce its leased space; however, it reoccupied about 1.1 million square feet of leased space previously vacated when it implemented the 2005 Base Closure and Realignment recommendations. In some cases, DOD tenants occupy leased space close to large installations that may have had unused facilities. Potential force structure reductions may offer an opportunity to further reduce DOD's reliance on leased space in the future, if DOD actively identifies suitable underutilized facilities on its installations.

DOD does not have complete oversight of the security assessments conducted for its leased facilities acquired through the General Services Administration (GSA). Facility security assessments, which are required to be conducted every 3 to 5 years, are conducted by the Pentagon Force Protection Agency and the Federal Protective Service (FPS) using established standards.
The Pentagon Force Protection Agency had completed the required assessments for the facilities for which it is responsible between August 8, 2013, and January 31, 2014. However, DOD has not requested information on whether FPS, the primary agency for protecting federal facilities, has completed its facility security assessments as required for all DOD-leased locations. GAO analyzed the FPS assessment data for fiscal years 2011 and 2013 and identified several issues: (1) some assessments were not scheduled within required time frames, (2) data on previously recorded assessment dates were overwritten when updated, and (3) dates for completed and next-scheduled assessments were not always recorded. While FPS is not required to inform DOD about assessment schedules, without periodically requesting information on whether facility security assessments have been conducted, DOD does not have the information it needs to ensure that its leased facilities are secure.

GAO recommends four actions to improve DOD's management of its leased facilities. DOD concurred with GAO's recommendations to (1) enforce its guidance to provide annual rent plus other costs for each asset on the same lease and (2) request information from FPS on facility security assessments. DOD did not concur with GAO's recommendations to capture total square footage by lease or to look for opportunities to move DOD organizations in leased space onto installations. As discussed in the report, GAO believes that these recommendations remain valid.
The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. Because of the concern about attacks from individuals and groups, protecting the computer systems that support critical operations and infrastructures has never been more important. These concerns are well founded for a number of reasons, such as escalating threats of computer security incidents, the ease of obtaining and using hacking tools, the steady advances in the sophistication and effectiveness of attack technology, and the emergence of new and more destructive attacks. According to experts from government and industry, during the first quarter of 2005, more than 600 new Internet security vulnerabilities were discovered, thereby placing organizations that use the Internet at risk. Computer-supported federal operations are likewise at risk. IBM recently reported that there were over 54 million attacks against government computers from January 2005 to June 2005. Without proper safeguards, there is a risk that individuals and groups with malicious intent may intrude into inadequately protected systems and use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. How well federal agencies are addressing these risks is a topic of increasing interest in both Congress and the executive branch, as evidenced by recent congressional hearings intended to strengthen information security.

DLA is an agency of the Department of Defense (DOD). As DOD's supply chain manager, DLA provides food, fuel, medical supplies, clothing, spare parts for weapon systems, and construction materials to sustain DOD military operations and combat readiness. To fulfill its mission, DLA relies extensively on interconnected computer systems to perform various functions, such as managing about 5.2 million supply items and processing about 54,000 requisition actions per day for goods and services. DLA employs about 22,575 civilian and military workers at about 500 field locations in 48 states and 28 countries.

In accordance with DOD policy, DLA has developed an agencywide information security program to provide information security for its operations and assets. The DLA Director is responsible for ensuring the security of the information and information systems that support the agency's operations. In carrying out this responsibility, the Director has delegated to DLA's chief information officer the authority to ensure that the agency complies with FISMA and with other information security requirements. DLA's chief information officer has also designated a senior agency official to serve as Director of Information Assurance—the agency's senior information security officer—and to head the central security management group, commonly referred to as the information assurance program office.
This group carries out specific responsibilities, including the following:

documenting and maintaining an agencywide security framework to assess the agency's security posture, identify vulnerabilities, and allocate resources;

establishing and managing security awareness and specialized professional security training for employees who have significant security responsibilities;

ensuring that all systems are certified and accredited in accordance with both federal and DOD processes;

providing personnel at headquarters and the DLA locations with guidance on, and assistance in preparing, system security authorization agreements—single-source data packages for all information pertaining to the certification and accreditation of a system—in order to, among other things, guide actions, document decisions, specify information security requirements, and maintain operational systems security; and

ensuring that field site personnel accurately assess their locations' security postures.

Information assurance managers at the various DLA locations report directly to the information technology chief at their location and are expected to assist the Director of Information Assurance by coordinating security activities, establishing and maintaining a repository for documenting and reporting system certification and accreditation activities, maintaining and updating system security authorization agreements, and notifying the designated approving authority of any changes that could affect system security. Information assurance officers at the various DLA locations assist the information assurance managers through the following activities:

ensuring that appropriate information security controls are implemented for an information system,

notifying the information assurance manager when system changes that might affect certification and accreditation are requested or planned, and

conducting annual validation testing of systems.

Figure 1 below shows a simplified overview of DLA's information assurance management and reporting structure.

Congress enacted FISMA to strengthen the security of information and information systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to protect the information and information systems that support the operations and assets of the agency—including those that are provided or managed by another agency, a contractor, or some other source. The program must include the following:

periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, modification, disruption, or destruction of information or information systems;

training of personnel who have significant responsibility for information security, and security awareness training to educate personnel—including contractors and other users of the agency's information systems—about information security risks and their responsibilities to comply with the agency's security policies and procedures;

periodic testing and evaluation of the effectiveness of the agency's information security policies, procedures, and practices; and

a process for planning, implementing, evaluating, and documenting plans of action and milestones for actions taken to address any deficiencies in the agency's information security policies, procedures, and practices.
To support agencies in conducting their information security programs, the National Institute of Standards and Technology (NIST) publishes mandatory standards and guidelines for providing information security for all agency operations, assets, and information systems other than national security systems. The standards and guidelines include, at a minimum, (1) standards to be used by all agencies to categorize their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels, (2) guidelines recommending the types of information and information systems to be included in each category, and (3) minimum information security requirements for information and information systems in each category. In addition, DOD has developed and published various directives and instructions that comprise an information assurance policy framework intended to meet the information security requirements specified in FISMA and NIST standards and publications. This framework applies to all of DOD's systems—both national and non-national security systems—including those operated by or on behalf of DLA. DLA's policies and procedures for implementing its agency information security program are contained in DLA's One Book policy and agency handbook.

DLA has implemented important elements of an information security program—including establishing a central security management group, appointing a senior information security officer to manage the program, and providing security awareness training for its employees. However, DLA has not yet fully implemented other essential elements of an effective information security program to protect the confidentiality, integrity, and availability of the information and information systems that support its mission. Collectively, these weaknesses place DLA's information and information systems at risk. Key underlying reasons for the weaknesses pertain to DLA's management and oversight of its security program.

In carrying out their information security responsibilities, both the Chief Information Officer and the Director of Information Assurance have taken several steps to implement important elements of DLA's security program, including the following:

ensuring that employees and contractors receive information security awareness training;

developing information security procedures and guidance for use in implementing the requirements of the program;

deploying information system security engineers to assist headquarters and field staff in implementing security policies and procedures consistently across the agency;

developing an agencywide management tool—known as the Comprehensive Information Assurance Knowledgebase—to centrally manage and report on key performance measures, such as the status of security training, plans of action and milestones, and certification and accreditation activities; and

developing and implementing various automated information technology initiatives to assist information assurance managers and information assurance officers in improving DLA's security posture.

Weaknesses in information security practices and controls place DLA's information and information systems at risk. Our analysis of information security activities for selected systems at 10 DLA locations showed that the agency had not fully or consistently implemented important elements of its program.
Specifically:

risks that could result from the unauthorized access, use, disclosure, or destruction of information or information systems were not consistently assessed;

employees who had significant information security responsibilities did not receive sufficient training, and security training plans were sometimes not adequately completed;

testing and evaluation of the effectiveness of management and operational security controls were not adequately performed; and

plans of action and milestones for mitigating known information security deficiencies were not sufficiently completed.

Table 1 indicates with an "X" weaknesses in the implementation of key information security practices and controls for selected systems.

FISMA requires that agencies' information security programs include periodic assessments of the risk and magnitude of the harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the agency. Identifying and assessing information security risks are essential steps in determining what controls are required and what level of resources should be expended on these controls. NIST has developed guidance to help organizations protect their information and information systems by using security controls that are selected through a risk-based process. DOD established a set of baseline security controls for each of three mission assurance categories that determine what security controls should be implemented. These controls are adjusted based on an assessment of risk, including specific threat information, vulnerabilities, and countermeasures relative to the system. Vulnerabilities that are not mitigated are referred to as residual risk. The designated approving authority considers the residual risks in determining whether to accredit a system. Such risk assessments, as part of the requirement to reaccredit systems, are to be performed prior to a significant change in processing, but at least every 3 years.

Although DLA categorized its systems in accordance with DOD guidance, we found that it did not consistently assess the residual risk for 9 of the 10 systems we selected for review. For example:

nine did not use the established baseline security controls to assess the residual risk;

three did not clearly identify the threats, vulnerabilities, and countermeasures;

two did not state how the threats and vulnerabilities would affect the mission that the system supports;

one only referenced the security controls as the threat or vulnerability; and

one had not been updated since 2001.

Unless DLA performs risk assessments consistently and assesses them against the appropriate set of controls, it will not have assurance that it has implemented appropriate controls that cost-effectively reduce risk to an acceptable level.

FISMA mandates that all federal employees and contractors who are involved in the use of agency information systems be provided training in information security awareness and that agency heads ensure that employees with significant information security responsibilities are provided sufficient training with respect to such responsibilities. An effective information security program should promote awareness and provide training so that employees who use computer resources in their day-to-day operations understand security risks and their roles in implementing related policies and controls to mitigate those risks.
DOD guidance requires that individuals receive the necessary training to ensure that they are capable of conducting their security duties and that each component establish and implement information assurance training and professional certification programs. DOD also requires that security awareness and training plans be documented for each system as part of the certification and accreditation process. These security training plans specify that training for individuals associated with a system's operation be appropriate to an individual's level and area of responsibility. This training should provide information about the security policy governing the information being processed, as well as potential threats and the nature of the appropriate countermeasures.

DLA provided annual security awareness training for employees and contractors for whom it was appropriate. However, employees with significant information security responsibilities did not receive sufficient training. For example, of the 17 information assurance managers and information assurance officers at the locations where we reviewed selected systems:

eleven reported having received some form of training, although eight of them had received training on only one of their security responsibilities—developing security documentation;

six reported never having received any security training; and

two reported having received no security training for 2 or more years.

Further, security awareness and training plans for 3 of the 10 systems we reviewed were either not system-specific or lacked detailed information. For example, training plans for 2 systems did not specify, for each level and area of responsibility, the system operations appropriate for a given user. The third lacked detailed information about training objectives, goals, and requirements. A key reason for these weaknesses is that the individual responsible for monitoring the agency's security training program had other significant responsibilities and was not able to effectively ensure that employees received the required training. As a result, DLA does not have assurance that employees with significant security responsibilities are equipped with the knowledge and skills they need to understand information security risks and their roles and responsibilities in implementing related policies and controls to mitigate those risks.

Another key element that FISMA requires of an information security program is periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency based on risk, but not less than annually. FISMA requires that such testing and evaluation activities include the management, operational, and technical controls of every system identified in an agency's information systems inventory. DOD policy requires periodic reviews of operational systems at predefined intervals. Such reviews include testing and evaluating the technical implementation of the security design of a system and ascertaining that security software, hardware, and firmware features affecting the confidentiality, integrity, availability, and accountability of information and information systems have been implemented and documented. The results of testing and evaluation of security controls are to be used in the decision-making process for authorizing systems to operate.
Further, DLA's One Book policy requires information assurance managers and information assurance officers to use security test and evaluation as the method for validating the adequacy of management, operational, and technical controls at least annually. DLA did not annually test and evaluate the management and operational security controls of its systems. According to DLA officials, vulnerability scans and information assurance program reviews collectively satisfied the annual requirement for testing and evaluating management, operational, and technical controls. However, the combination of the vulnerability scans and the program reviews did not satisfy the annual requirement. Although DLA generally assessed technical controls by conducting annual vulnerability scans on its systems, it did not annually assess the management and operational controls for each of its systems. While the program reviews are intended to satisfy the requirement for testing and evaluating the management and operational controls, DLA does not conduct these reviews annually on every system. For example, less than half of DLA's locations and systems have undergone program reviews in the last 3 years, as shown in table 2. Until DLA tests and evaluates management and operational controls annually, critical systems may contain vulnerabilities that have not been identified or appropriately considered in decisions to authorize systems to operate. Moreover, DLA may not be able to ensure the confidentiality, integrity, and availability of the sensitive data that its systems process, store, and transmit.

FISMA requires each agency to develop a process for planning, implementing, evaluating, and documenting remedial action plans to address any deficiencies in its information security policies, procedures, and practices. Developing effective corrective action plans is key to ensuring that remedial action is taken to address significant deficiencies. The Office of Management and Budget (OMB) requires agency chief information officers to document and report all agency information assurance weaknesses and remedial actions in plans of action and milestones. The plans should list each security weakness and the tasks, resources, milestones, and scheduled completion dates for remedying each weakness.

The plans of action and milestones associated with the 10 systems we selected for review were incomplete. For example:

none of the plans clearly documented and reported the nature of the security weaknesses;

seven did not identify the start or completion dates for addressing the weaknesses;

none specified the resources necessary to complete the action plan;

nine did not list the risk associated with the security weakness;

six were not based on the correct set of baseline security controls; and

one plan contained steps to identify vulnerabilities rather than the steps required to remedy vulnerabilities.

A key reason for these weaknesses is that information assurance managers and information assurance officers reported that they did not understand the requirements for reporting system security vulnerabilities because DLA had not provided specific criteria or instructions on what—or how—to document and report plans of action and milestones for system deficiencies. Having reliable plans of action and milestones is not only vital to ensuring that DLA's information and information systems receive adequate protection, but it is also important for accurately managing and reporting progress on them.
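To illustrate the completeness checks implied by the OMB requirement described above, the following is a minimal sketch that validates a plan of action and milestones entry against the required fields (weakness, tasks, resources, milestones, and start/completion dates). The record structure is a hypothetical simplification, not DLA's or OMB's actual reporting format.

```python
# Sketch of a completeness check for plan of action and milestones (POA&M)
# entries against the OMB-required fields cited above. The record structure
# is a hypothetical simplification, not an actual reporting format.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class PoamEntry:
    weakness: str                              # nature of the security weakness
    tasks: list[str] = field(default_factory=list)
    milestones: list[str] = field(default_factory=list)
    resources: str = ""                        # resources needed to remediate
    start_date: Optional[date] = None
    scheduled_completion: Optional[date] = None

def missing_fields(entry: PoamEntry) -> list[str]:
    """Return the required fields that this entry fails to document."""
    gaps = []
    if not entry.weakness.strip():
        gaps.append("weakness")
    if not entry.tasks:
        gaps.append("tasks")
    if not entry.milestones:
        gaps.append("milestones")
    if not entry.resources.strip():
        gaps.append("resources")
    if entry.start_date is None or entry.scheduled_completion is None:
        gaps.append("start/completion dates")
    return gaps

# Example: an entry like those found in the review, with dates and
# resources omitted.
entry = PoamEntry(weakness="Password policy not enforced", tasks=["Enable lockout"])
print(missing_fields(entry))  # ['milestones', 'resources', 'start/completion dates']
```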
Without reliable plans, DLA does not have assurance that all information security weaknesses have been reported and that corrective actions will be taken to appropriately address the weaknesses.

OMB requires that agencies establish a certification and accreditation process for formally authorizing systems to operate. Certification and accreditation is the requirement that agency management officials formally authorize their information systems to process information, thereby accepting the risk associated with their operation. This management authorization (accreditation) is to be supported by a formal technical evaluation (certification) of the management, operational, and technical controls established in an information system's security plan. The accreditation decision results in (1) a full authorization to operate, (2) an interim authorization to operate, or (3) no authorization to operate. DOD instructions and DLA's agency handbook provide guidance on the certification and accreditation process.

According to DLA officials, the agency has implemented the practice of issuing authorization to operate decisions on a "time-limited" basis—regardless of whether certification tasks have been completed—because of concern that OMB might not support funding for systems that received an interim authorization to operate decision. However, OMB, DOD, and DLA policies and procedures do not allow for the practice of issuing "time-limited" authorizations; they require interim authorization to operate decisions when all certification tasks have not been completed. To illustrate, the designated approving authority for one of the 10 systems we reviewed changed the system's status from an interim authorization to operate to a "time-limited" authorization to operate even though several action items for such authorization had not been met and this type of authorization is not allowed under current guidance. For example, information assurance personnel had not updated the security plan or completed a risk assessment. Unless DLA complies with the requirements for issuing accreditation decisions, it will not have assurance that its information systems are operating as intended and meeting security requirements.

In addition, DLA did not effectively implement controls to verify the completion of certification tasks. As designed and implemented, DLA divides the responsibilities of the system certifier among the information assurance personnel at its locations and a central review team within the information assurance program office. To help ensure the quality of the certification process, the central review team established a DLA quality review checklist to verify the certification tasks performed by the information assurance personnel. However, under the current process, the central review team did not interview information assurance personnel at the locations or conduct on-site visits to verify that certification tasks were performed. Instead, the central review team relied on documentation submitted to it by the information assurance personnel who performed the certification tasks. However, this documentation was not always adequate. For example, the checklist contained questions about whether physical access controls were adequate to protect all facilities housing user workstations, but for the central review team to verify such a task, an on-site inspection, a diagram of the facility, or other documentation demonstrating the physical access controls in place would have been needed.
As a result, the certification process may not provide the authorizing official with objective or sufficient information that is necessary to make credible, risk-based decisions on whether to place an information system into operation. Key underlying reasons for the weaknesses in DLA’s information security program were that the responsibilities of information assurance managers and information assurance officers were not consistently understood or communicated across the 10 DLA locations we reviewed and that the information assurance program office did not maintain the accuracy and completeness of the data contained in the agency’s primary reporting tool for managing and overseeing the agencywide information security program. The information assurance program office—as the agency’s central security management group for managing and overseeing the security program—is responsible for providing overall security policy and guidance, along with oversight to ensure information assurance managers and information assurance officers adequately perform or execute required information security activities such as those related to performing risk assessments, satisfying security training requirements, testing and evaluating the effectiveness of controls, documenting and reporting plans of action and milestones, and certifying and accrediting systems. Although the information assurance program office developed information security policies and procedures, it did not maintain them to ensure information assurance personnel had current and sufficient documentation to carry out their responsibilities. For example, of the 17 information assurance managers and information assurance officers at the 10 locations we reviewed, nine were unaware of the requirement for security training specific to an employee’s information security responsibilities, and three were unaware of the requirement to perform annual self-assessments, while 10 others had varying understandings of how this requirement was to be met. In addition, data on key information security activities contained in the primary reporting tool were inaccurate or incomplete. For example, for a year, the information assurance program office had not entered weaknesses that had been identified during information assurance program reviews into the primary reporting tool; information assurance personnel at DLA locations used personal discretion in determining whether or not to report a system deficiency to the information assurance program office for entry and compilation in the primary reporting tool, thereby potentially underreporting agency-level plans of action and milestones; and information assurance personnel at both headquarters and the DLA locations did not consistently enter key performance metrics related to plans of action and milestones and security training, thereby potentially underreporting important information used to gauge the health of the security program. A key reason for these weaknesses was that DLA had no documentation on the system design or its intended use and, therefore, had no instructional material to guide users. As a result, the data in the primary reporting tool were not reliable or effective for reporting metrics to DOD and OMB for FISMA evaluation reporting. Moreover, because key information had not been entered into the database, the agency did not readily have all the information about the deficiencies of its program and, therefore, did not have complete information about its security posture. 
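As an illustration of the kind of reconciliation the reporting tool lacked, the sketch below uses a record layout we invented for the example (it is not the tool's real schema) to check that every weakness found in a program review was entered into the tool and that key metric fields were populated before FISMA reporting.

```python
# Minimal sketch, under assumed record shapes, of the reconciliation DLA's primary
# reporting tool lacked: every weakness found in a program review should appear in
# the tool, and key performance fields should be populated before FISMA reporting.

def reconciliation_report(review_findings: list[dict], tool_records: list[dict]) -> dict:
    tool_ids = {r["weakness_id"] for r in tool_records}
    # Program-review findings never entered into the tool.
    missing = [f["weakness_id"] for f in review_findings if f["weakness_id"] not in tool_ids]
    # Tool records with empty performance-metric fields.
    incomplete = [
        r["weakness_id"]
        for r in tool_records
        if not r.get("poam_status") or not r.get("training_metric")
    ]
    return {"not_entered": missing, "metrics_missing": incomplete}

# Example: one program-review finding was never entered, one record lacks metrics.
findings = [{"weakness_id": "W-1"}, {"weakness_id": "W-2"}]
records = [{"weakness_id": "W-1", "poam_status": "open", "training_metric": None}]
print(reconciliation_report(findings, records))
# -> {'not_entered': ['W-2'], 'metrics_missing': ['W-1']}
```

A routine check of this form would surface both of the gaps we found: findings left out of the tool entirely and entries whose metric fields were never filled in.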
DLA senior officials recognize that the agency’s primary reporting tool has not been effectively implemented and used to manage and oversee the security program. Therefore, the agency developed an ad hoc process of data calls to the DLA locations to aggregate the performance data. However, continuation of this ad hoc process will likely not provide the reliable data needed to consistently satisfy FISMA reporting requirements. Until agencywide policies and procedures are sufficiently documented and implemented and are consistently understood and used across the agency, DLA’s ability to protect the information and information systems that support its mission will be limited. DLA has not fully implemented its agencywide information security program, thereby jeopardizing the confidentiality, integrity, and availability of the information and information systems that it relies on to accomplish its mission. Specifically, DLA has not consistently implemented important information security practices and controls, including consistently assessing risk; ensuring that training is provided for employees who have significant responsibilities for information security, and that security training plans are updated and maintained; annually testing and evaluating the effectiveness of management, operational, and technical controls; documenting and reporting complete plans of action and milestones; implementing a fully effective certification and accreditation process; and maintaining the accuracy and completeness of the data contained in the primary reporting tool. Although DLA’s efforts in developing and implementing its information security program have merit, it has not taken all the necessary steps to ensure the security of the information and information systems that support its operations. Ensuring that the agency implements key information security practices and controls requires top management support and leadership and consistent and effective management oversight and monitoring. Until DLA takes steps to address these weaknesses and fully implements its information security program, it will have limited assurance that agency operations and assets are adequately protected. 
To assist DLA in fully implementing its information security program, we are making recommendations to the Secretary of Defense to direct the DLA Director to implement key information security practices and controls by: consistently assessing risks that could result from the unauthorized access, use, disclosure, or destruction of information and information systems; ensuring that training is provided for employees who have significant responsibilities for information security; ensuring that security training plans are updated and maintained; ensuring appropriate monitoring of the agency’s security training program; ensuring that annual security test and evaluation activities include management, operational, and technical controls of every information system in DLA’s inventory; documenting and reporting complete plans of action and milestones; establishing specific guidance or instructions to information assurance managers and information assurance officers on what—or how—to document and report plans of action and milestones for system deficiencies; discontinuing the practice of issuing “time-limited” authorization to operate accreditation decisions when certification tasks have not been completed; ensuring that the DLA central review team verifies that certification tasks have been completed; and maintaining the accuracy and completeness of the data contained in the agency’s primary reporting tool for recording, tracking, and reporting performance metrics on information security practices and controls. In providing written comments on a draft of this report (reprinted in app. II), the Deputy Under Secretary of Defense (Business Transformation) concurred with most of our recommendations and described ongoing and planned efforts to address them. Specifically, he stated that DLA has taken several actions to fully implement an effective agencywide information security program, including the forthcoming publication of a DOD manual that will provide detailed guidance on training for employees who have significant information security responsibilities. He also stated that DLA is issuing an interim mandatory guide, soon to be released, to assist users in documenting and preparing plans of action and milestones and to reinforce policy requirements for making accreditation decisions. The Deputy Under Secretary of Defense disagreed with our draft recommendation to ensure the testing and evaluation of the effectiveness of security controls for all systems annually. He stated that this recommendation would require that all information assurance controls for all systems be tested and evaluated every year, which essentially amounts to annual recertification. The department further stated that this level of testing and evaluation is neither practical nor cost-effective and that the combination of DLA’s assessments, tests, and reviews allows it to ensure the compliance of its controls in accordance with DOD Instruction 8500.2. The intent of our draft recommendation was not to require that all information assurance controls for all systems be tested and evaluated annually. Rather, the intent of our draft recommendation, consistent with FISMA requirements, was to ensure that DLA’s annual security test and evaluation activities include management, operational, and technical controls of every information system in its inventory. As stated in our report, while DLA generally annually assessed the technical controls of every system in its inventory, it did not annually test and evaluate management and operational controls of those systems. 
We agree that testing and evaluating all controls for every system annually may not be cost-effective. However, unless DLA’s annual testing and evaluation activities include management and operational controls, as well as the technical controls of its systems, it may not be able to ensure the confidentiality, integrity, and availability of its information and information systems. Accordingly, we have clarified our recommendation to state that the Secretary of Defense direct the DLA Director to ensure that annual security test and evaluation activities include management, operational, and technical controls of every information system in DLA’s inventory. The Deputy Under Secretary of Defense also disagreed with our draft recommendation to document procedures for performing certification responsibilities that include specific responsibilities related to using the checklist. He stated that the Secretary of Defense provided sufficient direction to agency directors on the certification and accreditation process through DOD Instruction 5200.40, and that additional guidelines on the certification and accreditation process are provided in DOD 8510.1-M. He further stated that DOD 8510.1-M contains a “minimum activities checklist” that all DOD Components are expected to follow when conducting certifications and that DLA’s information assurance One Book policy includes roles and responsibilities for performing security certification and accreditation. Our draft recommendation refers to the DLA quality review checklist used by the agency’s central review team to verify completion of certification tasks, not to the DOD “minimum activities checklist” described in DOD 8510.1-M. Unless certification tasks performed by information assurance personnel at the various DLA locations have been verified, the authorizing official may not have objective or sufficient information that is necessary to make credible, risk-based decisions on whether to place an information system into operation. Accordingly, we have clarified our recommendation to state that the Secretary of Defense direct the DLA Director to ensure that the DLA central review team verifies that certification tasks have been completed. The Deputy Under Secretary of Defense also disagreed with our draft recommendation to update and maintain the agency’s primary reporting tool for recording, tracking, and reporting performance metrics on information security practices and controls. He stated that the primary reporting tool was developed and maintained by DLA and that responsibility for updating and sustaining the tool was transferred to an internal application development team for continued maintenance and support. He also stated that DLA initiated implementation of enterprise-standard DOD solutions that will replace the functionality currently provided by the agency reporting tool and that sustainment of the tool would not be cost-effective or efficient. The intent of our draft recommendation was to update and maintain the accuracy and completeness of data entered into DLA’s primary reporting tool, not the application programs. While DLA has several initiatives underway at various stages of development and implementation that are intended to introduce new functionality or replace some of the existing functionality in the agency reporting tool, none of these initiatives have been fully implemented throughout the agency. 
If DLA continues to use a tool for managing and overseeing its information assurance program, the fundamental practice of having accurate and complete data—whether in the current tool or in a future tool—is important to ensure the data are reliable for reporting performance metrics on key information security practices and controls to DOD and OMB for FISMA evaluation reporting. Accordingly, we have clarified our recommendation to state that the Secretary of Defense direct the DLA Director to maintain the accuracy and completeness of the data contained in the agency’s primary reporting tool for recording, tracking, and reporting performance metrics on information security practices and controls. We are sending copies of this report to the Deputy Under Secretary of Defense (Business Transformation); Assistant Secretary of Defense, Networks and Information Integration; DLA Director; officials within DLA’s Information Operations and Information Assurance office; and the Acting DOD Inspector General. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine whether the Defense Logistics Agency (DLA) had implemented an effective agencywide information security program, we reviewed the Department of Defense (DOD) and agencywide information security policies, directives, instructions, and handbooks. We also evaluated DLA’s agencywide tool—the Comprehensive Information Assurance Knowledgebase—for aggregating the agency’s performance data on information security activities that are required by the Federal Information Security Management Act of 2002 (FISMA), such as the number and percentage of risk assessments performed, employees with significant information security responsibilities that received training to perform their duties, and weaknesses for which the agency had plans of action and milestones. To gain insight into DLA’s certification and accreditation process, we reviewed the agency’s methods and practices for identifying vulnerabilities and risks and the process for certifying systems and making accreditation decisions. We assessed whether DLA’s information security program was consistent with relevant DOD policies and procedures, as well as with the requirements of FISMA, applicable Office of Management and Budget (OMB) policies, and National Institute of Standards and Technology (NIST) guidance. We also assessed whether selected information security plans and documents related to risk assessments, testing and evaluation, and plans of action and milestones were current and complete. To accomplish this, we non-randomly selected 10 sensitive but unclassified systems. The 10 systems came from 10 different DLA locations and included 3 systems, 4 sites, and 3 types. We selected these systems to maximize variety in criticality and geographic locations. We also conducted telephone interviews with 17 information assurance managers and information assurance officers from the 10 locations in order to gain insight into their understanding of FISMA requirements, relevant OMB policies, NIST guidance, and agencywide and DOD policies and procedures. 
We performed our review at DLA Headquarters, located at Fort Belvoir, Virginia; the DLA Supply Center, located in Columbus, Ohio; and DLA’s Business Processing Center, located in Denver, Colorado, from September 2004 to July 2005, in accordance with generally accepted government auditing standards. In addition to the individual named above, Jenniffer Wilson, Assistant Director, Barbara Collier, Joanne Fiorino, Sharon Kittrell, Frank Maguire, John Ortiz, and Chuck Roney made key contributions to this report.
The Defense Logistics Agency's (DLA) mission is, in part, to provide food, fuel, medical supplies, clothing, spare parts for weapon systems, and construction materials to sustain military operations and combat readiness. To protect the information and information systems that support its mission, it is critical that DLA implement an effective information security program. GAO was asked to review the efficiency and effectiveness of DLA's operations, including its information security program. In response, GAO determined whether the agency had implemented an effective information security program. Although DLA has made progress in implementing important elements of its information security program, including establishing a central security management group and appointing a senior information security officer to manage the program, it has not yet fully implemented other essential elements. For example, the agency did not consistently assess risks for its information systems; sufficiently train employees who have significant information security responsibilities or adequately complete training plans; annually test and evaluate the effectiveness of management and operational security controls; or sufficiently complete plans of action and milestones for mitigating known information security deficiencies. In addition, DLA has not implemented a fully effective certification and accreditation process for authorizing the operation of its information systems. Key reasons for these weaknesses are that responsibilities of information security employees were not consistently understood or communicated and DLA has not adequately maintained the accuracy and completeness of data contained in its primary reporting tool for overseeing the agency's performance in implementing key information security activities and controls. Until the agency addresses these weaknesses and fully implements an effective agency-wide information security program, it may not be able to protect the confidentiality, integrity, and availability of its information and information systems, and it may not have complete and accurate performance data for key information security practices and controls.
The nation’s nuclear weapons stockpile remains a cornerstone of U.S. national security policy. As a result of changes in arms control, arms reduction, and nonproliferation policies, the National Defense Authorization Act for fiscal year 1994 required that the Department of Energy (DOE) develop a science-based Stockpile Stewardship Program to maintain the stockpile without nuclear testing. After this program was established, DOE, in January 1996, initiated the Stockpile Life Extension Program. The purpose of this program is to develop a standard approach for planning nuclear weapons refurbishment activities so that the nuclear weapons complex can extend the operational lives of the weapons in the stockpile by another 20 to 30 years. Within the National Nuclear Security Administration (NNSA), the Office of Defense Programs is responsible for the warheads and bombs in the stockpile. This responsibility encompasses many different tasks, including the manufacture, maintenance, refurbishment, surveillance, and dismantlement of weapons in the stockpile; activities associated with the research, design, development, simulation, modeling, and nonnuclear testing of nuclear weapons; and the planning, assessment, and certification of the weapons’ safety and reliability. A national complex of nuclear weapons design laboratories and production facilities carries out the Office of Defense Programs’ mission. Three national laboratories in this complex design nuclear weapons: Lawrence Livermore National Laboratory in California, Los Alamos National Laboratory (LANL) in New Mexico, and Sandia National Laboratories in New Mexico and California. For the B61 and W76 life extension programs, Los Alamos National Laboratory is responsible for designing and developing these weapons’ nuclear explosives package. Sandia National Laboratories designs nonnuclear components, such as arming, fuzing, and firing systems, foams, and electrical cables, and tests the weapons’ nonnuclear components to certify their safety and reliability. Lawrence Livermore National Laboratory peer reviews design and production activities. Los Alamos and Sandia National Laboratories work closely with the production plants to ensure that components meet design specifications. The complex’s four production sites include the Y-12 National Security Complex plant in Tennessee, the Kansas City Plant in Missouri, the Savannah River Site plant in South Carolina, and the Pantex Plant in Texas. The Y-12 plant manufactures critical nuclear components, such as parts made from enriched uranium, for the nuclear explosives package. The Kansas City plant produces and procures nonnuclear parts and electronic components and manufactures the new arming, fuzing, and firing system for the W76 warhead. The Savannah River Site plant fills gas bottles it receives from Kansas City with tritium and deuterium, which are used to facilitate the nuclear explosion. Last, the Pantex plant assembles all components supplied by other production plants to produce a weapon for the stockpile. See figure 1 for a summary of this process. An end to underground nuclear testing in 1992 in the United States suspended the development of weapons with new, untested designs. This suspension created a shift away from the strategy of replacing older warheads with newer designs to a new strategy of retaining and refurbishing previously produced warheads indefinitely, without nuclear testing, and with no plans to replace the weapons. To manage this new strategy of refurbishing nuclear weapons, NNSA uses a process called Phase 6.X, which it jointly developed with DOD. 
This process consists of the following elements: Phase 6.1, concept assessment—conducting studies to provide planning guidance and to develop information so that a decision can be made on whether or not to proceed to phase 6.2. Phase 6.2, feasibility study—developing design options and studying their feasibility. Phase 6.2A, design definition and cost study—completing definition of selected design option(s) from phase 6.2 and determining the cost of pursuing the design option(s). Phase 6.3, development engineering—conducting experiments, tests, and analyses to validate the design option and assess its potential for production. Phase 6.4, production engineering—making a strong commitment of resources to the production facilities to prepare for stockpile production. Phase 6.5, first production—producing a limited number of refurbished weapons and then disassembling and examining some of them for final qualification of the production process. Phase 6.6, full-scale production—ramping up to full production rates at required levels. DOD oversees NNSA’s refurbishment activities through the military services’ Lead Project Officer and the Nuclear Weapons Council’s Standing and Safety Committee. The Air Force or the Navy appoints a Lead Project Officer to provide day-to-day oversight of NNSA’s activities. The Lead Project Officer meets regularly with officials from NNSA, the national laboratories, and production facilities to monitor progress and understand the technical challenges. The Nuclear Weapons Council Standing and Safety Committee (NWCSSC) advises and assists the Nuclear Weapons Council, which provides policy guidance and oversight of nuclear weapons stockpile activities and is required to report regularly to the President on the safety and reliability of the U.S. stockpile. Representatives from the following organizations make up the NWCSSC: NNSA; the Office of the Under Secretary of Defense for Policy; the Office of the Assistant Secretary of Defense for Networks and Information Integration; the Assistant to the Secretary of Defense for Nuclear, Chemical and Biological Programs; the Joint Staff; STRATCOM; the Army; the Navy; the Air Force; and the Defense Threat Reduction Agency. According to DOD officials, the Lead Project Officer regularly updates the NWCSSC on the status of refurbishment activities and proposes recommendations to the NWCSSC on whether NNSA should proceed to the next phase. NNSA needs approval from the Nuclear Weapons Council to proceed to Phases 6.2, 6.3, and 6.6. As of December 15, 2008, two nuclear weapons were undergoing phase 6.X refurbishment activities. The W76 warhead was in phase 6.5, first production, and the B61 bomb was in phase 6.6, full-scale production. NNSA originally planned to refurbish the W80 warhead and began phase 6.3, development engineering, but in 2007, NNSA canceled refurbishment activities for the W80 warhead because DOD planned to reduce the number of W80 warheads in the nuclear stockpile. While complete cost data on the W80 warhead do not exist, NNSA spent about $480 million from fiscal years 2003 to 2007 on refurbishment activities for it. NNSA completed the refurbishment of the B61 bomb on schedule in November 2008. However, according to NNSA and DOD officials, NNSA was not able to meet all the refurbishment objectives because it established an unrealistic schedule and failed to fully implement its Phase 6.X process. 
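The sequential, gated character of the Phase 6.X process described above can be sketched as a simple state machine. The following Python fragment is our illustration of those rules, not an NNSA system, with the Nuclear Weapons Council approval gates placed at Phases 6.2, 6.3, and 6.6.

```python
# Illustrative sketch (our construction, not an NNSA tool) of the Phase 6.X
# sequence, with Nuclear Weapons Council approval required to enter
# Phases 6.2, 6.3, and 6.6.

PHASES = ["6.1", "6.2", "6.2A", "6.3", "6.4", "6.5", "6.6"]
NWC_APPROVAL_REQUIRED = {"6.2", "6.3", "6.6"}

def next_phase(current: str, nwc_approved: bool) -> str:
    """Advance one phase; raise if an approval gate has not been cleared."""
    i = PHASES.index(current)
    if i == len(PHASES) - 1:
        raise ValueError("already at full-scale production (6.6)")
    upcoming = PHASES[i + 1]
    if upcoming in NWC_APPROVAL_REQUIRED and not nwc_approved:
        raise PermissionError(f"Phase {upcoming} requires Nuclear Weapons Council approval")
    return upcoming

# Example: moving from the 6.2A cost study into development engineering (6.3)
# is blocked until the council signs off.
print(next_phase("6.2A", nwc_approved=True))    # -> 6.3
# next_phase("6.2A", nwc_approved=False) would raise PermissionError.
```

The point of the gating is that each phase's experiments, tests, and cost analyses are supposed to be finished before the next commitment is made; the management problems described below arose in part when phases were compressed or overlapped rather than completed in sequence.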
NNSA was able to meet its refurbishment schedule and avoid significant cost overruns for the B61 only because (1) DOD changed some of the refurbishment objectives, (2) NNSA was able to reuse, rather than manufacture, a critical component for the B61, and (3) the Nuclear Weapons Council significantly reduced the number of B61 bombs in the stockpile. However, the refurbished B61 bombs still do not meet all refurbishment objectives. Some of the B61 refurbishment problems could have been avoided if DOD had fulfilled its roles and responsibilities in overseeing NNSA’s life extension program activities. Since parts of the B61 bomb were beginning to age, NNSA proposed, in 1999, to refurbish the first B61 by September 2004, with full-scale production ending in 2008. However, an NNSA study completed in 2001 by the national laboratories and production facilities found that they could not meet the September 2004 date given the requirements, production capabilities, risk assessments, and Phase 6.X guidelines. Instead, the national laboratories and production facilities concluded that they would need until September 2008—4 years later than the September 2004 date proposed by NNSA—to refurbish the first weapon. This proposed schedule was considered low risk because it allowed NNSA to follow the steps in the Phase 6.X process and included contingencies to address technical challenges. NNSA did not approve this schedule, however. It was concerned that the proposed production schedule for the B61 bomb would conflict with the refurbishment of the W76 warhead, which was originally scheduled for September 2007 and considered a DOD priority. NNSA wanted to complete production of the refurbished B61 bomb before beginning full-scale production of the W76 warhead because the production facilities, such as the Y-12 plant, had limited capacity. To allow the national laboratories and production facilities more time for design, engineering, and production activities while avoiding conflicts with the W76 life extension program, NNSA set a June 2006 date for the first refurbished B61 bomb. To meet this more aggressive and, as stated in NNSA’s program plan, “success-oriented” schedule, NNSA adopted a modified Phase 6.X process that compressed and overlapped the development engineering and production engineering phases, leaving little time to conduct the experiments, tests, and analyses needed to validate design options and to certify that production facilities that manufacture and assemble parts could meet design requirements. NNSA assumed that it would not need time for development and production engineering because it would reuse rather than manufacture critical materials—one of the most critical of which was a plastic. Before fully determining whether the plastic could be reused, NNSA developed a production schedule with fixed delivery dates. However, additional tests showed that NNSA could not reuse this material because it did not function properly under certain conditions. NNSA therefore decided to develop an alternative material with superior properties that would work under all conditions. Since NNSA did not include any cost or schedule contingencies in its baseline to address unforeseen technical challenges, development work on an alternative material posed a significant risk to meeting the program’s milestones and added $11 million to the program’s cost. NNSA was unable to produce a substitute that could retain the shape needed for the B61 bomb and would perform under all delivery conditions. 
NNSA’s effort to manufacture this alternative material resulted in significant schedule delays and cost overruns. In addition to a lack of sufficient time for development and production engineering work, NNSA’s B61 life extension program schedule did not include contingencies for testing failures. NNSA assumed that modeling and computational analysis would be sufficient to properly design a component and that a physical test of the design would be successful, avoiding the need for follow-up tests. If a test revealed a problem with the design, NNSA would have had to conduct additional tests or change the design, which would have potentially increased cost and delayed the program. As it turned out, NNSA’s tests were not all successful, and the Air Force and Lawrence Livermore National Laboratory peer reviewers recommended delaying production and conducting additional tests of the refurbished weapon. Nevertheless, NNSA proceeded with full-scale production to meet its schedule milestones. The Air Force’s most significant concern was that the testing of refurbished B61 bombs deviated substantially from the original testing plan that NNSA designed and DOD approved. NNSA subsequently conducted follow-on tests to address Air Force concerns. NNSA was able to meet its refurbishment schedule for the B61 only because the following occurred: NNSA sought and received a change in refurbishment objectives. In response to NNSA’s request, STRATCOM, which is responsible for developing and reviewing military mission requirements, reviewed the military needs for the B61. After STRATCOM reviewed its needs, NNSA was able to abandon its attempt to develop an alternative material, which it could not successfully manufacture to meet requirements, and was able to reuse the original material in the B61 bomb. Dismantlement of decommissioned B61 bombs allowed NNSA to obtain the necessary material for the refurbished B61 bombs. Even though NNSA abandoned its attempt to develop an alternative material after refurbishment objectives changed, it still did not have the material it needed because NNSA no longer manufactured it. However, NNSA found material it could use in refurbished B61 bombs when it began dismantling tactical B61 bombs. As a result, NNSA was able to extract the material, which is used in both strategic and tactical B61 bombs. The Nuclear Weapons Council significantly reduced the number of B61s in the stockpile. Between 2003 and 2007, the Nuclear Weapons Council, which reviews the size of the nation’s stockpile, directed NNSA to reduce the total stockpile of nuclear weapons. Following the council’s stockpile plan, NNSA reduced the number of B61s that needed refurbishment by about two-thirds. According to officials from production facilities, NNSA would not have been able to meet its November 2008 completion date if it still had to refurbish the originally planned number of weapons. Moreover, NNSA would not have been able to meet its cost baseline because the cost of manufacturing each B61 had almost doubled. Even though these events allowed NNSA to meet its schedule, the refurbished B61 bombs do not meet all refurbishment objectives. To address DOD concerns, in December 2007, NNSA agreed to conduct additional tests. According to DOD officials, the additional tests NNSA planned should resolve these concerns if they succeed in meeting the test objectives. 
Some of the B61 refurbishment problems could have been avoided if DOD had fulfilled its roles and responsibilities in overseeing NNSA’s life extension program activities. First, DOD did not comprehensively review military requirements for the B61 bomb before starting refurbishment activities, which would have avoided unnecessary testing and manufacturing of the alternative material. Specifically, NNSA tested the B61 in conditions that it later learned were no longer used by DOD. In conducting its tests, NNSA was following DOD’s specifications to meet all of the weapon’s original requirements established in the 1960s. According to the Phase 6.X process, a critical military requirement, which NNSA relied on for its tests, should have been reviewed during the Phase 6.2/2A study during 2001 and 2002. Instead, 2 years elapsed before STRATCOM notified NNSA that the requirement was no longer necessary, and it took another 2 years—until March 2006—to finally change the requirement. As a result, NNSA dedicated time and resources to develop an alternative material and conducted tests following the requirement, which STRATCOM later criticized as operationally unrealistic. Second, the Air Force did not adequately review NNSA’s design, engineering, and testing activities—a review that would have alerted it to the fact that NNSA was unable to meet all refurbishment objectives. According to Air Force officials, the Lead Project Officer failed to provide the necessary oversight because he lacked the technical and managerial expertise to do so. He did not alert the Air Force to significant concerns with the testing of the refurbished B61. In particular, the Air Force did not raise concerns about NNSA’s failure to complete all agreed-upon tests until NNSA had completed a majority of its tests and was preparing for full-scale production. After NNSA entered production, the Air Force required NNSA to conduct additional tests to provide a greater level of assurance that the refurbished B61 would perform as intended and last in the stockpile for at least another 20 years. As we noted, NNSA agreed to conduct additional tests and plans to complete them by the end of 2009. Importantly, these tests will be completed after all the B61 bombs now being refurbished are back in the stockpile. NNSA developed a risk mitigation strategy to avoid potential cost overruns and schedule delays related to the manufacture of Fogbank but failed to effectively implement it. As a result, NNSA’s original plans to produce the first refurbished W76 weapon in September 2007 slipped to September 2008. In addition, NNSA spent $69 million to address Fogbank production problems, and the Navy faced logistical challenges in replacing old W76 warheads with refurbished ones on submarines owing to the delay. Furthermore, NNSA did not use the same criteria and accounting practices each fiscal year to develop a cost baseline for the W76 program, which makes it difficult to track refurbishment costs over time. At the beginning of the W76 life extension program in 2000, NNSA identified key technical challenges that would potentially cause schedule delays or cost overruns. One of the highest risks was manufacturing Fogbank, a material that is difficult to produce. In addition, NNSA had lost knowledge of how to manufacture the material because it had kept few records of the process when the material was made in the 1980s and almost all staff with production expertise had retired or left the agency. 
Finally, NNSA had to build a new facility at the Y-12 plant because the facilities that produced Fogbank ceased operation in the 1990s and had since been dismantled, except for a pilot plant used to produce small quantities of Fogbank for test purposes. To address these concerns, NNSA developed a risk management strategy for Fogbank with three key components: (1) building a new Fogbank production facility early enough to allow time to re-learn the manufacturing process and resolve any problems before starting full production; (2) using the existing pilot plant to test the Fogbank manufacturing process while the new facility was under construction; and (3) developing an alternate material that was easier to produce than Fogbank. However, NNSA failed to effectively implement these three key components. As a result, it had little time to address unexpected technical challenges and no guaranteed source of funding to support risk mitigation activities. After determining that 2 years was sufficient time to test and perfect the Fogbank manufacturing process, NNSA set March 2005 as the target date to begin operations of the new facility at the Y-12 plant and worked backward from that date to establish a design, build, and test schedule for the new facility, according to the official in charge of the project. Working from lessons learned from the W87 life extension program, NNSA strove to achieve an early operations start date to allow sufficient time to address any potential problems in manufacturing Fogbank. In 2000, we reported that production problems resulting from such factors as restarting an atrophied production complex and addressing safety and technician training issues led directly to slippage in the W87 life extension program schedule and contributed to increased costs. In addition, NNSA’s own lessons learned report on the W87 program identified the need to demonstrate processes early and often and stated that, with limited resources, assumptions such as “we did it before so we can do it again” are often wrong. NNSA started the new facility’s operations about 1 year late because the schedule for building the facility was unrealistic, disagreements on the implementation of safety guidelines emerged, and the W76 program manager lacked authority to control the schedule. Focused on meeting an operations start date of March 2005, NNSA developed an aggressive construction and operation start schedule with no contingency for cost overruns or schedule delays. This aggressive schedule increased risk because any delay would leave less than 2 years to conduct test production runs, which NNSA had determined were necessary for perfecting the process. In addition, the Fogbank facility was the first new manufacturing facility to be built at Y-12 in 30 years; therefore, a lack of recent experience with managing construction projects and implementing safety guidelines heightened the potential for problems. In fact, the contractor building the facility underestimated the time needed to complete preparations for start-up, including training and certifying staff to use the equipment and calibrating instruments. In addition, NNSA and the contractor disagreed on the interpretation and implementation of safety guidelines. A lack of clarity about which guidelines would apply and the proper interpretation of the guidelines caused confusion over the course of the project. At a late stage, NNSA directed the contractor to apply more conservative nuclear facility safety requirements. 
As a result, the contractor needed additional time to address safety concerns by, for example, installing weather- and earthquake-proof equipment. When these issues emerged, NNSA’s W76 program manager did not have the authority to manage the construction of the project or resolve the dispute over safety guidelines even though a key risk mitigation strategy was the timely start of facility operations. Construction and start-up of the facility were managed by Y-12, which reported to the Y-12 Site Office, a separate organization not under the authority of the program manager. As soon as the March 2005 new facility start date was missed, the program manager raised concerns and elevated them to the Deputy Administrator for Defense Programs, the cognizant management organization at NNSA headquarters, but the issues remained unresolved. Ultimately, start-up of the new facility was postponed by approximately 1 year, leaving NNSA with half the time originally planned to re-learn the Fogbank production process. NNSA planned to use the Y-12 pilot plant to gain a better understanding of Fogbank properties and to test the production process on a small scale while the new facility was under construction. The pilot facility could only produce a small amount of Fogbank for the W76 program because it had only a few machines. Although NNSA used the pilot plant from 2000 to 2003, it did not have funds to continue the effort because it shifted money from the W76 program to support higher priority programs at the time, such as the W87 and B61 life extension programs. However, in 2004, anticipating delays in starting operations at the new facility and recognizing the importance of continuing work at the pilot plant, NNSA provided funding to pay for additional work at the pilot plant. By completing this work, NNSA learned that certain techniques significantly affected the quality of the end product and made adjustments to meet requirements. However, NNSA did not conduct as much work as originally planned and missed opportunities to learn more about the manufacturing process before starting operations. In 2000, NNSA considered replacing Fogbank with an alternate material that was less costly and easier to produce but abandoned the idea because NNSA was confident that it could produce Fogbank since it had done so before. In addition, LANL’s computer models and simulations were not sophisticated enough to provide conclusive evidence that the alternate material would function exactly the same as Fogbank. Further, the Navy, the ultimate customer, had expressed a strong preference for Fogbank because of its proven nuclear test record. In response to the Navy’s preference and the lack of sufficient test data on the alternate material, NNSA did not pursue the development of an alternate material until 2007. In March 2007, however, NNSA again considered producing an alternate material when it was unable to produce usable Fogbank and was facing the prospect of significant schedule delays. Computer models and simulations had improved since 2001, enabling greater confidence in the analysis of alternate materials. Thus, NNSA began a $23 million initiative to develop an alternate material. LANL officials told us that NNSA plans to certify the use of the alternate material for the W76 warhead by the end of 2009; if NNSA faces additional Fogbank manufacturing problems during full-scale production, the alternate material could then be used instead. 
Had NNSA continued research and development of an alternate material during the program, it would have had more information on the viability of using the alternate material in the weapon before March 2007. This additional information also might have provided the Navy greater assurance that an alternate material performed as well as Fogbank. A failure to implement the three components of NNSA’s risk management strategy for Fogbank led to a 1-year schedule delay and a $69 million cost overrun. This cost overrun included $22 million to resolve Fogbank production problems, $23 million to develop the alternate material, and $24 million to maintain Pantex’s production capabilities. Regarding Fogbank production, in March 2007, NNSA discovered that final batches of the material had problems. To address the problems and try to meet its September 2007 date for producing the first refurbished weapon, NNSA launched a major effort—“Code Blue”—that made the manufacture of Fogbank a priority for the design laboratories and production facilities. However, this effort failed, and, as a result, NNSA delayed producing the first refurbished weapon from September 2007 to September 2008, and it began its efforts to develop an alternate material to Fogbank. Finally, while Pantex was unable to begin assembling refurbished units in September 2007 as planned, it still spent $24 million in fiscal year 2008 to remain in “stand-by” mode, which includes maintaining the skills of the technicians who will assemble refurbished W76 weapons. The 1-year delay led to logistical challenges for the Navy and an aggressive production schedule for refurbished W76 warheads to make up for lost time. The Navy originally planned to start replacing old W76 warheads with refurbished ones on submarines in April 2008. However, owing to W76 production delays, the Navy had to replace aging parts of W76 warheads in its current arsenal and has had to delay replacing old warheads with newly refurbished weapons until April 2009. Furthermore, to make up for initial schedule setbacks caused by Fogbank production problems, NNSA has increased the rate at which it plans to produce refurbished W76 weapons. NNSA will produce more weapons per year than originally planned, an annual increment that over time will enable it to finish production at the originally planned end date. However, a higher rate of production requires more resources and leaves less room for error because any slowdown will have a greater impact on the larger number of weapons that must be produced. NNSA production officials have indicated that they may not be able to meet this more compressed schedule if they do not receive extra resources or if they encounter any production problems, both considered realistic possibilities. NNSA does not have a consistent approach for developing a cost baseline for the W76 program. NNSA has changed its baseline almost every year since 2001 to reflect changes in the number of warheads needed in the stockpile and changes in NNSA reporting guidelines. For example, in fiscal year 2004, the cost estimate for the W76 program was $2.1 billion; in fiscal year 2005, it was $6.2 billion; and in fiscal year 2006, it was $2.7 billion (see fig. 2). Changes in the baseline were the result of changes in the percent of the stockpile to be refurbished, which ranged from 25 percent to 86 percent. As the number of weapons to be refurbished changed, the baseline moved correspondingly because it costs more to refurbish more weapons. 
For example, NNSA planned to refurbish significantly more weapons in 2005 than 2004, based on official guidance, accounting for part of the $4.1 billion differential between those years. Significant changes in the baseline were also driven by inconsistent NNSA accounting practices. For example, in fiscal year 2005, NNSA required program managers to include all indirect costs, such as the overhead costs of operating facilities, as well as direct costs in the baseline. The next year it dropped this requirement. Prior to fiscal year 2005, NNSA did not tie overhead costs to specific weapon systems. However, in an attempt to provide a more accurate estimate of total costs by weapon, NNSA created accounts for the W76 warhead that captured a pro-rated portion of general costs, such as research and production support at the laboratories and production facilities. For example, NNSA included the pro-rated cost of forklift operators, who load and unload trucks for all weapon systems. Thus, a portion of these overhead costs was added to the 2005 baseline to better account for the full costs of the program. However, NNSA discovered that this approach constrained flexibility. If priorities shifted and changes needed to be made to overhead activities, resources could not be easily redirected to different weapon systems. Any change would require congressional approval because such overhead costs were tied to a specific weapon system as a budget line item. Consequently, in fiscal year 2006, NNSA reported the production and research support accounts separately. While this change restored some flexibility for overall NNSA complex management, the transition reduced clarity about the total cost of a weapon system. Accounting changes have persisted, with, for example, some baseline years including large expense items, such as employee benefits, and other baseline years excluding such costs. The lack of a consistent baseline approach with similar cost assumptions and criteria makes it difficult to track the costs of the program over time and determine how well NNSA develops cost estimates. Refurbishing the nuclear weapons stockpile is a difficult task. NNSA must draw on the scientific expertise of the nuclear weapons laboratories and the manufacturing and engineering expertise of the nuclear weapons production facilities. Recognizing this challenge, NNSA and DOD have developed multiple tools for managing the refurbishment effort: Phase 6.X, risk management strategies, test and evaluation plans, and a lessons learned document from the W87 life extension program. By selectively using these guidance documents, however, NNSA has incurred significant cost increases and schedule delays that it could have avoided. In addition, NNSA did not include any cost or schedule contingencies in its baseline to address the unforeseen technical challenges that arose. If NNSA had more carefully followed the Phase 6.X process, it might have had sufficient time in its schedule to develop and test key materials that it had not manufactured in decades and address unforeseen technical issues. Moreover, NNSA did not fully implement its risk management strategy to address one of the highest risks to the W76 life extension program—the manufacturing of Fogbank. If NNSA had effectively implemented its risk management strategies, schedule delays and cost increases might have been avoided or mitigated. 
Most importantly, if NNSA had started operations of the new facility on schedule, it would have had more time to address manufacturing challenges. In fact, the 1-year delay in the start-up of the new Fogbank facility corresponded almost exactly to the 1-year program delay. In addition, without the authority to control the construction and start of operations of the new facility, the W76 program manager could not help resolve the disagreement over the safety regulations needed at the facility. Potentially compounding these problems, NNSA committed to an ambitious production schedule to make up for delays related to Fogbank—a schedule that does not leave time to address any future production problems. Furthermore, NNSA cannot be held accountable for meeting its cost targets without a consistent approach to developing a cost baseline for the W76 program. The ability to track cost over time and assess how well an agency holds to a cost baseline is fundamental for proper management and oversight. Finally, because DOD failed to adequately oversee the B61 refurbishment program, as Phase 6.X requires, NNSA spent unnecessary time and money trying to find an alternative material. In addition, because the Lead Project Officer for the B61 bomb did not adequately monitor NNSA’s activities during critical phases or have the technical expertise to do so, the Air Force did not have sufficient time to ask NNSA to conduct additional tests before NNSA entered full-scale production. All of these management issues raise significant questions about NNSA’s ability not only to complete, on time and on budget, life extension programs that meet all refurbishment objectives, but also to manage the design and production of new weapons, such as the proposed reliable replacement warhead. NNSA and DOD state that the reliable replacement warhead is a way to replace the nation’s aging stockpile with a safer, more reliable, and more secure warhead than those currently in our stockpile, and plan to use the Phase 6.X process to design and manufacture this warhead. Because NNSA did not properly follow the Phase 6.X process, meet all refurbishment objectives for the B61 bomb, or conduct all planned tests, questions arise about NNSA’s ability to design a new weapon that meets DOD’s needs and that gives DOD sufficient confidence the weapon will perform as expected without underground nuclear testing. In addition, NNSA’s failure to implement its risk mitigation strategy for the highest risk to the program and to implement lessons learned from prior life extensions, like the W87 warhead, does not inspire confidence in its ability to achieve the program’s goals on time and on budget. To improve the management of the stockpile life extension program, we recommend that the Administrator of NNSA direct the Deputy Administrator for Defense Programs to take the following six actions: Develop a realistic schedule for the W76 warhead and future life extension programs that allows NNSA to (1) address technical challenges while meeting all military requirements and (2) build in time for unexpected technical challenges that may delay the program. Assess the cost and include funding in the baseline for risk mitigation activities that address the highest risks to the W76 and future life extension programs. Before beginning a life extension program, assess the risks, costs, and scheduling needs for each military requirement established by DOD. 
Ensure that the program managers responsible for overseeing the construction of new facilities directly related to future life extension programs coordinate with the program managers of such future programs to avoid the types of delays and problems faced with the construction and operation of the Fogbank manufacturing facility for the W76 program. Ensure that program managers for the construction of new facilities for future life extensions base their schedule for the construction and start-up of a facility on the life extension program managers’ needs identified in their risk mitigation strategies. Develop and use consistent budget assumptions and criteria for the baseline to track costs over time. To improve DOD’s oversight of NNSA’s life extension activities and ensure that refurbished weapons meet all military requirements, we recommend that the Secretary of Defense take the following three actions: Direct STRATCOM and the Secretary of the responsible Service to comprehensively review military requirements for a weapons system prior to entering Phase 6.2A of a life extension program. Direct STRATCOM and the Secretary of the responsible Service to work with NNSA to assess the cost and schedule implications for meeting each military requirement prior to entering Phase 6.3. Direct the Secretaries of the Air Force and the Navy to ensure that their respective Lead Project Officers have the technical and managerial expertise and resources to review NNSA’s progress and technical challenges throughout the life extension program. We provided NNSA and DOD with draft copies of our classified report for their review and comment. In addition to their official comments, which are reprinted in appendixes I and II, NNSA and DOD provided technical comments, which we incorporated as appropriate. As discussed in our classified report, NNSA agreed with our recommendations and plans to take a number of steps to implement them. First, NNSA plans to assess the risks, costs, and scheduling needs for each military requirement DOD establishes during the early phases of a life extension program. NNSA will consult officials from the production facilities to better understand the potential impact on cost and schedule of manufacturing critical nuclear and nonnuclear materials. In addition, NNSA plans to adopt an Integrated Phase Gate process that establishes well-defined milestones, or gates, throughout the Phase 6.X process. Before proceeding to the next gate, NNSA and DOD officials must identify any risks to cost and schedule and can opt to delay the life extension program if the risks are too high and additional actions, such as testing, should be taken. Second, NNSA will include funding needs for risk mitigation activities that address the highest risks to future life extension programs in budget reports to Congress. Third, NNSA plans to better coordinate construction activities at the production facilities with the needs of life extension program activities. Last, according to NNSA, it developed a methodology to establish a baseline with consistent budget assumptions and criteria to track costs over time. We believe that these actions could significantly improve the management of the life extension program. DOD partially agreed with our recommendations. DOD agreed with our two recommendations directed at the department, but asked us to make modifications to the language of the recommendations to better target the responsible service or agency that has authority to implement them. 
We modified our recommendations by (1) including the Department of the Navy because it is responsible for reviewing NNSA's refurbishment activities for certain nuclear weapons, such as the W76, and (2) specifying during which phase of the Phase 6.X process DOD should comprehensively review its military requirements and assess the cost and schedule implications of meeting each requirement. DOD also expressed concern that the report placed undue responsibility on DOD for delays in the B61 life extension program and noted that NNSA faced other technical issues, not discussed in this report, that led to program delays. We believe that our report fairly attributes management problems with the B61 life extension program to both NNSA and DOD. As we state in the report, NNSA did not include any cost or schedule contingencies in its baseline to address unforeseen technical challenges in refurbishing the B61 bomb, and its aggressive schedule posed a significant risk to meeting the program's milestones. This report did not address all of the technical challenges that NNSA faced in refurbishing the B61 bomb because some did not have an impact on cost and schedule and others were additional examples of problems NNSA faced as a result of compressing the development and engineering schedule. As we noted on page 3 of this report, the scope of our discussion of the B61 was limited to the most significant technical challenge that had an impact on cost, schedule, and weapon performance and reliability—the decision to reuse or manufacture a new material for a critical component. NNSA had the burden of completing the refurbishment on time and on budget, but DOD failed to provide the necessary oversight. Last, we recognize that the Air Force has taken steps to strengthen the management and oversight of nuclear activities, such as consolidating nuclear activities under a newly established Air Force Nuclear Weapons Center. However, it is too early to assess the impact these actions have had on the Air Force's oversight of the life extension program. We are sending copies of this report to the Secretary of Energy, the Administrator of NNSA, the Secretary of Defense, and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contact named above, Marc Castellano, Leland Cogliani, Jonathan Gill, Marie Mak, James Noel, Omari Norman, Tim Persons, Carol Herrnstadt Shulman, and John Smale made significant contributions to this report.
As a separately organized agency within the Department of Energy, the National Nuclear Security Administration (NNSA) administers the Stockpile Life Extension Program, whose purpose is to extend, through refurbishment, the operational lives of the weapons in the nuclear stockpile. NNSA encountered significant management problems with its first refurbishment, for the W87 warhead. GAO was asked to assess the extent to which NNSA and the Department of Defense (DOD) have effectively managed the refurbishment of two other weapons—the B61 bomb and the W76 warhead. This report summarizes the findings of GAO's classified report on the refurbishment of the B61 bomb and W76 warhead. NNSA and DOD have not effectively managed cost, schedule, and technical risks for either the B61 or W76 life extension program. Regarding the B61 program, although NNSA completed the refurbishment of the strategic variants of the B61 bomb—the Mods 7 and 11—on schedule in November 2008, the refurbished weapons do not meet all refurbishment objectives. According to NNSA and DOD officials, NNSA established an unrealistic schedule and failed to fully implement its refurbishment guidance, known as the Phase 6.X process. NNSA was able to meet its refurbishment schedule and avoid significant cost overruns for the B61 program only because (1) some of the refurbishment objectives were changed, (2) NNSA was able to reuse, rather than manufacture, a critical component recovered when B61 bombs were decommissioned, and (3) the Nuclear Weapons Council significantly reduced the number of B61 bombs in the stockpile. Despite DOD concerns about the adequacy of NNSA testing of the B61 bombs under certain conditions, NNSA continued refurbishing the weapons. Some of the B61 refurbishment problems could have been avoided if DOD had fulfilled its roles and responsibilities in overseeing NNSA's life extension program activities. For example, the Air Force did not adequately review NNSA's design, engineering, and testing activities—a review that would have alerted DOD that NNSA was missing some of its refurbishment objectives. Regarding the W76 program, NNSA did not effectively manage a high risk associated with manufacturing an essential material, known as Fogbank, needed to refurbish the W76 warhead. NNSA had developed a risk mitigation strategy to avoid potential cost overruns and schedule delays related to the manufacture of this key material but failed to effectively implement this strategy. As a result, NNSA's original plan to produce the first refurbished W76 weapon in September 2007 slipped to September 2008; NNSA spent $69 million to address Fogbank production problems; and the Navy faced logistical challenges owing to the delay. Furthermore, NNSA did not have a consistent approach to developing a cost baseline for the W76 program, which makes it difficult to track refurbishment costs over time and to know the actual cost of the program.
MCC is managed by a chief executive officer (CEO), appointed by the President with the advice and consent of the Senate, and is overseen by a Board of Directors. The Secretary of State serves as board chair and the Secretary of the Treasury serves as vice-chair. MCC's model is based on a set of core principles deemed essential for effective development assistance, including good governance, country ownership, focus on results, and transparency. According to MCC, country ownership of an MCC compact occurs when a country's national government controls the prioritization process during compact development, is responsible for implementation, and is accountable to its domestic stakeholders for decision making and results. In keeping with the MCC principle of country ownership, MCC enters into a legal relationship with partner country governments. During the 5-year compact implementation period, the partner government vests responsibility for day-to-day management, including monitoring and evaluation of the progress of compact projects, in an accountable entity established to implement the compact (an entity's name is usually formed from "MCA" plus the country's name—for example, MCA-Benin). MCC provides the framework and guidance for compact implementation, monitoring, and evaluation that MCAs are to use in implementing compact projects. Following the compact end date, the partner government must close the program within 120 days (the closure period). During the closure period, MCC funds may be used only for project goods, works, or services incurred before the compact end date, or for closure expenses. For example, the government may expend MCC funds to settle final invoices and claims, secure unfinished project sites against potential health or safety hazards, prepare final reports, and conduct other activities specified in MCC's closeout guidelines. However, the government may not expend MCC funds to undertake or continue activities that were planned for completion within the compact term, including expenses for activities such as completion of works, supervising engineer services, and consulting services. MCC places several requirements on MCAs to ensure proper management and quality assurance of MCC-funded infrastructure projects. These requirements create a quality assurance framework under which each MCA must have an individual project director—for example, a roads director—who oversees the activities of the other actors, including outside implementing entities or project management consultants, construction supervisors, and construction contractors.

Project management consultant/implementing entity: Before receiving project funding, MCC requires the MCAs to engage the services of a project management firm or an implementing entity to help manage administrative aspects of compact projects.

Construction supervisor: MCAs contract with construction supervisors to oversee day-to-day construction and the activities of the construction contractors to ensure compliance with contract requirements. Construction supervisors play an important role in ensuring construction quality by performing such tasks as approving construction materials, overseeing testing, and inspecting completed work.

Construction contractor: MCAs contract with construction firms to build the project. The construction contractor is also responsible for controlling the quality of its work, which involves, among other tasks, material and construction testing.
In general, MCAs deliver infrastructure projects through a design-bid-build approach in which the MCA contracts with a design engineer to develop technical plans and specifications that are used by a construction contractor, hired under a separate MCA procurement action, to build the project. In some cases, project designs already exist and MCAs do not engage a design engineer in implementing the project. In other cases, a design-build approach is used and the MCA contracts with a single contractor that becomes responsible for both project design and construction. In addition, MCC may hire an independent engineer to assist in overseeing the progress of construction as managed by the MCAs and executed by their contractors. The objective of engaging an independent engineer is to obtain high-quality technical support that strengthens MCC's ability to assess the quality and status of ongoing program activities and to make better informed judgments, particularly where assessment of project activities affects MCC's ability to further disburse funds. An independent engineer may also provide technical input on program decisions and documents submitted by MCC partner-country counterparts and help MCC ensure that funds are being spent according to the conditions and frameworks established in the compact. Figure 1 depicts the oversight, management, and contractual relationships among MCC, the MCA, and their contractors for infrastructure projects. MCC takes steps to ensure the sustainability of the projects it funds during both the design and implementation phases. First, MCC compacts are to be designed so that projects are sustainable for about 20 years, or as appropriate for the structure. Also, during the compact development process, MCC assesses the mechanisms in place to enhance sustainability, including a partner country's policies and practices that will enable MCC investments to continue to provide benefits. For instance, as part of compact proposals submitted to MCC, partner countries are required to identify risks to project sustainability and describe the measures needed to ensure that project benefits can be sustained beyond the period of MCC financing. Partner countries are to consider a number of issues affecting sustainability, including environmental sustainability; institutional capacity for operations and maintenance; and, for proposed infrastructure projects, recent funding, performance, and expected expenses for operations and maintenance. During compact implementation, MCC tracks progress against key policy reforms and institutional improvements that were included as conditions in the compact to enhance project impact and sustainability. Such conditions in an agreement are known as conditions precedent; they must be met by one party before the other party is obligated to perform its part of the agreement. In the case of an MCC compact, MCC establishes conditions precedent that must be met by the partner government or MCA before financial disbursements are made. For example, MCC may require that the government increase its budget allocation for road maintenance before releasing final payments. For the purposes of this report, transportation infrastructure comprises public works that provide for the conveyance of passengers or goods from one place to another. It includes structures such as roads, seaports, airports, and railways. Such projects may take years to plan and implement.
For example, typical highway projects in the United States can take from 10 to 15 years for planning, design, and construction. Transportation infrastructure construction contracts may contain a defects liability clause that obligates a contractor to repair or rectify defects in the construction for a set period after the construction supervisor has deemed the works substantially complete. In a construction agreement, a contractor's main obligation is to carry out the works to final completion, free of defects and to the standard set out in the agreement. A defects liability clause is intended to supplement this obligation by ensuring that the contractor remedies any defective work that becomes noticeable during the defects liability period, usually 1 year. The clause also provides a mechanism for repairing defects that may arise during the defects liability period. In Georgia, MCC funded the rehabilitation of about 217 kilometers of road linking the previously isolated Samtskhe-Javakheti region with Tbilisi, the country's capital. However, the urgency to meet fixed time frames resulted in problems implementing the quality assurance framework and led to construction defects in parts of 5 of the 11 road lots. Furthermore, while MCC took steps to ensure the road project's sustainability, the Georgian government has demonstrated limited ability to keep the road operational and maintained up to this point. MCC signed a compact with the Republic of Georgia in September 2005 to stimulate growth in regions outside Tbilisi, where more than 40 percent of the country's total population resides. A rough asphalt road before the compact, the Samtskhe-Javakheti Road was in such disrepair that it prevented residents in the region from easily reaching Tbilisi. The purpose of the rehabilitation was to improve transportation for regional trade to increase exports from the region; increase social, political, and economic integration of the people in the region with those in the rest of Georgia; expand international trade by providing a more direct link from Tbilisi and eastern and southern Georgia to Turkey and Armenia; and develop the tourism potential of Vardzia, a 13th century rock-cut monastery. MCC originally granted $295.3 million for the compact's two projects—Enterprise Development and Regional Infrastructure Rehabilitation, which included the Samtskhe-Javakheti Roads Rehabilitation activity (see fig. 2). In November 2008, after Georgia's war with Russia over South Ossetia, MCC increased the compact by $100 million. The compact entered into force in April 2006 and ended in April 2011. MCC originally planned to rehabilitate 245 kilometers of existing road at a cost of $102.2 million (or $417,000 per kilometer) but, after several changes to the project's scope, rehabilitated about 217 kilometers at a cost of about $212.9 million (or $981,000 per kilometer). The road project's length was first reduced after the initial contract solicitation attracted bids that exceeded the amount of funding originally available for the road work. As a result, the project was divided into shorter sections and contracts were let for about 170 kilometers of road. In the winter of 2008-2009, after MCA-Georgia allocated an additional $60 million to the road project, about 50 kilometers of road were added to the project (see fig. 3).
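The per-kilometer figures above follow directly from the totals. A short sketch, using only the report's rounded cost and distance figures, reproduces them:

```python
# Per-kilometer cost check using only the rounded figures cited above.
planned_cost_usd = 102.2e6   # original grant for the road work
planned_km = 245             # originally planned length
actual_cost_usd = 212.9e6    # final cost
actual_km = 217              # kilometers actually rehabilitated

planned_per_km = planned_cost_usd / planned_km   # ~$417,000 per km
actual_per_km = actual_cost_usd / actual_km      # ~$981,000 per km

print(f"Planned: ${planned_per_km:,.0f} per kilometer")
print(f"Actual:  ${actual_per_km:,.0f} per kilometer")
print(f"Unit-cost growth: {actual_per_km / planned_per_km:.2f}x")
```

The more-than-twofold growth in unit cost reflects the scope changes, the work added late in the compact, and the acceleration and re-procurement costs discussed in this section.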
MCA-Georgia also reallocated an additional $50.7 million from other activities to the road project between May 2008 and January 2011 to cover additional cost increases, including costs to accelerate work to ensure its completion before the end of the compact. The road was rehabilitated at an increased cost in a compressed construction time frame because of insufficient planning, work added late in the compact, and poor performance by one contractor. Because of the compressed construction time frames, MCA-Georgia's construction supervision and construction contractors had difficulty fully implementing the quality assurance framework. In addition, problems identified by MCC's independent engineer were not adequately addressed. As a result, repair work remained at the end of the compact and the quality of construction varied across the lots. Although some infrastructure, such as the bridges, appeared to be well built, parts of 5 of the 11 lots—representing about 60 percent of the kilometers rehabilitated—had noticeable pavement deterioration and other defective structures. The extent of the defects varied among the lots, with some lots requiring pavement surface sealing or a relatively small amount of patching. However, the road in one lot was planned to be entirely repaved. As of March 2012, work was ongoing. (Figure 4 shows how the road was divided into lots and handled by different contractors.) MCA-Georgia awarded most of the final construction contracts with 2 years or less before the compact end date because of planning delays, work added late in the compact, and poor performance by one contractor. See figure 5 for a timeline of the compressed time frame under which the road rehabilitation occurred.

Insufficient planning delayed construction: MCC reports that conducting feasibility studies and preparing designs and bid documents took over a year of compact time. In addition, a lack of accurate cost estimates resulted in a delay of 8 to 10 months in the first contract's award. In April 2007, MCA-Georgia made its initial procurement for two road contracts and found that the project cost was greater than estimated and exceeded the funds available for the road work. As a result, it removed about 75 kilometers from the scope, revised the project into smaller lots, conducted a new procurement, and awarded contracts for lots 2, 3, and 4 (about 120 kilometers total) to contractor A in March 2008 and for lots 5i, 5ii, and 6i (about 50 kilometers total) to contractor B in May 2008—23 and 25 months after the compact entered into force, respectively.

New work was added when additional funding became available late in the compact implementation period: In November 2008, MCC made additional funds available for the road project. The following spring, 3 years after the compact entered into force, MCA-Georgia awarded three additional road contracts (lots 1, 6ii, and 7). (Lot 6iii was added as an addendum to the contract for lot 6ii in August 2009.) At this point, only about 2 years remained under the compact to complete the work.

Poor performance by one contractor delayed implementation by about a year: Contractor A failed to meet its contractual obligations. After removing segments from contractor A's scope of work in July and December 2009 and awarding them to other contractors, MCA-Georgia terminated the contract in August 2010.
While reassigning the work from contractor A to the other contractors allowed the work to be completed before the end of the compact, MCC officials reported that the process cost MCA-Georgia about $45 million more than the original $65 million contract and added at least 1 year of construction time. MCA-Georgia provided a notice of nonperformance to contractor A in April 2009. That July, MCA-Georgia removed lot 4 (48 kilometers) from the contract and awarded it to another contractor through a limited procurement process. By using the quicker limited procurement process, MCA-Georgia hoped to take advantage of the time remaining in the 2009 construction season and improve the likelihood of getting the work completed in the 21 months remaining in the compact. In December 2009, contractor A's performance was still a problem, and MCA-Georgia removed an additional 15 kilometers from the contract (lot 3A). According to MCC officials, MCA-Georgia re-awarded this work through a full, competitive procurement process. As a result, the procurement took more than 4 months, mostly over the winter season, which left about 1 year to complete the work. In August 2010, MCA-Georgia terminated the contract with contractor A for the remaining 57 kilometers of road (work for this section was about 80 percent complete, according to MCC's independent engineer). With only 8 months left before the compact was to end—and most of those being winter months—MCA-Georgia removed about 4 kilometers from the project and re-awarded the other 53 kilometers using a limited procurement process so that work could begin immediately. According to MCC, MCA-Georgia paid contractor B $31.8 million to complete the work before the compact's April 2011 deadline. An independent adjudicator found the additional cost for completing the work to be within the bounds of what may reasonably be expected in such circumstances. The MCC-required quality assurance framework was in place, but issues identified by MCC's independent engineer—the technical advisor MCC hired to assist in overseeing the progress of construction—were not always addressed. Specifically, the contractors did not always perform their quality control responsibilities, the construction supervision firm had insufficient staff to conduct its work, and MCA-Georgia did not always use the construction supervisor as set out in the quality assurance framework.

Contractors did not always conduct quality control activities: The contractors did not always fulfill their contractual quality control role, according to MCC officials and MCC's independent engineer's reports. For example, MCC's independent engineer reported that some contractors continued work in less-than-favorable conditions, such as cold and rainy weather, to complete the work before the end of the compact. Conducting work in these conditions can cause problems with curing concrete (such that it does not reach its design strength) or with asphalt raveling (not bonding to other asphalt layers). In addition, to meet time frames, much work was completed at night, when poor lighting, less inspection, and colder temperatures made it more difficult to perform high-quality work. Finally, one contractor did not supply a required quality assurance plan. Without the contractor's quality assurance plan for the specific contract, the construction supervisor did not know when the contractor would be testing materials or whom to contact regarding identified problems.
In addition, the same contractor turned in test reports after the tested work had already been covered by subsequent stages of work. As a result, if the construction supervisor found that the earlier work was defective, the contractor would have to remove the subsequent work to repair it.

Construction supervisor had insufficient staff: Quality control errors by the contractors should have been caught by the construction supervisor, but the construction supervisor had insufficient staff to adequately implement the quality assurance framework, according to MCC's independent engineer. MCA-Georgia did increase the construction supervision staff, but not to the independent engineer's recommended level. While MCC had taken steps to try to ensure sufficient supervision of the construction, it did not have authority to enforce the independent engineer's recommendations. MCC's independent engineer, accompanied by MCC officials, visited the road project and provided written reports almost quarterly between February 2009 and November 2010—the period of compressed construction time frames under which most of the roadwork occurred. The reports stated that there were not enough construction supervision staff, and, in four of those reports, the independent engineer advised that the supervisory situation was jeopardizing the project's quality and success. A project management official told us that, because of the insufficient number of staff, the construction supervisor did not observe some quality testing that was done by the construction contractors, as required in its contract. In February 2009, MCA-Georgia increased the number of construction supervision staff to oversee the three lots added to the project's scope. In addition, MCA-Georgia hired a separate construction supervisor to oversee lot 3A when it was created, increasing the staff available overall for construction supervision on the project. However, the MCC independent engineer advised that additional construction supervision staff were still needed to ensure the quality of the work under way. The independent engineer also reported that the construction supervisory firm's fee was lower than typical for this type of international work. Nonetheless, MCA-Georgia chose not to fund additional staff for the construction supervision firm. MCC had included a condition precedent in the compact that required MCA-Georgia to engage a construction supervisor. However, because the condition was satisfied once MCA-Georgia engaged a supervisor, MCC stated that the condition precedent did not give it any authority to withhold funds because of insufficient supervision staffing (a limitation illustrated in the sketch at the end of this discussion).

Construction supervisor was not always used in accordance with the quality assurance framework: According to two reports by MCC's independent engineer, MCA-Georgia did not always use the construction supervision staff effectively. Under the quality control framework in the construction firms' contracts, the construction supervisor should issue all instructions to the contractors. However, MCC's independent engineer noted that the MCA-Georgia staff responsible for the road project communicated with construction contractors and issued oral instructions directly to them to accelerate work. The independent engineer further noted that this practice could lead to claims of additional work, increasing costs, because contractors received different instructions from the construction supervisor and MCA-Georgia.
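The supervision episode above illustrates a structural limitation of one-time conditions precedent: once the condition is checked and marked satisfied, it offers no ongoing lever over later disbursements. A minimal sketch of that logic follows; the `Condition` and `Compact` types, the condition wording, and the one-time-check rule are illustrative assumptions for exposition, not a description of MCC's actual disbursement system:

```python
from dataclasses import dataclass, field

@dataclass
class Condition:
    """A condition precedent: once marked satisfied, it stays satisfied."""
    name: str
    satisfied: bool = False

@dataclass
class Compact:
    conditions: list[Condition] = field(default_factory=list)

    def approve_disbursement(self) -> bool:
        # Disbursement is gated only on whether each condition has ever
        # been satisfied, not on whether the underlying state still holds.
        return all(c.satisfied for c in self.conditions)

supervisor = Condition("engage a construction supervisor")
compact = Compact([supervisor])

supervisor.satisfied = True             # the MCA engages a supervisor once
print(compact.approve_disbursement())   # True

# Later understaffing never re-opens the check, so funds continue to flow.
# An ongoing covenant would instead re-evaluate the staffing level at each
# disbursement rather than rely on a one-time flag.
print(compact.approve_disbursement())   # still True
```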
The MCC project generally improved the road by reducing its driving time and roughness and installing bridges that appeared well built. However, several sections of the road had pavement defects and structures, such as drainage systems and retaining walls, that are deteriorating. The MCC project improved the condition of the road. Before rehabilitation, the Samtskhe-Javakheti road was passable but rough, with a driving time of about 8¼ hours. The project decreased the road's roughness, and after rehabilitation, the same trip could be made in about 2¾ hours. MCC reported roughness measurements before rehabilitation indicating that a vehicle would have to travel at under about 30 miles per hour for passengers to ride comfortably. Road roughness measurements made after project completion indicated that passengers could ride comfortably even at speeds of 75 miles per hour. We traveled a portion of the road that was bypassed by the rehabilitation project and found it to be filled with potholes and patches. We found the new road was much smoother (see fig. 6). The scope of the project also included rehabilitating or rebuilding 27 bridges (see fig. 7). In March 2012, MCC's independent engineer reported that all bridges were performing well and had only a few defects, such as erosion of embankment slopes and incomplete guardrail hardware. We observed only minor defects, such as a small chip in a bridge beam, but no apparent quality problems. While the pavement in some lots appeared to be in good condition, the pavement in other lots was not. We observed that the pavement in road lots 1 and 7 was smooth with minimal defects. However, we found pavement deterioration in parts of 5 of the 11 road lots. The amount of deterioration and completed repair work varied in those lots, which constituted about 60 percent of the total kilometers of the final project. In lots 4 and 6ii, surface deterioration had been treated with a surface sealer to keep the surface from deteriorating further (see fig. 8). While seal coating may keep water from entering the cracks and make the road look better in the short term, it does not add pavement strength. As a result, deterioration will continue under the anticipated increased traffic loads for the project if the underlying cause of the cracking is not repaired. Lots 2 and 3 had undergone patching, but in one case the patch failed and the deterioration had continued beyond the patch, as shown in figure 9. In other places, we found continued cracking in need of repair, as shown in figure 10. This type of cracking—known as alligator cracking—is caused by fatigue failure of the asphalt surface, which is related to weakened layers of asphalt beneath the pavement, insufficient pavement thickness, excessive loading, or some combination of these factors. In lot 3A, much of the pavement was failing and under repair. The contractor was in the process of milling the top layer of pavement in some areas and performing full-depth patching in other areas (see fig. 11). In June 2011, MCC's independent engineer noted that the entire lot 3A section of road (15 kilometers) had been constructed poorly and had moderate to severe levels of distress in the pavement, which indicated that a poor quality of asphalt had been used. The construction supervisor stated that the contractor paved the road in lot 3A in two layers and that the second layer was paved in rainy weather to complete the project on time. However, the second layer did not bond to the first and thus fell apart.
According to the independent engineer, some portions of the road will require full-depth reconstruction. A Georgian government official stated that the lot 3A contractor had agreed to replace the base materials in some places and repave the entire lot. The construction supervisor's most recent (December 2011) list of pavement defects indicated that pothole patching and surface dressing were needed for lots 2, 3, 3A, 5i, and 5ii. However, the independent engineer's March 2012 trip report stated there were more extensive pavement defects that required correction measures such as pothole repair, surface dressing, crack repairs, or full-depth reconstruction for lots 1, 2, 3, 3A, 4, 5ii, 6ii, 6iii, and 7. The independent engineer also noted that in several locations the defects resulted from incorrectly repaired previous defects, inadequate winter maintenance of the roads, and, in one section of road, traffic loads heavier than the road was designed to carry. The independent engineer also stated that if the roads are not correctly repaired, they will worsen. Defects not properly repaired will likely fail under increased traffic loads or further deteriorate, creating potholes as water enters the cracks in the winter and then freezes. On the basis of the independent engineer's assessment that some of the project contractors were not meeting contract specifications, MCC sent a letter to MCA-Georgia in September 2010, noting that the work methods on lots 2, 3, 3A, and 4 were not to the standards expected and that the base material and pavement compaction required immediate improvement. The letter also stated that if the contractors did not improve the work, it would not be accepted. Structures such as drains and retaining walls are critical to a road's longevity. A working drainage system helps to keep water off the road, which is critical to safety and to keeping pavement from prematurely deteriorating. However, we found defects in the drainage systems of 7 of the 11 lots. For example:

In lots 1, 5ii, 6i, 6ii, and 7, we found some of the drainage channels collapsing or cracked, which could cause the drains to become blocked (see fig. 12).

In lots 2, 6i, and 6ii, we found concrete drainage channels with defects because of poor concrete construction (see fig. 13).

In lots 3, 5i, and 5ii, drains were installed above the water level in some places, making it impossible for water to drain off the road (see fig. 14).

According to MCC's independent engineer, the quality of the drain placement and the construction of the tops were inadequate because the concrete did not cure properly or had already started to harden before it was poured. If the drainage system does not work properly, the water will saturate and weaken the underlying ground and cause the road to deteriorate, or it will freeze in the winter and create a safety hazard. Furthermore, we observed a failed retaining wall that, if left unrepaired, could damage the road. Because the retaining wall had failed, serious erosion had occurred; if the erosion continues, it will progress until it reaches the road, jeopardizing the road's future usefulness (see fig. 15). We also found additional erosion in lots 2, 6ii, and 7 (for example, see fig. 16). The independent engineer's March 2012 report indicated a few erosion concerns for lots 2, 3, and 7 that needed to be corrected. The corrections are necessary for motorist safety and to protect the pavement, bridges, and retaining walls.
Although the construction supervisor certified the contracted road work as substantially complete by the end of the compact, the previously described construction defects had not been repaired. Once the work was certified as substantially complete, responsibility for the roads moved from the contractors to MCA-Georgia, and final contract payments with MCC funds were made. However, according to the contracts, contractors continue to be responsible for the completion of any remaining work and for defects related to work quality during a 1-year defects liability period. The transfer of the road lots to MCA-Georgia included a list of about 700 items, identified by the construction supervisor, to be completed or repaired after substantial completion and before the end of the defects liability period (see table 1). The construction supervisor can add defects to the list if they appear during the 1-year liability period. MCC officials stated that it is desirable to complete as much of the work as possible before the defects liability period starts. However, it was necessary to accept the work as substantially complete before the end of the compact time frame so that the final MCC funds disbursement could be made to MCA-Georgia. As a result, the work that remained and the repair of the remaining defects were moved into the defects liability period. Additional factors present challenges to ensuring the defective work is adequately repaired. MCC's independent engineer reported in October 2011 that the repair work under way on lot 3A was not in accordance with standard procedures and that the road that had been patched was unsatisfactory. For example, the contractor did not apply sufficient bonding material to ensure that the layers of asphalt would adhere to each other before laying additional asphalt. In addition, the independent engineer commented that patch cutting and laying of asphalt were not properly done. We observed in December 2011 that much repair work remained to be done. In addition, no construction supervision firm was in place for about 2 months of the 2011 summer construction season, during August and September 2011. According to the October 2011 report of MCC's independent engineer, this may have affected the contractors' progress in rectifying construction defects. The gap in supervision occurred because the Georgian government contracted with a new construction supervisor for the defects liability period after the compact-funded construction supervisor's contract ended. The new construction supervisor provided us with a summary of defects remaining at the end of 2011, but it was not in the same format as the original defects list. It was thus impossible to determine which defects had been corrected and which had been added. The Georgian government held performance guarantees from the contractors to ensure the work was completed. However, correction of the work could not be completed in the 1-year defects liability period, and the independent engineer reported in March 2012 that the performance guarantees for lots 1, 4, 6ii, and 6iii had expired before the lots were accepted as complete. The independent engineer also reported that the performance guarantees for lots 2, 3, 3A, 5i, 5ii, 6i, and 7 had been extended until August 2012, past the dates by which those lots were expected to be accepted. Although several officials in Georgia stated that the repairs will be made, MCC has little ability to ensure the work will be done or done correctly.
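The expired guarantees reduce to a simple date comparison that, per the independent engineer, failed for four lots. A minimal sketch of the check follows; only the 1-year defects liability period and the August 2012 extension come from the report, and the per-lot dates are invented placeholders used to illustrate the logic:

```python
from datetime import date, timedelta

DLP = timedelta(days=365)  # 1-year defects liability period

def guarantee_covers_dlp(substantial_completion: date,
                         guarantee_expiry: date) -> bool:
    """A performance guarantee should outlast the defects liability period."""
    return guarantee_expiry >= substantial_completion + DLP

# A lot certified near the compact end (April 2011) needs its guarantee to
# run to roughly April 2012 or later (hypothetical expiry dates shown):
print(guarantee_covers_dlp(date(2011, 4, 1), date(2012, 1, 15)))  # False: expired early
print(guarantee_covers_dlp(date(2011, 4, 1), date(2012, 8, 31)))  # True: extension suffices
```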
MCC has little oversight ability to ensure the work is completed now that the compact has ended. For example:

MCC reported that documentation regarding the status of the projects in the defects liability period is held by the Georgian government. While the Georgian government provided MCC the project status for 3 of the 11 lots as of April 2012, the documentation provided was not in English.

MCC's independent engineer made its last trip to Georgia to review the status of the project in March 2012, just before its contract expired. MCC will therefore have little technical assistance in determining the extent to which all quality issues were addressed through the planned end of all of the defects liability periods in July 2012. MCC stated that it is considering other arrangements to support a site inspection in June 2012.

All funds have been paid to the Georgian government for the project, and the compact's conditions precedent, such as the requirements to have a project management consultant and a construction supervisor in place, are no longer in force. MCC officials stated that they therefore have no authority to ensure the road is repaired appropriately before the Georgian government takes final acceptance of the roads and releases the retained funds to the contractors.

To sustain planned benefits such as reduced travel times and reduced user costs, Georgia will need to keep the road operational and maintain the pavement in good condition. Before signing the compact, MCC took several steps to ensure that Georgia would be able to sustain the planned benefits of the rehabilitated road. However, we found that regular maintenance requirements, snow removal operations, and limited funds will challenge the Georgian government's ability to sustain MCC's investment in the road. MCC took steps to ensure sustainability by including conditions precedent in the compact and by funding some equipment for road maintenance. MCC officials stated that the Georgia compact included a condition precedent requiring the Georgian government to maintain a certain level of funding to ensure proper maintenance of the road during the compact. MCC officials further stated that the condition precedent followed a similar requirement, emphasizing that the government have the resources necessary to care for its national roads, set in a World Bank loan agreement entered into slightly earlier than the MCC compact. MCC reported that, to sustain the economic opportunities generated by the road improvements, Georgia increased road maintenance funding from $33.6 million in 2006 to $56 million in 2010. In addition, MCC officials stated that the Roads Department of the Ministry of Regional Development and Infrastructure had been working with the World Bank to develop the institutional framework and technical capacity to provide good road maintenance. MCC officials also stated that they allowed MCA-Georgia to use funds left over at compact end to purchase some equipment (such as an excavator and a road-patching vehicle) that would help equip the Georgian road department to perform maintenance. Finally, MCC officials stated that Georgia, like other developing countries, will face difficult decisions in how it spends its money.
They noted that the Georgian government has a preference for constructing new roads instead of maintaining old ones; however, they believed that Georgia would maintain the new road as a source of national pride and hoped that, by working with MCC and other infrastructure development partners such as the World Bank, the government had come to realize the value of maintaining its investments. During our visit to the road, we found that many maintenance items not covered under the contractors' defects liability period were not being done. For example, we found several lots where the pavement markings were worn and needed to be repainted, guardrails and concrete barrier walls had been damaged and not repaired, drainage systems needed repair, and eroded material was filling the drainage system and had not been cleaned out (see fig. 17). The MCC independent engineer also noted in June 2011, October 2011, and again in March 2012 that routine maintenance seemed to be lacking on some lots, including cleaning drainage channels and culverts, repairing damaged guardrails, repairing erosion spots, and repairing other miscellaneous damage. A Georgian official stated that there were maintenance contracts in place to make these kinds of repairs, and the MCC independent engineer stated in March 2012 that it did find Georgian road department contractors performing some routine maintenance tasks; however, the engineer noted that additional efforts were needed and that a lack of snow removal in the 2011-2012 winter had likely resulted in increased road deterioration. The feasibility study for the road project noted that portions of lots 3, 4, 5ii, 6i, and 6ii are prone to snow drifts and that part of lot 4 is sometimes closed from October to March. Snow removal is part of keeping the road operational. In its absence, the road is closed to traffic, and the planned benefits of the improved road, such as reduced user costs, reduced travel time to Tbilisi, and economic benefits from increased trade, will not be fully realized. In the curved areas of the road prone to icing, we saw small piles of salt and sand; however, we did not see snow removal equipment or any winter maintenance operations, even though Georgian government officials stated they had maintenance contracts in place to provide snow removal operations. The project's feasibility study had recommended installing snow fences to minimize snow drifts on the road, but on the basis of environmental concerns, MCA-Georgia chose instead to plant trees (living snow fences) to stop the snow from drifting across the road and to reduce maintenance costs. However, many of these trees did not survive, and living snow fences take several years to provide effective snow control. During our December 2011 field work, we found that the road was closed in lot 4 for the 2 days we were on-site because of snow drifts on one short portion (about a quarter of a kilometer) of the lot. The project included electronic message signs on the road leaving Tbilisi to allow motorists to choose an alternate route. In addition, we found other portions of lot 4 to have only one lane open to traffic (see fig. 18). The independent engineer's March 2012 trip report stated that the Georgian road department's snow removal operations had been lacking during the 2011-2012 winter season, which had heavy and repeated snow storms. The engineers stated the road had been closed in lot 4 from December 2011 until they visited in March 2012, and they found other sections of the road with only narrow lanes open, resulting in traffic jams and minor accidents.
The Georgian government may not have a sufficient maintenance budget to maintain and operate the road. A Georgian government official stated that the government had about $63 million in its 2012 budget to maintain roads. This appears to be a decrease from previous years' road maintenance budgets ($81 million in 2011 and $90 million in 2010), although it is still an increase over the $56 million that, according to MCC, Georgia had budgeted in 2010 to fulfill a condition precedent in the compact. Furthermore, of the overall 2012 road maintenance budget, the government had budgeted about $720,000 for maintaining the MCC road specifically. However, this amount of funding may be insufficient because MCA-Georgia approved paying contractors almost $700,000 to provide winter snow removal for only part of the road (lots 2, 3, 3A, and 4) during the 2010-2011 winter. The road construction defects discussed above may increase maintenance costs, decrease the life span of the project, and result in reduced benefits from the project. Even if the road defects are adequately repaired, they could increase the cost of maintenance because of the need to seal cracks at the edges of pavement patches and to reseal road surface treatments periodically to protect the pavement. If not adequately repaired, the roads will need ongoing maintenance to keep them in such condition that they can provide benefits to the citizens of Georgia. In Benin, MCC constructed several infrastructure improvements to the Port of Cotonou, including a jetty, a wharf, internal port roads, a railway, and security and electricity distribution systems. The project was intended to increase the efficient transport and volume of goods flowing through the port. At project completion, the quality of construction generally met established quality standards. However, several of the port's critical components were inoperable at the end of the compact, including the new south wharf, the port security system, and the electricity distribution system. The government of Benin's inability to supply the resources, manpower, or policies needed to operate all of the port's components calls into question whether the port project will meet expected compact results or be sustainable for the life of the infrastructure. In February 2006, MCC signed a compact with Benin, providing $307 million for four projects—Access to Land, Financial Services, Justice, and Markets—to improve physical and institutional infrastructure and increase private sector activity and investment. The Access to Markets activity, which accounted for just over $169 million, or 55 percent of the total compact funding, went to improve the Port of Cotonou's infrastructure, specifically to increase efficiency and the volume of goods flowing through the port. By the time the compact ended, the final cost of the infrastructure improvements had increased to about $188 million, accounting for over 60 percent of the final compact amount (see fig. 19). The compact entered into force in October 2006 and ended in October 2011. The project components were awarded to construction contractors in a series of lots, as follows (see fig. 20):

lot 1: jetty to slow the rate at which sand will fill the access channel;

lot 2: south wharf, a new wharf intended to increase the volume of goods;

lot 3: east-west road, security, electricity distribution system, fire protection, and lighting; and

lot 3A: bypass road, railway, boundary wall, truck parking lot, and lighting.
In addition, the MCC funds allowed the port to purchase oceanographic equipment, antipollution equipment, and a tugboat. While the project was largely completed as planned, components for a proposed lot 4—which included a storage facility for dry bulk goods such as grains and sand and a fish quality inspection station—were deemed not viable once MCA-Benin conducted its feasibility studies. As a result, MCA-Benin did not tender a bid for that lot. According to MCC officials, the funds originally planned for those items were shifted to the other infrastructure components. The funds also helped cover cost increases and additional work on the wharf, such as increasing wall length and dredging the berth. MCC's two primary challenges in completing the port project were overcoming a late start to the construction and managing the underperformance of one contractor. Despite these challenges, most of the infrastructure components had no or only minor quality issues. However, one lot had over 500 uncompleted tasks or defects when the compact closed. MCA-Benin did not sign contracts with its construction contractors until almost 3 years after the compact entered into force. During those years, MCA-Benin studied what components of the port project would be feasible. As a result, construction contractors had just 2 years to complete their work (see fig. 21). In addition, MCA-Benin had to manage the poor performance of the contractor for lot 3 to get the project finished within the compact's time frame. According to MCC, the lot 3 contractor experienced internal management problems, missed important contract deadlines, did not perform contracted work, and provided inconsistent information to MCA-Benin and MCC. In August 2010, MCA-Benin terminated components of the contract, including the bypass road, railway, parking lot, and boundary wall. MCA-Benin awarded the terminated components as lot 3A to the contractor for the jetty, which had been successful in meeting the time frames for the jetty component. After several extensions of time, the lot 3 contractor's remaining work was certified as substantially complete, months after the originally anticipated contract completion date and 1 day after the end of the compact. However, several components were left to be finished or corrected during the defects liability period. We found some of the project components to be completed and functioning for their intended use, with only minor repairs needed during the defects liability period. For example, the lot 1 jetty was installed and was reducing the amount of sand coming into the port from the ocean, thereby reducing periodic dredging maintenance costs (see fig. 22). We did observe small areas of cracking on the jetty's concrete surface, which will require future maintenance. The construction supervisor reported that some project components needed to be repaired or completed upon substantial completion of the contracts. However, because the construction supervisor deemed these issues minor, it was able to certify each contract as substantially complete and allow each contractor to make repairs or finish the work during the 1-year defects liability period following the substantial completion of the contract.
MCC officials stated that the use of the defects liability period to complete work was not an ideal situation, but it was appropriate for situations in which the contractor needed to rely on an outside entity to complete the work (such as for the electricity distribution system) and in cases where the construction supervisor deemed the work to be minor. We observed some of these items during our field work and also concluded that they were generally minor in nature; however, lot 3 had over 500 items to be completed or corrected in the defects liability period (see table 2). As of April 2012, the project management consultant provided documentation that only 20 items remained to be completed and 5 items were listed as outstanding defects in lot 3. The consultant reported no uncompleted work or outstanding defects remaining from the takeover date on lots 1, 2, or 3A. For lot 3, the construction supervisor reported that the minor items needing to be completed or corrected included connecting the fire pump to the power supply, completing some paved areas, and installing a truck weigh station. In lot 3A, we also observed uncompleted work—such as lighting poles that had not been completed in the 250-truck parking lot—and some minor defects needing repair—such as missing manhole covers in the road and a leaking pipe connecting the water tank to the fire control system (see fig. 23). Although MCC took steps to ensure the port project's sustainability, many of the project's key components—the south wharf, the security system, the electricity distribution system, and the fire station—were either not operational or only partially operational at the end of the compact. The south wharf required additional work by the Port Authority and by the concessionaire hired to operate the wharf before it could be functional, and the Port Authority had not ensured that all other necessary infrastructure and staffing were in place for the security system, the electricity distribution system, and the fire station. MCC took several steps to ensure that the government of Benin could sustain the operations and maintenance of the project components. These steps included conducting a feasibility study, incorporating conditions precedent into the compact, hiring a port advisor, requiring a compact closure plan, and identifying, in the compact completion letter, steps the government of Benin should take to support sustainability.

Feasibility study: In accordance with its policies, MCC funded a study to determine the technical and financial feasibility of the port activities proposed by the government of Benin. Through this process, MCC identified activities that would provide an economic benefit to Benin and ensure the likelihood of future sustainability.

Conditions precedent: Two conditions precedent were included in the compact to help ensure the sustainability of the port. First, the compact required the Port Authority to enter into a contract with a private firm to operate the new south wharf to ensure its open and transparent operation, eliminate corruption, and improve operations. Second, MCC required the Port of Cotonou to meet the International Ship and Port Facility Security code. Meeting the code means that ships stopping later at U.S. ports would not be required to undergo increased security scrutiny, thus decreasing costs to shipping companies.
Port advisor: MCC funded the hiring of a port advisor to review port operations and make recommendations to improve the port's operations and to ensure adequate cash flow, through increased shipping fees, to operate the port.

Compact closure plan: MCC requires all MCAs to create a Program Closure Plan outlining the steps the MCA will take when the compact ends to finalize any compact commitments in an orderly fashion. The Program Closure Plan for Benin includes some steps aimed at helping sustain the compact's investment, including for the Port of Cotonou. Most notably, the compact closure plan describes the government of Benin's intention to establish an agency, in part, to complete and implement an "MCA-Benin experience sustainability program."

Compact completion letter: MCC also sent a letter to the government of Benin in January 2012 to formally mark the conclusion of the compact and to provide final recommendations to ensure the sustainability of compact investments, among other things. In addition, MCC noted that efforts made by Benin to maximize the results and ensure the sustainability of this compact would be considered in decisions related to a potential second compact. The letter specifically identified the following as actions the government of Benin needs to complete: ensure the competitiveness of the Port of Cotonou and increase its throughput (including ensuring the fluidity of traffic through the port, implementing a suitable operations scheme for the truck parking facility, and controlling total fees charged to importers); complete customs department reforms; enforce port security systems (including control of truck and pedestrian access traffic); and execute the port channel access improvements required to meet the terms of the south wharf concession agreement and to achieve the intended increase in port capacity.

Despite the steps MCC took to help ensure sustainability, several key port components were not operational at the end of the compact because the Port Authority had not taken the necessary steps to operate all project components (see table 3). The Port Authority's inability to operate all components of the port at compact completion calls into question its ability to maintain port operations and to achieve MCC's anticipated economic return. MCC funded a new south wharf to increase tonnage moving through the port, and it ensured that the Port Authority contracted a concessionaire to operate the wharf in order to increase private investment and generate income for the Port Authority. However, the wharf is not in operation because the Port Authority has not completed additional dredging and the concessionaire has not finished the landside works, such as paving the wharf area or installing cranes (see fig. 24). In exchange for the right to manage and operate the south wharf, the concessionaire paid the Port Authority a concession fee. The Port Authority intended to use that fee to deepen the access channel—its contractual obligation under the agreement—so that larger cargo ships could use the concessionaire's facility. The concessionaire also agreed to construct or install the necessary landside equipment and infrastructure. However, because of an error in assessing the amount of work to be done and an underestimation of the cost, the concession fee was insufficient to fund the required dredging. A Port Authority official stated in April 2012 that the Port Authority was evaluating bids for the work.
The concession agreement also stipulated that the concession should be operational 18 months after the concession start date. However, according to a concessionaire official, the concession company had not yet agreed upon a start date with the Port Authority as of April 2012, although the concessionaire was proceeding with constructing its portion of the landside works. MCC officials stated that the concessionaire's construction will be completed in December 2012, and a concessionaire official told us the wharf would be operational in January 2013. However, a concessionaire official stated that if the Port Authority does not honor its part of the concession agreement and finish the dredging, the company may reduce the amount of landside works it completes, such as installing fewer cranes or not paving the entire wharf area, because it could operate the south wharf only for smaller ships. Smaller ships and a reduced wharf area would likely reduce the amount of cargo tonnage through the port and the fees the port would receive from the concession. As of April 2012, the Port Authority had requested the International Finance Corporation's assistance in investigating how it could fund the required dredging and meet its commitment under the concession agreement, but it had not yet awarded any contracts to perform the work. The Port of Cotonou may be unable to provide effective security for the port or retain its International Ship and Port Facility Security certification because the MCC-funded port security system was not in operation as of our December 2011 visit. The Port Authority had not hired sufficient staff, enforced its security policies, or maintained its security perimeter. The security system requires 150 to 257 individuals to staff the operations on a 24-hour basis. As of April 2012, there were about 90 security personnel on staff and, according to a Port Authority official, the Port Authority was recruiting an additional 25 people. The construction supervisor stated that as of April 2012 the Port Authority had about 10 individuals staffing only the operation of two control and surveillance centers, which, according to MCC officials, were being operated only during the day. Without adequate trained staff, the security system cannot function to its full capacity. The Port Authority was also not fully enforcing established security policies as of December 2011. For example, the port was not fully controlling access to the port. Even though a Port Authority official stated that everyone entering the port is required to wear some form of identification (a badge, arm bracelet, or uniform), in our December 2011 site visit we observed several visitors with improper or no identification at all. According to a Port Authority official, the MCC-funded access system was still not in full operation as of April 2012. In December 2011, we also observed that a railway access gate had not been installed on the south wharf site, creating a breach in the boundary wall that allowed people to bypass the security system and gain entry into the port. The gate was not a part of the MCC-funded work, and while the Port Authority is working with the wharf concessionaire to install a gate at the location, the temporary measure implemented by the Port Authority leaves the port vulnerable (see fig. 25).
Estimates of the security staffing requirement vary: the Port Authority estimated that 150 security personnel are required to operate the new security system, while the contractor installing the system calculated that about 192 security personnel are required to operate the system 24 hours a day, 7 days a week, and the project management consultant calculated that 257 security personnel were required to adequately secure the port 24 hours per day. MCC funded an electricity distribution system for the port that does not function to capacity because the Port Authority had not ensured that the amount of power transmitted to the port from the power company would be adequate. The MCC-funded distribution system was designed to distribute 15 megavolt-amps of electricity, with an initial planned supply of 10 megavolt-amps; however, the current conduit to the port provides only 2 megavolt-amps. Until the 10 megavolt-amps of power is provided, the contractor cannot make final connections or test the system. As of April 2012, a project management official reported that the increased power service had not yet been provided to the system, and officials did not expect it to be provided before the October 2012 end of the defects liability period. That same month, MCC officials told us that they did not expect the power service to be upgraded before January 2013. The MCC-funded fire station was not in use as of December 2011 because the Port Authority had not hired sufficient staff. The Port Authority received three new fire engines as part of the MCC project but had not increased the staffing level, which MCA-Benin's feasibility study stated was necessary to operate the fire engines on a 24-hour basis. In addition, truck congestion on the roads within the port prevents the fire engines from circulating when needed. As of April 2012, Port Authority officials stated that the fire protection system was still not in service because of construction defects in the water tank valve and because testing had not been completed. However, the port officials stated that they had recruited and trained additional firemen and were recruiting other personnel to operate and maintain the system, such as inspectors, a diesel mechanic, and plumbers. At the time of the feasibility study, the Port Authority had 14 fire prevention staff. In April 2012, a Port Authority official stated that the authority plans for a total staff of 25. Although the east-west port road and the bypass road were completed with only minor quality issues such as missing manhole covers, significant truck congestion jeopardizes their utility. The compact goal was to reduce the average number of hours trucks stayed at the port from 24 hours to 7 hours. However, the average after the compact ended was 28 hours. According to officials from two shipping companies, one of their primary concerns was that truck congestion at the port would likely limit their ability to increase the volume of merchandise passing through the port (see fig. 26). The Port Authority has taken some steps to alleviate the truck congestion. For example, beginning in December 2011, the Benin government engaged a private firm to install tracking and communication devices in trucks that would allow trucks to enter the port when their shipper is ready to load or unload them. However, an official from the firm stated that, as of April 2012, the Benin government had not allowed the firm to initiate the system even though it had been ready to operate since late November 2011.
The government of Benin also has plans to move some operations to off-site "dry ports" where containers will go through customs and be loaded and emptied. However, shipping company officials questioned whether the existing railway will be able to transfer the cargo to the dry ports. According to March 2012 statements by shipping company officials, the Port Authority attempted to implement the use of the railway and a privately operated dry port for containers going to hinterland countries at a site about 55 kilometers away from the port. However, according to one of the shipping company officials, the railway could transport only 90 of the 200 containers that needed to be transported daily to the dry port, a daily shortfall of about 110 containers. One shipping company official stated that the company had over 1,700 containers backlogged in the port in early March 2012, with containers taking an average of more than 25 days to move from the port to the dry port. Shipping officials reported that as of late March 2012, port officials had allowed the containers to be either loaded and emptied in the port or transported by truck to another dry port, but congestion was still a problem. MCC funded the construction of a 250-truck parking lot that was handed over to the Port Authority in September 2011; however, as of April 2012, Port Authority officials stated that the parking lot was not in full use because they had not engaged a company to manage the concession. MCC officials stated that trucks are occasionally moved to the lot to alleviate congestion. Port Authority officials stated that they plan to sign a concession agreement in July 2012 to manage the lot. Some of the challenges MCC faced in Georgia and Benin were not unique. As with other early compacts, insufficient planning, escalation of construction costs, and insufficient MCC review led to project delays, scope changes, and cost increases. In the case of Georgia, even though MCC had a quality assurance framework, it had problems ensuring the quality of its transportation infrastructure project because it did not adequately address problems in contract supervision identified by the independent engineer. As a result, the road had significant pavement defects and numerous quality issues at compact completion. Furthermore, MCC has no leverage over the government or contractors once compacts end, even though contractors may be expected to continue work in the 1-year defects liability period following the contract. In Georgia, the construction contractors were required to remediate quality issues after the end of the contract, but MCC cannot at this point ensure that the repair work is properly done. Even though MCC took steps to provide for the sustainability of its investments in both Georgia and Benin, the projects in both countries have maintenance and operability challenges that jeopardize the benefits they were projected to achieve. In Georgia, the ability of the government to maintain the road is in question. Without sustained maintenance—such as repairing drainage systems and removing snow—the road will need additional repairs and have limited usefulness in the winter. In Benin, key project components, including security and electricity distribution systems and the south wharf, were not operational at the end of the compact. The operation of these and other interconnected systems depends on the partner government, which to date has been unable to fund and implement the work required to begin port operations. As a result, MCC may have invested considerable U.S.
resources in equipment and structures that will not be used to maximum benefit and thus not provide the expected economic benefits. MCC should take this opportunity to review the problems that emerged from Georgia, Benin, and other completed compacts and to establish or strengthen mechanisms by which it can better invest U.S. resources in future compacts. To maximize the quality and sustainability of future projects, we recommend that the MCC Chief Executive Officer take the following actions: To ensure that its quality assurance framework is fully implemented and that transportation infrastructure projects are built to the established quality standards, MCC should (1) review how it uses information and professional recommendations provided by its independent engineers to address identified deficiencies and to ensure projects are constructed to the quality standards set out in contracts, and (2) develop a mechanism to maintain influence through contracts' defects liability periods when they extend beyond the compact end date. To ensure sustainability of compact projects, MCC should evaluate the effectiveness of the tools it uses (such as its feasibility studies and conditions precedent) to ensure that partner countries have adequate infrastructure, staff, and policies necessary to operate and maintain MCC-funded infrastructure following the compact. In written comments on a draft of this report, MCC stated that it agrees with our three recommendations; however, it did not commit to undertaking any specific actions to address them. With respect to the first recommendation to review its use of information and recommendations from its independent engineers, MCC stated that it does guide its oversight of compact projects using the analysis provided by its independent engineers, but that the quality of advice can vary and independent engineers are not always privy to all factors affecting compact programs. However, MCC did not state that it would review its practices regarding how it uses information from these independent engineers. With respect to our second recommendation regarding the need for a mechanism to maintain influence on contracts whose defects liability periods extend beyond compact end dates, MCC noted that it sustains a dialogue with its partner countries after compact closure to emphasize the importance of continued oversight. However, as MCC officials have noted, because its authorizing legislation limits the term of compacts to 5 years, MCC's ability to assist partner countries directly once a compact closes is restricted. MCC did not state that it would seek any additional authority to maintain influence after the end of a compact. With respect to our third recommendation, to evaluate the tools it uses to ensure projects' sustainability, MCC listed ways that it works to ensure sustainability throughout the development, implementation, and closure of compact programs. MCC also noted that it revised its Compact Development Guidelines in January 2012 and included steps to strengthen the agency's assessment of sustainability during compact development. However, MCC did not commit to evaluating the effectiveness of the tools it has used or plans to use to ensure its projects' sustainability. We have reprinted MCC's comments in appendix III. We have also incorporated technical comments from MCC in our report where appropriate. We are sending copies of this report to interested congressional committees and the Millennium Challenge Corporation.
In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix IV. The fiscal year 2008 Consolidated Appropriations Act, Public Law 110-161, mandated that GAO review the results of Millennium Challenge Corporation's (MCC) compacts. For the purpose of this engagement, we examined the quality and sustainability of MCC's two transportation infrastructure projects in Georgia (the Samtskhe-Javakheti Road Rehabilitation Activity) and Benin (the Port of Cotonou Access to Markets Project). Transportation infrastructure is defined as public works that provide the conveyance of passengers or goods from one place to another. GAO selected MCC's compacts in Georgia and Benin as the focus of this engagement through a subjective process. Our selection universe was those compacts ending in 2010 and 2011 that had a transportation infrastructure project. In a previous engagement we had reviewed the MCC-funded transportation infrastructure projects in Cape Verde and Honduras. Georgia and Benin, in combination with these previously reviewed projects, provided some geographic variety as well as the ability to compare two port projects and two large road projects. We based our assessment of the projects' quality on the quality assurance requirements established by MCC, the partner countries' Millennium Challenge Accounts (MCA), and their contractors. MCC requires the MCAs to (1) have an individual project director or to engage the services of a project management firm to help manage the administrative aspects of compact programs, (2) contract with implementing partners (such as construction firms) using MCC's procurement guidelines, and (3) engage a construction supervisor to oversee the day-to-day construction and ensure compliance with contract requirements. For this report, the definition of sustainability is based on the definition from the Organisation for Economic Cooperation and Development's Development Assistance Committee, which defines "sustainability" as "the continuation of benefits from a development intervention after major development assistance has been completed." The Organisation for Economic Cooperation and Development's Development Assistance Committee is an international forum of many of the largest funders of aid with a mandate to promote development cooperation and other policies so as to contribute to sustainable development. We operationalized this definition by specifying that sustainability is the ability of MCC's partner country governments to operate and maintain the new infrastructure in such a condition as is required to produce the projected benefits for the period of time those benefits are calculated. To assess the quality and longer-term sustainability for compacts in Georgia and Benin, we analyzed MCC, MCA, and other documents; interviewed MCC officials and stakeholders; and observed project results in both countries. We reviewed the compact agreements for Georgia and Benin.
We also reviewed documents prepared by MCA officials, independent construction supervisors, project management consultants, MCC independent engineers, and government officials, including monthly reports, special studies, testing reports, and daily inspections. We also reviewed final reports submitted to MCA by contractors on compact activities. We interviewed MCC and MCA officials in both countries regarding the results of each compact activity, including the quality and sustainability of the projects. We visited infrastructure projects in both countries, including visits to the port in Benin and to the Samtskhe-Javakheti road in Georgia. We met with project construction contractors, independent construction supervisors, and MCA project management consultants. In addition, we interviewed officials from the governments of Georgia and Benin about compact implementation, results, and sustainability, including Benin's Ministry of Maritime Economy and Port Authority, and Georgia's Ministry of Infrastructure. We traveled to Benin and the Republic of Georgia in December 2011 and conducted site inspections to verify the extent to which the MCC-funded transportation infrastructure projects had been completed and to observe whether there were any visible deficiencies in construction. All photographs in this report attributed to GAO were taken during this time period. These interviews, document reviews, and site visits were used to determine if the MCAs had implemented MCC's quality assurance framework, if there was supporting documentation to verify that quality testing had been undertaken, if any quality deficiencies were encountered during construction, if any quality deficiencies remain, and whether the infrastructure projects would be sustainable. We were not able to view actual work in progress or visit testing facilities for most infrastructure contracts because the work had already been completed. To determine the amount of funding used for transportation infrastructure projects, we reviewed MCC financial data. We included compact implementation funding—funds disbursed before entry into force to facilitate the implementation of the compact—with other projects not related to transportation infrastructure. MCC enters into a legal relationship with partner country governments, which vests responsibility for day-to-day management of compact project implementation in the MCA, including monitoring and evaluation activities such as setting and revising targets, but such MCA actions require MCC's direct oversight and approval. Therefore, throughout this report, we attribute all decisions related to project rescoping and compact targets to MCC. Finally, some of the reports and documents referenced above were written in French or Georgian. We translated these documents internally to enable our analysis. We conducted this performance audit from November 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Emil Friberg Jr. (Assistant Director), Michael Armes (Assistant Director), Leslie Locke, and Miriam Carroll Fenton made key contributions to this report.
Additional technical assistance was provided by John Bauckman, Lynn Cothern, George Depaoli, David Dornisch, Aryn Ehlow, Etana Finkler, Heather Hampton, Ernie Jackson, and Jena Sinkfield. Millennium Challenge Corporation: Compacts in Cape Verde and Honduras Achieved Reduced Targets. GAO-11-728. Washington, D.C.: July 25, 2011. Millennium Challenge Corporation: Summary Fact Sheet for 17 Compacts. GAO-10-797R. Washington, D.C.: July 14, 2010. Millennium Challenge Corporation: MCC Has Addressed a Number of Implementation Challenges, but Needs to Improve Financial Controls and Infrastructure Planning. GAO-10-52. Washington, D.C.: November 6, 2009. Millennium Challenge Corporation: Independent Reviews and Consistent Approaches Will Strengthen Projections of Program Impact. GAO-08-730. Washington, D.C.: June 17, 2008. Management Letter: Recommendations for Improvements to MCC’s Internal Controls and Policies on Premium Class Air Travel. GAO-08-468R. Washington, D.C.: February 29, 2008. Millennium Challenge Corporation: Projected Impact of Vanuatu Compact Is Overstated. GAO-07-1122T. Washington, D.C.: July 26, 2007. Millennium Challenge Corporation: Vanuatu Compact Overstates Projected Program Impact. GAO-07-909. Washington, D.C.: July 11, 2007. Millennium Challenge Corporation: Progress and Challenges with Compacts in Africa. GAO-07-1049T. Washington, D.C.: June 28, 2007. Millennium Challenge Corporation: Compact Implementation Structures Are Being Established; Framework for Measuring Results Needs Improvement. GAO-06-805. Washington, D.C.: July 28, 2006. Analysis of Future Millennium Challenge Corporation Obligations. GAO-06-466R. Washington, D.C.: February 21, 2006. Millennium Challenge Corporation: Progress Made on Key Challenges in First Year of Operations. GAO-05-625T. Washington, D.C.: April 27, 2005. Millennium Challenge Corporation: Progress Made on Key Challenges in First Year of Operations. GAO-05-455T. Washington, D.C.: April 26, 2005.
MCC was established in 2004 to help developing countries reduce poverty and stimulate economic growth through multiyear compact agreements. As of June 2012, MCC had signed 26 compacts totaling about $9.3 billion in assistance. Seven compacts, including those with Georgia and Benin, closed in 2010 or 2011. Most had a transportation infrastructure project (a road or a port) that received about 50 percent of the compact’s total funding. This report, prepared in response to a congressional mandate to review compact results, examines how MCC ensured the quality and sustainability of MCC’s two transportation infrastructure projects in Georgia and Benin. GAO analyzed MCC documents, interviewed MCC officials and stakeholders, and observed the transportation infrastructure projects in those countries. In Georgia, quality and sustainability issues jeopardize the long-term usefulness of the Samtskhe-Javakheti road project. The Millennium Challenge Corporation (MCC) funded the rehabilitation of about 217 kilometers of road linking the previously isolated Samtskhe-Javakheti region with Tbilisi, the country’s capital, and reducing the driving time from 8 ¼ hours to 2 ¾ hours. The project was intended to increase exports from the region, integrate people in the region with the rest of Georgia, and expand trade with Turkey and Armenia. However, the urgency to meet fixed time frames resulted in problems implementing the project’s quality assurance framework. For example, the construction supervisor did not have enough staff to properly monitor construction and ensure quality. Despite several recommendations from MCC’s independent engineer, MCC and its Georgian counterpart, the Millennium Challenge Account (MCA-Georgia), did not adequately increase the number of construction supervisors, which resulted in pavement defects in parts of 5 of the 11 road sections and deterioration of structures such as drainage and retaining walls. One 15-kilometer section contained enough defects that the road had to be completely repaved. Furthermore, much of the repair work was to be done in the contracts’ 1-year defects liability period, after the compact closed and at a time when MCC no longer had oversight authority. Although MCC took steps to ensure the road project’s sustainability, the Georgian government has demonstrated limited ability to keep the road operational and maintained. In Benin, construction for the Port of Cotonou project generally met established quality standards, but several components were not in operation at the compact’s end. MCC funded the construction of several port infrastructure improvements, including a jetty, a wharf, internal port roads, a railway, and security and electricity distribution systems. The project was intended to increase the efficient transport and volume of goods flowing through the port. However, several components—including the new south wharf, the port security system, and the electricity distribution system—were not in operation at compact completion because the Port Authority had not ensured that the necessary infrastructure, staffing, or policies were in place to operate them. For example, the new south wharf, which was intended to increase the cargo tonnage moving through the port, is not in operation in part because the Port Authority does not have the funds to complete the dredging needed to allow large vessels to access the new wharf. 
Even though MCC took steps to ensure that the government of Benin could sustain the operations and maintenance of the project—such as conducting a feasibility study, incorporating conditions precedent into the compact, hiring a port advisor, requiring a compact closure plan, and identifying steps the government of Benin should take to support sustainability in the compact letter of completion—these steps were not sufficient. As a result, Benin's inability to supply the resources, manpower, or policies needed to operate all of the port's components calls into question whether the port project will achieve expected compact results or be sustained throughout the life of the infrastructure. To ensure that compact projects are implemented to established quality standards, GAO recommends that MCC (1) review how it uses information from its independent engineers, and (2) develop a mechanism to maintain influence on contractor repairs after compact closure. To ensure sustainability of compact projects, GAO recommends that MCC evaluate the tools it uses to ensure that partner countries have adequate resources to operate and maintain MCC-funded infrastructure. MCC agreed with all three recommendations but did not commit to taking any actions to address them.
Global exports of defense equipment have decreased significantly since the end of the Cold War in the late 1980s. Major arms-producing countries, such as the United States and those in Western Europe, have reduced their procurement of defense equipment by about one-quarter from 1986 levels, based on constant dollars. Overall, European nations have decreased their defense research and development spending over the last 3 years; this spending is now about one-third of the relatively stable U.S. research and development funding. Defense exports declined by over 70 percent between 1987 and 1994. In response to decreased demand in the U.S. defense market, U.S. defense companies have consolidated, merged with other companies, or sold off their less profitable divisions, and they are seeking sales in international markets to make up lost revenue. These companies often compete with European defense companies for sales in Europe and in other parts of the world. The U.S. government, led by DOD, has maintained bilateral trade agreements with 21 of its allies, including most European countries, to address barriers to defense trade and international cooperation. No multilateral agreement exists on defense trade issues. Bilateral agreements have been established to provide a framework for discussions about opening defense markets with those countries as a way of improving the interoperability and standardization of equipment among North Atlantic Treaty Organization (NATO) allies. The United States has enjoyed a favorable balance of defense trade, which is still an issue of contention with some of the major arms-producing countries in Europe. This trade imbalance was cited in a 1990 U.S. government study as a justification for European governments requiring defense offsets. However, a Department of Commerce official stated that because European investment in defense research and development is significantly below U.S. levels, European industry is at a competitive disadvantage in meeting future military performance requirements. Reciprocal trade agreements recognize the need to develop and maintain an advanced technological capability for NATO and enhance equipment cooperation among the individual European member nations. A senior NATO official stated that Europe's ability to develop an independent security capability within NATO and meet its fair share of alliance obligations is contingent on its ability to consolidate its defense industrial base. This official indicated that if such a consolidation does not occur, then European governments may be less willing to meet their NATO obligations. European governments have made slow, gradual progress in developing and implementing unified armament initiatives. These initiatives are slow to evolve because the individual European nations often have conflicting goals and views on implementing procedures and are reluctant to yield national sovereignty. In addition, the various European defense organizations do not include all of the same member countries, making it difficult to establish a pan-European armament policy. European officials see the formation of a more unified European defense market as crucial to the survival of their defense industries as well as their ability to maintain an independent foreign and security policy. Individual national markets are seen as too small to support an efficient industry, particularly in light of declining defense budgets. At the same time, mergers and consolidations of U.S.
defense companies are generating concern about the long-term competitiveness of a smaller, fragmented European defense industry. In the past, European governments made several attempts to integrate the European defense market using a variety of organizations. The Western European Union (WEU), the European Union, and NATO are among the institutions composed of different member nations that have addressed European armament policy issues (see fig. 1). For example, in 1976, the defense ministers of the European NATO nations established the Independent European Program Group as a forum for armament cooperation. This group operated without a legal charter, and its decisions were not binding among the member nations. In 1992, the European defense ministers decided that the group's functions should be transferred to WEU, and the Western European Armaments Group was later created as the forum within WEU for armament cooperation. In 1991, WEU called for an examination of opportunities to enhance armament cooperation with the goal of creating a European armaments agency. WEU declared that it would develop as the defense component of the European Union and would formulate a common European defense policy. It also agreed to strengthen the European pillar within NATO. Under WEU, the Western European Armaments Group studied development of an armaments agency that would undertake procurement on behalf of member nations, but agreement could not be reached on the procurement procedures such an agency would follow. Appendix I is a chronology of key events associated with the development of an integrated European defense market. In 1996, two new armament agencies were formed. OCCAR was created as a joint management organization for France, Germany, Italy, and the United Kingdom, and the Western European Armaments Organization (WEAO) was created as a subsidiary body of WEU. As shown in table 1, the two agencies are separate entities with different functions. OCCAR was created as a result of French and German dissatisfaction with the lack of progress WEU was making in establishing a European armaments agency. France and Germany, joined by Italy and the United Kingdom, agreed on November 12, 1996, to form OCCAR as a management organization for joint programs involving two or more member nations. OCCAR's goals are to create greater efficiency in program management and facilitate emergence of a more unified market. Although press accounts raised concerns that OCCAR member countries would give preference to European products, no such preference was included in OCCAR's procurement principles. Instead, it was agreed that an OCCAR member would give preference to procuring equipment that it helped to develop. In establishing OCCAR, the defense ministers of the member countries agreed that OCCAR was to have a competitive procurement policy. Competition is to be open to all 13 member countries of the Western European Armaments Group. Other countries, including the United States, will be invited to compete when OCCAR program participants unanimously agree to open competitions to these countries based on reciprocity. OCCAR officials have indicated that procedures for implementing the competition policy, including criteria for evaluating reciprocity, have not yet been defined. According to some U.S. government and industry officials, issues to consider will include whether U.S.
companies will be excluded from OCCAR procurement or whether OCCAR procurement policy will be consistent with the reciprocal trade agreements between member countries and the United States. OCCAR's impact on the European defense market will largely depend on the number of programs that it manages. OCCAR members are discussing integrating additional programs in the future but are expected to administer only joint programs involving participating nations, thereby excluding transatlantic, NATO, or European cooperative programs involving non-OCCAR nations. Some European nations, such as France and Germany, are committed to undertaking new programs on a cooperative basis. While intra-European cooperation is not new, French Ministry of Defense officials have indicated that this represents a change for France because France no longer intends to develop a wide range of weapon programs on its own. On November 19, 1996, a week after OCCAR was created, the WEU Ministerial Council established WEAO to improve coordination of collaborative defense research projects by creating a single contracting entity. As a WEU subsidiary body, WEAO has legal authority to administer contracts, unlike OCCAR, which operates without a legal charter and has no authority to sign contracts for the programs it is to administer. WEAO's initial task is to manage the Western European Armaments Group's research and technology activities, while OCCAR is to manage the development and procurement of weapon systems. The WEAO executive body has responsibility for soliciting and evaluating bids and awarding contracts for common research activities. This single contracting entity eliminated the need to administer contracts through the different national contracting authorities. According to WEAO documentation, the organization was intentionally designed to allow it to evolve into a European armaments agency. However, it may take several years before the effects of OCCAR and WEAO procurement policies can be fully assessed. Some European government officials also told us that OCCAR's ability to centrally administer contracts is curtailed until OCCAR obtains legal authority. U.S. government and industry officials are watching to see whether OCCAR and other initiatives are fostering political pressure and tendencies toward pan-European exclusivity. As membership of the various European organizations expands, pressure to buy European defense equipment may increase. For example, according to some industry officials, the new European members of NATO are already being encouraged by some Western European governments to buy European defense products to ease their entry into other European organizations. While European government initiatives appear to be making slow, gradual progress, the European defense industry is attempting to consolidate and restructure through national and cross-border mergers, acquisitions, joint ventures, and consortia. European government and industry observers have noted that the European defense industry is reacting to pressures from rapid U.S. defense industry consolidation, tighter defense budgets, and stronger competition in the global defense market. Even with such pressures, other observers have noted that European defense companies are consolidating at a slower pace than U.S. defense companies. The combined defense expenditures of Western Europe are about 60 percent of the U.S. defense budget, but Western Europe has two to three times more suppliers, according to a 1997 Merrill Lynch study.
For example, the United States will have two major suppliers in the military aircraft sector (once proposed mergers are approved), while six European nations each have at least one major supplier of military combat aircraft. In terms of defense revenues, U.S. defense companies tend to outpace European defense companies. Among the world's top 10 arms-producing companies in 1994, 8 were U.S. companies and 2 were European companies. While economic pressures to consolidate exist, European defense companies face several obstacles, according to European government and industry officials. For example, national governments, which greatly influence the defense industry and often regard their defense companies as sovereign assets, may not want a cross-border consolidation to occur because it could reduce the national defense industrial base or make it too specialized. National governments further impede defense industrial integration by establishing different defense equipment requirements. Complex ownership structures also make cross-border mergers difficult because many of the larger European defense companies are state-owned or part of larger conglomerates. To varying degrees, defense industry restructuring has occurred within the borders of major European defense-producing nations, including France, Germany, Italy, and the United Kingdom. In France, Thomson CSF and Aerospatiale formed a company, Sextant Avionique, that regrouped and merged their avionics and flight electronics activities. The French government initiated discussions in 1996 about the merger of the aviation companies Aerospatiale and Dassault, but negotiations are ongoing. In Germany, restructuring has primarily occurred in the aerospace sector. In 1995, Deutsche Aerospace became Daimler-Benz Aerospace, which includes about 80 percent of German industrial capabilities in aerospace. In Italy, by 1995 Finmeccanica had gained control of about three-quarters of the Italian defense industry, including Italy's major helicopter manufacturer Agusta and aircraft manufacturer Alenia. In the United Kingdom, a number of mergers and acquisitions have occurred. For example, GKN purchased the helicopter manufacturer Westland and GEC purchased the military vehicle and shipbuilder VSEL in 1994. European companies have long partnered on cooperative armament programs for the development and production of large complex weapon systems in Europe. Often, a central management company has been created to manage the relationship between partners. For example, major aerospace companies from the United Kingdom, Germany, Italy, and Spain have created a consortium to work on the Eurofighter 2000 program. Another cooperative venture is the development of the European military transport aircraft known as the Future Large Aircraft. Companies from a number of European nations are forming a joint venture company for the development and production of this aircraft. Project-based joint ventures are typically industry-led, but they are established with the consent of the governments involved. (See table 2 for examples of European defense company cooperative business activities for major weapon programs.) Although most cross-border industry cooperation is project-specific, European defense companies are also acquiring companies or establishing joint ventures or cross-share holdings that are not tied to a particular program. Some cross-border European consolidation has occurred in missiles, defense electronics, and space systems.
For example, in 1996, Matra (France) and British Aerospace (United Kingdom) merged their missile activities to form Matra BAe Dynamics. Both companies retained a 50-percent share in the joint venture, but they have a single management structure and a plan to gradually integrate their missile manufacturing facilities. Figure 2 highlights some examples of consolidation in specific defense sectors. Despite attempts to develop a unified European armament policy, individual European governments still retain their own defense procurement policies. Key European countries, including France, Germany, Italy, the Netherlands, and the United Kingdom, vary in their willingness to purchase major U.S. defense equipment. These countries have been involved in efforts to form a unified European defense market, which some observers believe may lead to excluding U.S. defense companies from participating in that market. However, U.S. defense companies continue to sell significant amounts of defense equipment to certain European countries in certain product lines. Europe has a large, diverse defense industrial base on which key European nations rely for purchases of major defense equipment. As in the United States, these European countries purchase the majority of their defense equipment from national sources. For example, the United Kingdom aims to competitively award about three-quarters of its defense contracts, with U.K. companies winning at least 90 percent of the contracts over the past several years. According to French Ministry of Defense officials, imports represented only 2 percent of France's total defense procurements over the past 5 years. Germany and Italy each produced at least 80 percent of their national requirements for military equipment over the past several years. Despite its relatively small size, the Dutch defense industry supplied the majority of defense items to the Netherlands. Notwithstanding European preference for domestically developed weapons, U.S. defense companies have sold a significant amount of weapons to Western European countries either directly or through the U.S. government's Foreign Military Sales program. These sales tended to be concentrated in certain countries and products. U.S. foreign military sales of defense equipment to Europe accounted for about $20 billion from 1992 to 1996. Europe was the second-largest purchaser of U.S. defense items based on arms delivery data, following the Middle East. The leading European purchasers of U.S. defense equipment were Turkey, Finland, Greece, Switzerland, the Netherlands, and the United Kingdom. U.S. defense companies had greater success in selling aircraft and missiles to Western Europe than they did in selling tanks and ships. Of the almost $20 billion of U.S. foreign military sales, about $15 billion, or 75 percent, was for sales of military aircraft, aircraft spares, and aircraft modifications. About $3 billion, or 13 percent of total equipment sales, was for sales of missiles. Ships and military vehicles accounted for $552 million, or less than 3 percent of the total U.S. defense equipment sales. Figure 3 shows U.S. defense equipment sales to Western Europe by major weapon categories. According to U.S. defense company officials, sales of military aircraft to Europe are expected to be important in future competitions, particularly in the emerging defense markets in central Europe. Competition between major U.S. and European defense companies for aircraft sales in these markets is expected to be intense. U.S.
defense companies varied in their success in winning the major European defense competitions that were open to foreign bidders. The Netherlands and the United Kingdom have bought major U.S. weapon systems over the last 5 years, even when European options were available. The United States is the largest supplier of defense imports to both the Netherlands and the United Kingdom. Both of these countries have stated open competition policies that seek the best defense equipment for the best value. In the major defense competitions in these countries in which U.S. companies won, U.S. industry and government officials stated that the factors that contributed to the success included the uniqueness and technical sophistication of the U.S. systems, the industrial participation opportunities offered to local companies, and the absence of a domestically developed product in the competition. For example, in the sale of the U.S. Apache helicopter to the Netherlands and the United Kingdom, there was no competing domestically developed national option, the product was technically sophisticated, and significant industrial participation was offered to domestic defense companies. In the major defense competitions in which U.S. companies competed in the United Kingdom over the last 5 years, the U.K. government tended to choose a domestically developed product when one existed. In some cases, these products contained significant U.S. content. For example, in the competition for the U.K. Replacement Maritime Patrol Aircraft, the two competing U.S. products lost to a product developed by British Aerospace, the upgraded NIMROD aircraft. This British Aerospace product, however, contained significant U.S. content with major components coming from such companies as Boeing. In the Conventionally Armed Standoff Missile competition, Matra British Aerospace Dynamics' Stormshadow (a U.K.-French developed option) won. In this case, the competing U.S. products were competitively priced, met the technical requirements, and would have provided significant opportunities for U.K. industrial participation. Table 3 provides details on some U.K. major procurements in which U.S. defense companies competed. France has purchased major U.S. defense weapon systems only when no French or European option was available. In contrast to the Netherlands and the United Kingdom, the French defense procurement policy has been to first buy equipment from French sources, then to pursue European cooperative solutions, and lastly to import a non-European item. Recently, French armament policy has put primary emphasis on European cooperative programs, recognizing that it will not be economical to develop major systems alone in the future. The procurement policy reflects France's goal to retain a defense industrial base and maintain autonomy in national security matters. As illustrated in table 4, the French government made two significant purchases from the United States in 1995 when it was not economical for French companies to produce comparable equipment or when it would have taken too long to develop. Germany and Italy have made limited purchases of U.S. defense equipment in recent years because of significantly reduced defense procurement budgets and commitments to European cooperative projects. Both countries now have an open competition defense procurement policy and buy a mixture of U.S. and European products. The largest share of these countries' defense imports is supplied by the United States.
In recent major defense equipment purchases from the United States, both Germany and Italy reduced quantities to reserve a portion of their procurement funding for European cooperative solutions. For example, Italy purchased the U.S. C-130J transport aircraft but continued to provide funding for a cooperative European transport aircraft program. As in the other European countries, Germany and Italy encourage U.S. companies to provide opportunities for local industrial participation when selling defense equipment. Table 5 highlights German defense procurement policy and a selected major procurement. As European nations work toward greater armament cooperation, competition for sales in Europe is likely to increase. To mitigate potential protectionism and negative effects on U.S.-European defense trade, both the U.S. defense industry and government have taken steps to improve transatlantic cooperation. U.S. defense companies are taking the lead in forming transatlantic ties to gain access to the European market. The U.S. government is also seeking opportunities to form transatlantic partnerships with its European allies on defense equipment development and production, but some observers point to practical and cultural impediments that affect the extent of such cooperation. U.S. defense companies are forming industrial partnerships with European companies to sell defense equipment to Europe because of the need to increase international sales, satisfy offset obligations, and maintain market access. Most of these partnerships are formed to bid on a particular weapon competition. Some, however, are emerging to sell products to worldwide markets. According to U.S. defense companies, partnering with European companies has become a necessary way of doing business in Europe. U.S. government and defense company officials have cited the importance of industrial partnerships with European companies in winning defense sales there. Many of these partnerships arose out of U.S. companies' need to fulfill offset obligations on European defense sales by providing European companies with subcontract work. When U.S. companies had to find ways to satisfy the customary 100-percent offset obligation on defense contracts in Europe, they began to form industrial partnerships with European companies. With the declining U.S. defense budget after the end of the Cold War, many U.S. companies began to look for ways to increase their international defense sales in Europe and elsewhere. Some U.S. company officials told us they realized that many European government buyers did not want to buy commercially available defense equipment but wanted their own companies to participate in producing weapon systems to maintain their defense industrial base. Forming industrial partnerships was the only way that U.S. companies believed they could win sales in many European countries that were trying to preserve their own defense industries. In addition, several U.S. company officials have indicated that European governments have been pressuring each other in the last several years to purchase defense equipment from European companies before considering U.S. options. These officials stated that even countries that do not have large defense industries to support were being encouraged by other European countries to purchase European defense equipment for the economic good of the European Union. U.S.
company officials believe that by forming industrial partnerships with European companies, they increase their ability to win defense contracts in Europe. U.S. defense companies form a variety of industrial partnerships with European companies, including subcontracting arrangements, joint ventures, international consortia, and teaming agreements. Examples of each are discussed in table 6. According to some U.S. defense company officials, most U.S. industrial partnerships with European companies, whatever the form, are to produce or market a specific defense item. Some U.S. defense companies, however, are using the partnerships to create long-term alliances and interdependencies with European companies that extend beyond one sale. For example, Lockheed Martin has formed an industrial partnership with the Italian company Alenia to convert an Italian aircraft to satisfy an emerging market for small military transport aircraft. This arrangement arose out of a transaction involving the sale of C-130J transport aircraft to Italy. Some U.S. defense company officials see the establishment of long-term industrial partnerships as a way of improving transatlantic defense trade and countering efforts toward European protectionism. DOD has taken a number of steps over the last few years to improve defense trade and transatlantic cooperation. For example, it has revised its guidance on considering foreign suppliers in defense acquisitions and has removed some of the restrictions on buying defense equipment from overseas. In addition, senior DOD officials have shown renewed interest in international cooperative defense programs with U.S. allies in Europe and are actively seeking such opportunities. Despite some of these efforts, some observers have cautioned that a number of factors may hinder shifts in U.S.-European defense cooperative production programs on major weapons. The following U.S. policy changes have been made that may help to improve defense trade: A DOD directive issued in March 1996 sets out a hierarchy for acquiring defense equipment that places commercially available equipment from allies and cooperative development programs with allies ahead of a new U.S. equipment development program. According to some U.S. government and defense industry officials, many military program managers traditionally would have favored a new domestic development program when deciding how to satisfy a military requirement. In April 1997, the Office of the Secretary of Defense announced that DOD would favorably consider requests for transfers of software documentation to allies. In the past, such requests were often denied, which was cited by U.S. government officials as a barrier to improving defense trade and cooperation with the United States. In April 1997, the Under Secretary of Defense (Acquisition and Technology) waived certain buy-national restrictions for countries with which the United States had reciprocal trade agreements. This waiver allows DOD to procure from foreign suppliers certain defense equipment that was previously restricted to domestic sources. European government officials have cited U.S. buy-national restrictions as an obstacle to improving the reciprocal defense trade balance between the United States and Europe. DOD is also seeking ways to improve international cooperative programs with European countries through ongoing working groups and a special task force under the quadrennial review.
Senior DOD officials have stated that the United States should take advantage of international armaments cooperation to leverage U.S. resources through cost-sharing and to improve standardization and interoperability of defense equipment with potential coalition partners. The U.S. government has participated in numerous international defense equipment cooperation activities with European countries, including research and development programs, data exchange agreements, and engineer and scientist exchanges, but these activities only occasionally resulted in cooperative production programs. More recently, senior DOD officials have paid increased attention to armaments cooperation with U.S. allies. In 1993, DOD established the Armaments Cooperation Steering Committee to improve cooperative programs. In its ongoing efforts, the Steering Committee established several International Cooperative Opportunities Groups in 1995 to address specific issues in armaments cooperation. In addition, the 1997 Quadrennial Defense Review, which identified military modernization needs, included an international cooperation task force to determine which defense technology areas the United States could collaborate on with France, Germany, and the United Kingdom. In March 1997, the Secretary of Defense signed a memorandum stating that "it is DOD policy that we utilize international armaments cooperation to the maximum extent feasible." The U.S. government has a few ongoing cooperative development programs for major weapon systems, but most cooperative programs are at the technology level. Some observers indicated to us that there may be some impediments to pursuing U.S.-European defense cooperative programs on major weapon systems because (1) European procurement budgets are limited compared to the U.S. budget; (2) the potential that U.S. support for a program may change with each annual budget review may cause some European governments concern; (3) despite changes in DOD guidance, many military service program managers may be reluctant to engage in international cooperative programs due to the significant additional work that may be required and potential barriers that may arise, such as licensing and technology-sharing restrictions; (4) many U.S. program managers may not consider purchasing from a foreign source due to the perceived technological superiority of U.S. weapons; and (5) European and U.S. governments have shown a desire to maintain an independent ability to provide for their national defense. Efforts have been made to develop a more unified European armament policy and defense industrial base. As regional unification efforts evolve, individual European nations still independently make procurement decisions, and these nations vary in their willingness to buy major U.S. weapon systems when European options exist. To maintain market access in Europe, U.S. defense companies have established transatlantic industrial partnerships. These industrial partnerships appear to be evolving more readily than transatlantic cooperative programs led by governments. Although the U.S. government has recently taken steps to improve defense trade and cooperation, some observers have indicated that practical and cultural impediments can affect transatlantic cooperation on major weapon programs. In commenting on a draft of this report, DOD concurred with the report, and the Department of Commerce stated that it found the report to be accurate and had no specific comments or recommended changes.
The comments from DOD and the Department of Commerce are reprinted in appendixes II and III, respectively. DOD also separately provided some technical suggestions, which we have incorporated in the text where appropriate.

To identify European government defense integration plans and activities, we examined European Union, WEU, OCCAR, and NATO documents and publications. We developed a chronology of key events associated with the development of an integrated European defense market. We interviewed European Union, Western European Armaments Group, OCCAR, and NATO officials about European initiatives affecting trade and cooperation and their progress in meeting their goals. We also discussed these issues with officials at the U.S. mission to NATO, the U.S. mission to the European Union, and U.S. embassies in France, Germany, and the United Kingdom. We interviewed or obtained written responses from officials from six major defense companies in France, Germany, and the United Kingdom about European industry consolidation. We identified relevant information and studies about European government and industry initiatives and discussed these issues with consulting firms and European think tanks.

To assess how the procurement policies of European nations affect U.S. defense companies' market access, we focused our analysis on five countries. We selected France, Germany, and the United Kingdom because they have the largest defense budgets in Europe and their defense industries account for 85 percent of European defense production. Italy and the Netherlands were selected because they are significant producers and buyers of defense equipment. These five countries are also current members of, or are seeking membership in, OCCAR. We interviewed officials from 13 U.S. defense companies, selected on the basis of their roles as prime contractors and subcontractors and the range of defense products they sell in Europe. Most of these companies were prime contractors, and eight were among the top 10 U.S. defense companies, based on fiscal year 1995 DOD prime contract awards. We also discussed the major defense competitions that U.S. companies participated in over the last 5 years, and the factors that contributed to those competitions' outcomes, with officials from these companies and with U.S. government officials.

We discussed procurement policies with European and U.S. government officials. We met with Ministry of Defense officials in France, Germany, and the United Kingdom, as well as U.S. embassy officials in those countries. We did not conduct fieldwork in Italy or the Netherlands, but we did discuss these countries' procurement policies with officials from their embassies in Washington, D.C. We also reviewed documents describing the procurement policies and procedures of the selected countries, as well as U.S. government assessments and cables about major defense contract awards in these countries, and we discussed factors affecting these awards with U.S. government and industry officials. We did not review documentation on the bids or contract awards. We collected and analyzed data on defense budgets and defense trade, including foreign military and direct commercial sales, to identify buying patterns in Western Europe over the past 5 years. We used the foreign military sales data only to analyze sales by weapons category for the five countries and Western Europe.
Direct commercial sales data, which are tracked by the State Department through export licenses, were not organized by weapon categories for the last 5 years. However, we reviewed congressional notification records for direct commercial sales over $14 million for the last 5 years to supplement our analysis of foreign military sales data.

To determine the actions the U.S. industry and government have taken in response to changes in the European defense environment, we interviewed defense company and U.S. government officials within DOD and the Departments of Commerce and State. With U.S. defense companies, we discussed their business strategies and the nature of the partnerships they have formed with European defense companies. We obtained and analyzed recently issued DOD directives and policy memorandums on defense trade and international cooperation and discussed the effectiveness of these policies with U.S. and foreign government officials and with U.S. and European defense companies. We performed our review from January to September 1997 in accordance with generally accepted government auditing standards.

We are sending copies of this report to interested congressional committees and the Secretaries of State and Commerce. We will also make copies available to others upon request. Please contact me at (202) 512-4181 if you have any questions concerning this report. Major contributors to this report were Karen Zuckerstein, Anne-Marie Lasowski, and John Neumann.

The following chronology summarizes key events in the development of an integrated European defense market:

The Western European Union (WEU) was established as a result of the agreements signed in Paris in October 1954 modifying the 1948 Brussels Treaty.

The Treaty of Rome was signed, creating the European Community.

The Independent European Programme Group was established to promote European cooperation in research, development, and production of defense equipment; improve transatlantic armament cooperation; and maintain a healthy European defense industrial base.

The Treaty on European Union was signed in Maastricht but was subject to ratification. The WEU member states also met in Maastricht and invited members of the European Union to accede to WEU or become observers, and invited other European members of the North Atlantic Treaty Organization (NATO) to become associate members of WEU.

The Council of the WEU held its first formal meeting with NATO.

The European Defense Ministers decided to transfer the Independent European Programme Group's functions to WEU.

The Maastricht Treaty was ratified, and the European Community became the European Union.

The French and German Ministers of Defense decided to simplify the management of joint armament research and development programs, and the proposal for a Franco-German procurement agency emerged.

A NATO summit was held, which supported the development of a European Security and Defense Identity and the strengthening of the European pillar of the Alliance.

WEU Ministers issued the Noordwijk Declaration, endorsing a policy document containing preliminary conclusions on the formation of a common European defense policy.

The European Union Intergovernmental Conference, or constitutional convention, convened.

The Defense Ministers of France, Germany, Italy, and the United Kingdom signed the political foundation document for the joint armaments agency Organisme Conjoint de Cooperation en Matiere d'Armament (OCCAR).

The Western European Armaments Organization was established, creating a subsidiary body within WEU to administer research and development contracts.
The four National Armaments Directors of France, Germany, Italy, and the United Kingdom met during the first meeting of the Board of Supervisors of OCCAR. The board reached decisions about OCCAR's organizational structure and the programs it would manage.

The European Union Intergovernmental Conference concluded. A new treaty was drafted, but little progress was made toward developing a common foreign and security policy. The treaty called for the European Union to cooperate more closely with WEU, which might be integrated into the European Union if all member nations agree.

The Board of Supervisors of OCCAR held a second meeting.
GAO reviewed the changes that have taken place in the European defense market over the past 5 years, focusing on: (1) what actions European governments and industry have taken to unify the European defense market; (2) how key European countries' defense procurement practices have affected U.S. defense companies' ability to compete on major weapons competitions in Europe; and (3) how the U.S. government and industry have adapted their policies or practices to the changing European defense environment. GAO's review focused on the buying practices of five European countries: France, Germany, Italy, the Netherlands, and the United Kingdom.

GAO noted that: (1) pressure to develop a unified European armament procurement policy and related industrial base is increasing, as most nations can no longer afford to develop and procure defense items solely from their own domestic companies; (2) European governments have taken several initiatives to integrate the defense market, including the formation of two new organizations to improve armament cooperation; (3) European government officials remain committed to cooperative programs, which have long been the impetus for cross-border defense cooperation at the industry level; (4) some European defense companies are initiating cross-border mergers that are not tied to government cooperative programs; (5) although some progress toward regionalization is occurring, European government and industry officials told GAO that national sovereignty issues and complex ownership structures may inhibit European defense consolidation from occurring to the extent that is needed to be competitive; (6) until European governments agree on a unified armament policy, individual European countries will retain their own procurement policies; (7) like the United States, European countries tend to purchase major defense equipment from their domestic companies when such options exist; (8) when national options do not exist, key European countries vary in their willingness to buy major U.S. weapon systems; (9) trans-Atlantic industrial partnerships appear to be evolving more readily than trans-Atlantic cooperative programs that are led by governments; (10) U.S. defense companies have established these trans-Atlantic partnerships largely to maintain market access in Europe; (11) U.S. defense company officials say they cannot export major defense items to Europe without involving European defense companies in the production of those items; (12) some U.S. defense companies are seeking long-term partnerships with European companies to develop a defense product line that will meet requirements in Europe or other defense markets; (13) they believe such industrial interdependence can also help counter any efforts toward U.S. or European protectionism and may increase trans-Atlantic defense trade; and (14) the U.S. government has taken several steps over the last few years to improve defense trade and trans-Atlantic cooperation, but some observers point to practical and cultural impediments that affect U.S.-European cooperation on major weapon programs.
The CDC's National Immunization Program provides grants to states and 28 urban IAP areas for the purpose of controlling vaccine-preventable diseases. The Congress made available at least $142 million for these grants in fiscal 1995. The portion of these funds received by a particular grantee is based largely upon the amount received the previous year. In addition to these funds, consistent with statements of the Senate Appropriations Committee, CDC has awarded annual incentive grants to states since fiscal 1994 to improve the immunization levels of 2-year-olds. For awards in fiscal 1994, CDC allocated incentive grants based on state-supplied estimates of the percentage of fully immunized 2-year-olds. To establish a common basis for awarding subsequent grants and to monitor progress toward early childhood immunization objectives, CDC designed, and starting in fiscal 1994 began to conduct, the National Immunization Survey. In fiscal 1996, $33 million was allocated for such incentive grants.

With the advent of the NIS, states had no further obligation to produce statewide coverage estimates and were able to use the grant funds formerly devoted to such measurement for other activities. However, most states' former methods for estimating immunization coverage were much less expensive than the NIS, which CDC has heretofore financed at no cost to the states. Lately, CDC has made inquiries of state health officers regarding their willingness to devote certain percentages of grant funds to support the NIS (see p. 9).

To meet CDC's former requirement for measuring preschoolers' immunization coverage, all states used either school retrospective surveys or other population-based methods to estimate immunization coverage. Most states estimated immunization coverage among preschoolers by reviewing the immunization records of children entering first grade or kindergarten to determine whether their immunizations were up-to-date when the children were younger, typically when they were 2 years old. This method has both disadvantages and advantages. It produces estimates that are about 3 years old by the time the data are gathered, and school records may selectively capture only those immunizations required for school entry; these minimum requirements vary to some extent across states and may not include the newer vaccines. Because the retrospective method uses data that are already collected for the purpose of verifying immunization at school entry, it is fairly inexpensive and enables some states to develop estimates of immunization coverage at substate levels for the use of counties or state health districts. Records of the immunizations required for school entry should also provide more accurate dates of immunization than can be obtained in interviews with parents, who frequently do not have ready access to immunization information. Those states that did not use the retrospective method used others, such as birth certificate surveys and registry-based methods, that required more original data collection than the retrospective survey but produced more current coverage estimates while providing the states with other benefits or additional information about their specific activities.

In 1995, CDC dropped its requirement that grantees produce an independent assessment of preschool immunization coverage with the view that the estimates from its new NIS would supplant the data that had formerly been gathered by grantees.
In general, in assessing the quality of survey findings, analysts should consider a variety of types of error that may affect a survey result. These include errors that arise because (1) surveys only involve a sample of the population of interest, (2) some of the sampled individuals may not respond to the survey, and (3) some of the population of interest may not be covered by the group from which the sample was chosen. In addition, there are problems associated with interviewers, the respondent, or the questionnaire, such as unclear questions or respondents' difficulty in recalling the answers. What is commonly quoted in the reporting of poll results as the "margin of error," typically plus or minus 3 percent for a random sample of 1,500, represents only the error attributable to the first factor named above. Assessing the quality of survey results also requires considering the extent to which the other sources of error may have affected the accuracy of survey findings.

To respond to your request, we met with officials of the National Immunization Program and the National Center for Health Statistics and with staff of the CDC contractor conducting the National Immunization Survey. We reviewed documents describing the structure, performance, and results of the survey. We also reviewed literature on telephone survey methodology and parental recall of children's immunization status. The methodology report for the 1995 survey was not available as of June 18, 1996, when we conducted our exit conference with CDC; thus, our review of survey methodology was limited to the procedures employed in the 1994 survey and reports of NIS findings issued through June 1996. We understand from NCHS officials that, since issuance of the 1994 methodology report, procedures for using provider data to adjust survey results have been documented and sensitivity analyses have been conducted to measure the impact of changes in various assumptions inherent in the adjustment of survey results.

To provide information on survey costs, we requested that agency officials provide data on total payments under the survey contract and estimates of the costs of related agency activities. We also reviewed the survey contract and trends in the costs billed under the contract. We did not independently verify the payments for the survey or the CDC cost estimates, though we did review the invoices from the survey contractor and assess the agency's cost estimates for their consistency with the activities the agencies conducted. With this exception, our work was conducted in accordance with generally accepted government auditing standards between March 25 and June 18, 1996. Finally, we surveyed state immunization program managers regarding how they had used the results of the NIS and their costs for previous survey approaches. In addition, CDC provided a list of six former CDC contractors, officials, and current grantees that they recommended we contact. We contacted some of these individuals and asked them to provide comments consistent with their familiarity with the survey's cost and methodology.

The cost of the survey includes three major components: expenditures under the contract issued to conduct the survey, the costs of survey-related activities conducted by NCHS, and the costs of those conducted by NIP. Both NCHS and NIP were involved in managing the data collection contract and providing statistical analysis of survey data. In addition to these roles, NIP gathered and reviewed data from the survey respondents' immunization providers.
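To illustrate the first of these error sources: the sampling-based "margin of error" quoted above follows from the standard formula for a proportion estimated from a simple random sample. The sketch below is our own illustration, not a CDC computation:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95-percent margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n. This captures sampling
    error only; the nonresponse, noncoverage, and reporting errors discussed
    in the text are not included."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# The figure commonly quoted for polls: n = 1,500 with p near 50 percent.
print(round(margin_of_error(0.5, 1500), 1))  # about 2.5 points, usually quoted as "plus or minus 3"
```

The same formula shows why a survey plan can promise narrower margins for coverage estimates far from 50 percent: p(1 - p), and with it the margin of error, is largest at p = 0.5.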
When problems with survey software created a need for a larger interviewing staff, some work was done by the Bureau of the Census, but costs for this work are included in the estimates provided by NCHS. Table 1 shows the costs of the NIS contract, survey assistance provided by NCHS and the Bureau of the Census, and NIP's survey-related activities. Only two quarters of data were collected in fiscal 1994; 1995 was the first fiscal year in which the survey operated in all four quarters. Extraordinary expenses were incurred in fiscal 1995 when the agency discovered it needed to reinterview 1994 survey participants in order to identify their immunization providers.

The contract to conduct the NIS provides the recipient with a fixed fee and all reasonable costs for conducting the survey. Expenditures under the survey contract have risen at twice the rate anticipated at its signing, reaching nearly the full face amount of the contract halfway through the 54-month performance period. Contractor and agency representatives attribute the higher rate of expenditures to difficulties arising from the need to replace survey management software; the higher-than-expected number of calls required to identify households in the sampling frame; and the addition of a study to check parents' responses against provider records, which increased the complexity of estimating survey results because of the need to adjust them with provider-derived information. The number of calls required to identify eligible households will continue to be an important determinant of survey costs.

According to estimates from CDC and invoices from the contractor, costs for the NIS were roughly $25 million through March 30, 1996, including $13 million for fiscal 1995, the first complete year of data collection. Insofar as a number of extraordinary expenses were incurred in fiscal 1995, CDC officials anticipate that final survey costs will decrease in fiscal 1996 and future years. However, for fiscal 1997, the agency has requested $16 million for the survey and its administration, the same amount it requested and received in fiscal years 1995 and 1996 on the basis of expenditures in the survey's early implementation stage. CDC officials indicated that the balance of funds received in 1995 for the NIS (about $3 million) was spent on other assessment activities, such as the NHIS and its provider record check study, the Clinic Assessment Software Application, and the provision of technical assistance to the states. However, we have not independently verified this information.

In its report accompanying the fiscal 1996 appropriations, the Senate Appropriations Committee noted its concern that the national findings of the NIS duplicate the findings from the NHIS and that the annual cost of the survey cannot be justified by its utility. The Committee noted particularly that the survey does not provide significant information on high-risk communities for targeting purposes and that, in some respects, it duplicates surveys conducted by each state. In the justification for its fiscal 1997 budget request, CDC acknowledged these concerns and noted that it was holding ongoing discussions with, among others, the Association of State and Territorial Health Officials (ASTHO) and the Council of State and Territorial Epidemiologists in which "various options related to the NIS" were being considered.
For example, CDC explored with ASTHO the level of willingness among state health officers to finance the survey through state grant funds distributed by CDC rather than directly through CDC appropriations. However, ASTHO surveys of its members found that many of the larger states and urban areas were not prepared to devote 6-10 percent of their immunization infrastructure grants to support of the survey. This is consistent with the findings of our survey of state immunization program managers, which indicated that while the NIS findings were widely used to communicate with the news media and respond to legislative inquiries, they were not used by most states for targeting their activities or designing interventions.

NIS surveyors identify households with children between 19 and 35 months old by dialing random telephone numbers and asking a short set of screening questions to assess the presence of children in the correct age range. Surveyors ask for the number of doses of various vaccines the child has received and a variety of demographic information. Even with sampling refinements implemented by the contractor, only a small proportion of randomly generated telephone numbers results in contacting a residence that includes children between 19 and 35 months old. CDC reported that roughly 1.2 million telephone numbers were called to complete 25,247 interviews during the first three quarters of data collection (47 numbers per respondent, with an average of 4-5 calls per number required to reach a respondent). Thus, roughly 200 calls are initiated per completed interview. In view of the size of this undertaking, there was some thought at the time the survey was planned of using it to gather additional health data, but these plans never came to fruition and the final survey addressed only immunization issues.

The survey sampling literature holds that two-phase designs of this kind pay off "only when the first-phase element survey costs are smaller than those for the second phase by a large factor . . . the first-phase sample identifies the members of the rare population inexpensively, and the survey items are then collected from them in the second phase." For the NIS, the reverse is true. It appears that CDC is spending a large sum of money on the first phase of the survey, which provides low-quality immunization data but identifies the sample for the second phase, which provides high-quality immunization data from provider records, albeit for a smaller number of children. Although the provider-supplied data improve the accuracy of survey results, earlier recognition of the problems with relying solely on household data might have led to consideration of more efficient data collection methods.

As of June 1996, summary coverage estimates had been published for the first five quarters of NIS data collection (April 1994-June 1995). CDC shares the survey results with state programs shortly before their publication in Morbidity and Mortality Weekly Report. Thus, the survey findings are available to states and the general public about a year after data are collected.

As documentation for the survey acknowledges: "The high nontelephone noncoverage rates in many of the IAP areas and the large differences between telephone and nontelephone children's vaccination rates indicate that the potential for noncoverage bias is considerable in several IAP areas.
Any candidate estimation technique for the NIS must recognize this potentially large bias, and attempt to adjust for differences between the telephone and nontelephone groups."

Appendix III shows the estimated percentage of households with a 2-year-old child that lack a telephone in each of the IAP areas, and table 2 provides national data from the 1992 and 1993 National Health Interview Surveys detailing the difference in reported immunization rates between children in households with and without telephones. Although only about 5 percent of all U.S. households lack a telephone, the absence of one is more than twice as common in households with children under 2 years old (11.7 percent). These national data, however, mask the wide variation among IAP areas in the percentage of households with children under 2 lacking telephones, which ranges from 2 to 25 percent across the 50 states and 28 urban IAP areas. Exclusion of households without a telephone requires that the survey results be adjusted to account for the positive bias that may result. However, there is no consistent source of information on the immunization rates among children in households without telephones in each area where the NIS is conducted. Consequently, the adjustment for noncoverage of children without telephones is based on a complex procedure involving the application of a statistical model of the probability that a fully vaccinated child in a related national survey resides in a household with a telephone. It is not possible to know whether these adjustments are accurate in each of the states and urban areas covered by the NIS.

The response rate is the estimated proportion of the target group (in this case, households with telephones and age-eligible children) that actually provided data. This rate is important in evaluating survey findings because, to the extent that nonrespondents might have answered differently from those who completed the survey, a large nonresponse rate indicates that survey findings will incorporate bias and require adjustment. For example, CDC analyses of NIS respondents indicated that, as a group, they differed in some respects from census and vital statistics estimates for the population; they slightly overrepresented mothers with more than 12 years of education and in some areas were more likely to report household incomes exceeding $50,000 and less likely to report incomes below $10,000. Thus, answers from the types of respondents who tended to be underrepresented were weighted more heavily in adjusting survey results to arrive at final coverage estimates. Such adjustments will remove bias to the extent that immunization coverage is similar between respondents and demographically similar nonrespondents. However, there is no clear way to test this assumption in the various areas surveyed.

For the calendar year 1994 survey, contractors estimated that the overall response rate was 69.5 percent. Appendix III identifies the overall response rates reported for each surveyed area. Although households determined to be eligible through their completion of the screening questions had high rates of cooperation with the full interview, they represented a smaller portion of the potential households than would have been expected based on census data, indicating that some 17.3 percent of eligible households with telephones were never reached, refused cooperation during the screening phase, or inaccurately responded to the questions about age-eligible children. Although a response rate in this range is not atypical of telephone surveys, nonresponse rates tend to run higher for telephone interviewing than for personal visitation. Also, while overall response rates varied tremendously across states and urban areas, nonresponse to particular questions ranged as high as 26 percent. When combined, these factors sometimes reduce to below 50 percent the effective response rates for key questions (for example, how many times has your child received a polio vaccine?), raising concerns about the accuracy of the resulting estimates.
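The nonresponse arithmetic and the weighting adjustment described above can be made concrete with a small sketch. The response figures below (69.5 percent overall, 26 percent item nonresponse) come from this section; the demographic cell shares and coverage rates are hypothetical, and the calculation is our own illustration rather than CDC's estimation procedure:

```python
# Illustrative sketch only; not CDC's actual estimation procedure.

# 1. Effective response rate for a key question: unit (household)
#    response multiplied by item (question-level) response. With the
#    69.5-percent overall response rate and a question missing 26 percent
#    of its answers, the effective rate is roughly 51 percent; in areas
#    with below-average unit response it falls under 50 percent.
unit_response = 0.695
item_response = 1 - 0.26
print(f"effective response rate: {unit_response * item_response:.1%}")

# 2. Post-stratification weighting: respondents in underrepresented
#    demographic cells are weighted up to their known population shares.
#    All shares and coverage rates below are hypothetical.
population_share = {"mother <=12 yrs education": 0.55, "mother >12 yrs education": 0.45}
respondent_share = {"mother <=12 yrs education": 0.45, "mother >12 yrs education": 0.55}
coverage = {"mother <=12 yrs education": 0.68, "mother >12 yrs education": 0.82}

unadjusted = sum(respondent_share[c] * coverage[c] for c in coverage)
adjusted = sum(population_share[c] * coverage[c] for c in coverage)
print(f"unadjusted: {unadjusted:.1%}  adjusted: {adjusted:.1%}")
# The adjustment removes bias only if nonrespondents in a cell are
# immunized at the same rate as respondents in that cell (the untestable
# assumption noted in the text).
```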
The potential to use household surveys for the collection of childhood immunization data is limited by the accuracy with which household respondents can supply information on children's immunization status. Data available to CDC before the initiation of the NIS, including a report commissioned by the agency in 1975 to review the United States Immunization Survey, questioned the assumption that parents could accurately recall immunization history. Even as the NIS was initiated in 1994, NCHS had a study in progress to assess the accuracy of responses to the immunization supplement of the NHIS. It is well documented that survey respondents have trouble accurately recalling the occurrences of, distinctions among, and number of events that are not particularly salient, are similar in nature, or are repeated more than a few times over a long period. As a result, when surveyed, they sometimes forget when the events occurred and are confused as to how many of which types of events occurred. As a rule, if events are socially desirable, respondents tend to overreport them. The NIS asks about the receipt of 14 different immunizations, given in repeated sets, varying in number, over a 1- to 3-year period. Respondents may not understand the differences among the various types of shots and probably consider getting shots socially desirable. As noted, these elements are among the factors associated with inaccurate reporting. To the extent that a parent is able to answer from an up-to-date vaccination record, few of these errors would occur, but a significant portion of NIS respondents did not have a shot card and consequently reported from memory. Others apparently used shot cards that were not up-to-date.

In December 1994, after the first two quarters of NIS data collection, CDC acknowledged the need to check parents' responses against provider records. At that time, NCHS had determined from its surveys assessing the accuracy of parental responses to immunization questions in the NHIS that household respondents' reports of vaccinations contain a number of errors that result in underestimation of the "true" vaccination coverage levels. NCHS concluded that, although respondent information was necessary for estimation and demographic analysis, household records of immunizations are often not sufficiently up-to-date to provide accurate information, errors in reports from recall exist, and the household information must be adjusted using provider data. Using the findings from the NHIS substudy, NCHS and NIP attempted to adjust the NIS estimates. However, these adjustments resulted in estimates that did not differentiate the IAP areas. Therefore, CDC determined that a provider substudy similar to the one being conducted in connection with the NHIS was needed to produce accurate vaccination coverage estimates from the NIS.
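The agreement between household reports and provider records, discussed next, can be summarized with chance-corrected agreement statistics. The report does not specify which recognized criteria were applied; Cohen's kappa is one widely used measure, sketched here with hypothetical counts:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table: observed agreement
    discounted by the agreement expected from chance alone.
    table[i][j] = children classified i by the household report and
    j by the provider record (0 = not up-to-date, 1 = up-to-date)."""
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(2)) / total
    row_margins = [sum(table[i]) for i in range(2)]
    col_margins = [sum(table[i][j] for i in range(2)) for j in range(2)]
    expected = sum(row_margins[i] * col_margins[i] for i in range(2)) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical counts for 500 children (not NIS data):
table = [[100, 100],   # household says not up-to-date; provider agrees / disagrees
         [80, 220]]    # household says up-to-date; provider disagrees / agrees
print(round(cohens_kappa(table), 2))   # 0.24, only "fair" on common benchmark scales
```

On commonly used benchmark scales, kappa values near 0.2-0.4 are characterized as only fair agreement, even though the raw agreement in this example is 64 percent.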
We reviewed the level of agreement between household reports and physician records from the NIS substudy and confirmed that it was generally only "poor" or "fair" based on the application of recognized statistical criteria. Earlier recognition of this problem might have led to more serious consideration of other survey methods.

The survey plan called for precision of plus or minus 5 percent for a coverage estimate of 50 percent, meaning that the margin of error would have been narrower for more extreme coverage estimates. Owing to various factors, the actual estimates produced by the survey in its first year had margins of error that were often larger. As these margins of error increase, the survey's capacity to detect changes in immunization coverage decreases: it becomes more difficult to distinguish a change of a particular size from simple error in the estimates.

CDC officials have indicated that the survey is useful in that it permits them to rank states and helps to motivate the lower-ranking states to take positive action to improve immunization coverage. However, partly because survey estimates did not meet planned levels of precision, there appear to be remarkably few differences across states. For example, for the most recently published four quarters of NIS data (quarter 3 of 1994 through quarter 2 of 1995), in 31 states the estimated percentage of children up-to-date in their immunizations could not be statistically distinguished from the national percentage. (See figure 1.)

Moreover, the survey is unlikely to show change from quarter to quarter. The Final Sampling Plan for the survey notes, "it will only be possible to detect very large changes between adjacent annualized estimates." For example, a move from 50- to 70-percent coverage would have been the smallest detectable change had the planned level of precision been achieved. As a result, there are no statistically significant changes in full coverage across the first three sets of survey results published by CDC for any of the 78 states or urban areas surveyed. The smallest change that the survey is likely to detect between successive years for a particular IAP area (for example, quarters 1-4, 1995, versus quarters 1-4, 1996) may in some areas approach the size of the largest year-to-year change observed in recent NHIS data for antigens that had been recommended before every child in the survey cohort was born. Thus, even if changes of a typical size were occurring, the survey results might create the false impression of a lack of progress.

At a minimum, the survey's broad margins of error indicate that reporting such statistics each quarter is neither necessary nor advisable. Moreover, the imprecision of the survey estimates, combined with their narrow range, raises questions about whether the survey provides an improved basis for distributing incentive funds across states. NCHS officials acknowledged that they had considered reporting the results only semiannually. However, even this may be too frequent. For those vaccines that have been recommended for a number of years (measles, polio, and 3 doses of diphtheria, tetanus, and pertussis vaccine), coverage is 80 percent or higher, limiting the size of any increases that might occur.
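The limits on change detection follow directly from the margins of error. A rough sketch (our own illustration, which assumes two fully independent estimates; CDC's adjacent annualized estimates overlap by three quarters, so their real sensitivity is lower still):

```python
import math

def min_detectable_change(moe1: float, moe2: float) -> float:
    """Smallest difference between two independent estimates that reaches
    statistical significance at the 95-percent level, given each estimate's
    95-percent margin of error (in percentage points)."""
    return math.sqrt(moe1 ** 2 + moe2 ** 2)

# With the planned plus-or-minus 5-point margins, only year-to-year changes
# of about 7 points or more could be reliably detected even for fully
# independent samples; the wider margins actually achieved, and the overlap
# between adjacent annualized estimates, push this threshold higher.
print(round(min_detectable_change(5.0, 5.0), 1))   # about 7.1 points
```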
CDC officials have indicated that they view identification of pockets of children in need of more timely immunization as a state responsibility rather than a federal one. Although a departmental statement accompanying the fiscal 1997 budget request had indicated the NIS would be useful in identifying pockets of need, HHS officials told us that the statement was in error. CDC has indicated that the National Immunization Survey was not designed to identify such "pockets of need," and consequently, it does not do so. Our survey of state immunization program managers confirmed that they generally drew upon other data for this purpose.

Instead, the primary objectives CDC has for the NIS have been monitoring state progress in achieving childhood immunization objectives, permitting comparison of current coverage rates across states, and awarding incentive funds available to CDC grantees based on their immunization of certain percentages of preschool children. In this connection, we note that the accomplishment of national immunization goals is simultaneously tracked through supplements to the NHIS and that the cost of mounting the NIS (roughly $13 million in fiscal 1995) has been large relative to the total amount of incentive funds it is used to distribute ($33 million in fiscal 1996). We have noted above the survey's limitations for monitoring changes in immunization coverage.

Although the NIS can produce national statistics for some nongeographically defined subgroups, the sample size of the NIS is not large enough to provide subgroup statistics for each state or urban area. On a national basis, the NHIS provides these same subgroup statistics, with the exception of immunization coverage estimates for persons of Hispanic and Asian origin. CDC has suggested that the NIS can be used to evaluate immunization activities; however, the NIS does not currently collect information that could link immunization coverage to specific programs. For example, CDC has encouraged immunization among participants in the Special Supplemental Food Program for Women, Infants, and Children (WIC). However, state estimates of immunization coverage by WIC participation derived from the NIS would have unacceptably large sampling errors unless the survey sample size were increased at substantial expense.

We have not had the opportunity to assess the NIS in light of the list of additional purposes for the survey provided to us by HHS after our exit conference on this study. Further, our survey of state immunization directors turned up anecdotal evidence that a few states view the NIS favorably even though they are unable to use it to target pockets of underimmunized children. However, while the NIS has provided estimates of current state-specific immunization levels for awarding incentive grants and monitoring progress toward early childhood immunization objectives, it has significant limitations when used for these purposes. First, CDC has requested $16 million of its fiscal 1997 appropriation for the survey and its administration; however, the actual costs of the NIS are now expected to be between $12 million and $13 million, and even these amounts would render it an inefficient method of allocating incentive grants expected to total $33 million. Second, the NIS does not provide useful quarterly measurements of statewide immunization levels, and even annual estimates may not be suitable for monitoring the level of annual change that is likely to occur in immunization coverage.
Third, the NIS does not assist in the systematic targeting of underimmunized children, a particular concern if HHS is to achieve the levels of disease reduction and elimination established as goals for the end of this decade. To follow up on this report, we intend to continue to study the various means of identifying pockets of children in need of immunization.

State officials did make use of the NIS findings in communicating with their legislators and the press; however, these objectives could be met by previous methods at markedly lower cost. Moreover, the survey provides only a statewide or citywide indicator of immunization coverage. Insofar as this indicator is not linked to any specific component of the unique set of immunization initiatives pursued by a particular CDC grantee, it is not surprising that it is not useful in helping states to diagnose problems in their ongoing activities, target their efforts, or design interventions. CDC has also stressed the motivational benefit of ranking states. Apart from the concerns we have raised about the survey's capacity to rank states, it is difficult to quantify the benefits of this ranking. In view of these limitations, the Congress may wish to reconsider the NIS's benefits relative to its cost. At a minimum, the Congress may want to ensure that the CDC appropriation reflects a more accurate estimate of the survey's cost.

We provided a draft of this report to CDC officials for their comments, which are reprinted in appendix IV. CDC does not dispute the cost we reported for the NIS or that CDC's fiscal 1997 budget request for the survey exceeds by at least $3 million the survey costs the agency anticipates in fiscal 1997. CDC disagrees with some of our findings regarding the survey's methodology and with our suggestion that the Congress may wish to consider the NIS's benefits relative to its costs. However, the agency bases some of its objections on statements that misrepresent our findings regarding the validity of survey estimates and that contain factual and technical errors, which we have identified in appendix IV.

CDC indicates that, following our presentation of our findings to the agency in late June, we failed to assess all the benefits of the survey that it had identified. However, the additional benefits asserted by CDC after our work was completed break no new ground. Each of these putative benefits stems from the use of the survey findings to compare state performance, monitor changes in immunization coverage across time, or evaluate intervention efforts. However, with few exceptions, our findings cast doubt on the appropriateness or practicality of such uses of survey results in view of the survey's broad margins of error for particular states and urban areas, the generally high level of coverage for individual vaccines, and the difficulty of attributing changes across time or place to any particular causal factor.

CDC asserts that the survey provides an early warning of precipitous changes in immunization coverage; however, we are concerned that the survey may lend a false sense of security by obscuring the existence of substantial pockets of underimmunized children. For example, a recent household survey of central and southeast Seattle found an immunization coverage rate of 57 percent, in contrast to the 79 percent reported by the NIS for the King County area incorporating Seattle. Further, NIS data are not generally analyzed and released until a year after data collection.
We agree with CDC that the survey is technically capable of detecting changes in the use of newly introduced vaccines, but CDC already monitors these changes on a national basis through its NHIS. Other means, such as sales and distribution reports, may be available for monitoring the initial uptake of newer vaccines at less expense. Some data from the late 1980s indicated that immunization coverage levels in the preschool population were quite low and highly variable across areas. While the NIS might have been more useful under those circumstances, the situation appears to have changed: coverage for particular diseases is now quite high, and coverage for long-recommended vaccines has not been highly variable across states. While the survey does provide more timely immunization coverage data than the retrospective surveys formerly used for such data collection, it does so at much higher cost. Thus, in the interest of using immunization resources most efficiently, we have suggested that the cost of collecting and analyzing these data be weighed against their continued utility.

As we agreed with your office, we are sending copies of this report to other interested congressional committees, the Secretary of HHS, the Director of CDC, and other federal and state officials. We will also make copies available to others upon request. If you have any questions or would like additional information, please contact me at (202) 512-3092 or Sushil K. Sharma, Assistant Director, at (202) 512-3460. Other major contributors to this report are listed in appendix V.

[Figure: the recommended childhood immunization schedule, approved by the Advisory Committee on Immunization Practices, the American Academy of Pediatrics, and the American Academy of Family Physicians. Vaccines are listed under the routinely recommended ages; bars indicate the range of acceptable ages for vaccination, and shaded bars indicate catch-up vaccination (at 11-12 years of age, hepatitis B vaccine should be administered to children not previously vaccinated, and varicella zoster virus vaccine should be administered to children not previously vaccinated who lack a reliable history of chicken pox).]

The urban IAP project areas are listed below; the largest city in each project is given, with the county covered shown in parentheses where applicable.
Atlanta, Georgia (Fulton/DeKalb Counties)
Baltimore, Maryland
Birmingham, Alabama (Jefferson County)
Boston, Massachusetts
Chicago, Illinois
Cleveland, Ohio (Cuyahoga County)
Columbus, Ohio (Franklin County)
Dallas, Texas (Dallas County)
Detroit, Michigan
El Paso, Texas (El Paso County)
Houston, Texas
Indianapolis, Indiana (Marion County)
Jacksonville, Florida (Duval County)
Los Angeles, California
Memphis, Tennessee (Shelby County)
Miami, Florida (Dade County)
Milwaukee, Wisconsin (Milwaukee County)
Nashville, Tennessee (Davidson County)
New Orleans, Louisiana
New York City, New York
Newark, New Jersey
Philadelphia, Pennsylvania (Philadelphia County)
Phoenix, Arizona (Maricopa County)
San Antonio, Texas (Bexar County)
San Diego, California (San Diego County)
San Jose, California (Santa Clara County)
Seattle, Washington (King County)
Washington, DC (District of Columbia)

[Appendix III table: overall response rates and estimated telephone noncoverage, by area surveyed, for quarters 2 through 4, 1994.]

The following are GAO's comments on the Department of Health and Human Services' letter dated July 22, 1996.

1. CDC has mischaracterized our findings. Although we have identified several issues that raise questions about accuracy, neither we nor CDC can validate the accuracy of survey results. The accuracy of the NIS results depends on the accuracy of the assumptions inherent in CDC's adjustment of the survey results, some of which are untestable. The results of the NHIS are used to adjust the results of the NIS. Thus, while the similarity of the two surveys is reassuring, the NHIS cannot provide an independent assessment of the NIS's accuracy. In any event, the agreement of the national estimates does not ensure that the local estimates are accurate.

2. The various benefits asserted by CDC derive from the application of the NIS to monitoring immunization rates and to comparing them across states. We acknowledged both of these objectives in the second paragraph of our report. Many potential benefits or purposes could be asserted for the survey, but its use in any of these capacities is limited by the low precision, narrow range, and unverified accuracy of the survey estimates.

3. It is true that surveys, to varying degrees, customarily require the types of adjustments applied to the NIS to correct for biases introduced by nonresponse and limitations in survey coverage. However, the adjustment of NIS results for the exclusion of households without telephone service required a somewhat greater leap of faith than customary adjustments for telephone noncoverage. The success of such adjustments usually depends on the extent to which the variable being measured can be accurately predicted by demographic characteristics that are available or can be inferred for both nontelephone and telephone households. As we have noted in the report, based on data from the NHIS, which is an in-person survey, there are large differences in immunization coverage between children in households with and without telephones. These differences are not completely explained by demographic differences between telephone and nontelephone households.
Furthermore, although telephone ownership varies substantially across the surveyed areas, there are no consistent sources of state and local data on differences in immunization coverage between telephone and nontelephone households. Consequently, the extent to which this adjustment improved the accuracy of state and local survey results is unclear.

4. CDC has acknowledged that the NIS does not identify pockets of children in need of more timely immunization, and most state immunization program managers have told us that the NIS does not help them in targeting their efforts or designing interventions, although it does relieve them of CDC's previous requirement that they collect statewide coverage data on their own. We are studying alternative means for identifying pockets of need. Although there is currently no other means of comparing statewide immunization coverage data, the NHIS, as we have noted, tracks coverage changes at the national level. In addition, other methods were used in the past to collect statewide coverage information, albeit through a variety of methods across states.

5. It is true that the sample size of the NIS should afford the calculation of rates for such subgroups on a national basis. The NHIS is not currently large enough to provide childhood immunization coverage information on these two groups.

6. CDC states that the NIS is an "important public health management tool" and notes that Missouri, Arizona, and Idaho have taken steps intended to improve immunization coverage in the wake of NIS results. However, we have some concern that the NIS provides no guidance on the type of action that is appropriate or where it is appropriate. It is not necessarily clear that placing special emphasis on the states with the lowest survey estimates for coverage with a combination of four vaccines is the most appropriate way to prevent a disease outbreak. States with high estimates may nonetheless include significant pockets of underimmunized children.

7. CDC provides no evidence that the NIS is cost-effective. As we note in our conclusion, it is markedly more expensive than the retrospective surveys previously used to generate statewide coverage data. Presuming that the capacity to measure differences between states is an important objective, the NIS' capacity to meet this objective is limited by the broad margins of error in survey estimates and variations in survey participation and coverage. It is similarly limited with respect to monitoring changes in immunization coverage across time. As with previous state surveys, there is no guarantee that the NIS provides unbiased estimates of immunization coverage.

8. The NIS can detect small changes on a quarterly basis only at the national level. Survey results are not released until roughly a year after data collection, and it is doubtful that a 1-percent change in national coverage should or would be construed as an early warning in the context of very high vaccine-specific rates. In any case, national coverage statistics are also available from the NHIS. Availability of the NIS results did not prevent the recent outbreak of measles in Utah. Sudden drops in immunization levels for a particular disease in other countries have been associated with problems, such as sudden concerns about vaccine safety, that were evident apart from immunization measurement. There was concern and widespread publicity in the mid-1970s in both the United Kingdom and Japan about reports of encephalitis following the receipt of pertussis vaccine.
The reduced utilization of this vaccine was precipitous and observable from sources other than national survey data.

9. While states with lower immunization estimates may be motivated by the NIS findings to improve coverage, the findings do not indicate where the problem lies within these states or what corrective actions are needed. We remain concerned that they may provide a false sense of security to other areas that actually face significant problems (for example, specific pockets of low immunization within states with generally high coverage rates). In addition, the motivational effects of such quarterly ranking may diminish over time. Finally, CDC's argument presumes that states will be more motivated to act by data collected through the NIS than they would have been by data collected locally or through other means. We disagree.

10. CDC has indicated that the NIS was not intended to identify pockets of need and consequently does not do so. The NIS may actually deflect attention from some serious problem areas because they are incorporated in larger areas for survey purposes. For example, the Seattle-King County Department of Public Health and the University of Washington conducted a separate household survey of central and southeast Seattle using the same age group and reference dates as the NIS and found that 57 percent of children in this part of the city were fully immunized, in contrast to the NIS rate of 79 percent up-to-date for all of King County in the same time period.

11. Because of the wide margins of error of survey estimates, the NIS is probably not sufficiently sensitive to permit evaluation of interventions or policy changes in particular areas or subgroups. Although national changes in immunization coverage may be monitored with greater precision, changes in national or local immunization coverage might be attributable to factors other than policy changes (for example, trends in the demographic characteristics of children to be immunized). Moreover, policy changes typically occur in groups and are implemented gradually, which would make it quite difficult to attribute any observed movements in immunization coverage to a single change or a combination of changes. In this context, it seems inadvisable to draw conclusions about particular state activities based solely on the results of the NIS. Similarly, with cross-state comparisons, multiple interventions are linked to each area and subgroup, as well as variations in demographic and other factors, making it difficult to disentangle the reasons for any differences observed across states and cities in the NIS findings.

12. We agree that the NIS is technically capable of detecting the rapid and dramatic changes in coverage that typically accompany the recommendation of new vaccines. However, on a national level, the NHIS also reports on the uptake of newly recommended vaccines. Sales and distribution reports may provide a less expensive means of monitoring the uptake of such vaccines in particular areas.

13. Even small states had produced statewide coverage estimates using previous methods. However, it is difficult for small states to justify the use of $165,000 in infrastructure funding for a random digit dialing immunization survey such as the NIS. Under a proposal CDC has floated with states, surveys in small states would be subsidized by "contributions" of a percentage of federal grant funds from larger states.
However, in view of immunization needs, 20 state health officers surveyed by ASTHO could not justify devoting 6.5-10 percent of their infrastructure funds to survey support. Twenty-four states told ASTHO they were willing to contribute 10 percent of their 1995 infrastructure grant toward the survey in the event that federal funding was discontinued, but their prospective contributions would have totaled $4.6 million—much less than the survey's reported annual cost.

14. As we have noted, the precision of current estimates raises questions about whether the survey does, in fact, provide an improved basis for the distribution of incentive funds. Moreover, the amount expended on the survey is substantial in comparison to the amount of such funds available for distribution.

15. Most state immunization program managers indicated that the NIS results were not useful in targeting their activities. Although a low result may provide some states with a general incentive to do better, it provides no guidance as to how to accomplish any improvement.

16. The collection of such data will enhance the information derived from the tremendous number of phone contacts with ineligible households made in conducting the NIS. However, the collection of immunization data may continue to drive the number of calls required (and hence the cost of the survey) because households containing 2-year-olds would likely continue to be the rarest population sampled. In any case, the utility of the survey for collecting other data does not bear upon its usefulness for collecting information on immunization.

17. CDC agrees with the cost we reported for the NIS. We did not verify CDC's claims regarding its use of the funds that were not applied to the survey. While CDC anticipates that future costs will be lower, it has not requested modification of its fiscal 1997 budget request to reflect these lower costs.

18. The poor quality of immunization data gathered from household respondents had been documented before the NIS was planned. Thus, although the provider surveys may have reduced the inaccuracies contained in these household data, the survey might have been more efficiently designed had the limitations of household data been acknowledged in survey planning. Earlier recognition of this problem would have supported more serious exploration of other survey methods.

19. It should be noted that CDC's comments compare the survey estimates to a standard different from the target established in the contract and survey plan. Survey plans are ordinarily drawn by determining the sample size necessary to achieve an acceptably precise result if the value of the measured variable is near 50 percent, the point at which the largest sample will be required to achieve a given level of precision (for example, plus or minus 5 percent with 95-percent confidence). This is exactly the sampling target specified in CDC's contract with the survey organization. Insofar as the immunization levels measured by the survey are well above 50 percent, had the targets established in the contract been met, the estimates would show precision better than plus or minus 5 percent. Further, CDC's statement that "Seventy-one of the 78 areas met or exceeded the requirement that the margin of error be within five percent of the value of the estimate itself" does not conform to the first four quarters of survey results published by CDC (see MMWR, Feb. 23, 1996, pp. 148-49).
These indicate that, for 4:3:1 coverage, only 23 of the 78 estimates met or exceeded the criterion that the margin of error be within 5 percent of the value of the estimate itself. For 4:3:1:3 coverage, the number meeting or exceeding this criterion was only 16 of the 78. Whether the survey estimates met this or any other criterion is less important than the fact that their precision, if not improved, is generally only sufficient to reliably detect changes larger than those typically observed on an annual basis. While the addition of provider data has helped correct some substantial errors incorporated in household responses, it has not reduced the margins of error for survey estimates. 20. We do not find that the survey documents high levels of variability in results across IAPs. Although CDC correctly states that Alabama's result was statistically different from the result for 21 other IAP areas (11 states and 10 cities), it cannot be statistically distinguished from the results in 56 others. CDC is correct that, in most cases, differences of at least 10 points can be statistically distinguished, as we show in figure 1 for 4:3:1 coverage, but there is only a 24-point range in the state estimates for full coverage, so the majority of the state estimates—31—are not far enough apart for their difference from the national estimate to be confidently attributed to anything more than sampling error. The range of estimates for coverage with particular vaccines is generally narrower. 21. The NIS can detect reasonably small changes in national coverage between consecutive four-quarter annualized estimates, though the first two successive annualized estimates for 4:3:1 coverage were not statistically different. However, even at the national level, for most of the antigens and series, the smallest reliably detectable change (at conventional levels of significance) is slightly larger than 1 percent. In individual areas, it is impossible at conventional levels of significance to judge differences as small as 5 percent to be statistically significant when most estimates have 95-percent margins of error of 5 percent or greater. Our report quotes a statement in a document issued by the survey contractor noting that the survey can detect only very large changes (for example, a 20-percent increase from 50 percent) between successive quarterly annualized estimates in the various areas surveyed. The margins originally planned would have been no larger than plus or minus 5 percent. However, survey documentation NCHS provided to us notes that “Confidence intervals for the vaccination coverage estimates are somewhat wider than originally planned because provider information is not available for all children in the sample.” In addition, for data collected in quarters 2 through 4 of 1994, the number of completed child-level interviews was less than 90 percent of the sample size called for in the design specifications for roughly a third of the IAPs. This, too, would have the effect of increasing the margins of error for survey estimates. 22. While the NIS applies the same methodology across states, the range of state results is not as broad as expected and the performance of many states cannot be differentiated. In any case, in making such comparisons with the NIS, it is important to take into account the wide variations in survey coverage and response rates across states and urban areas. 23. We noted that the retrospective survey approach has both advantages and disadvantages, including the timeliness of data. 
Retrospective surveys do not produce results as quickly as the NIS; however, even the NIS issues results about a year after data collection, and thus it appears equally ill-suited to provide an early warning. 24. As we have noted in appendix III, the NIS in some areas excludes a similar proportion of children living in households without telephone service. 25. This is generally true, although the costs of a household survey can be comparable in some urban areas, as suggested by recent experience in Norfolk and Seattle. 26. There may be some economies of scale in centralizing the surveys under a single contract, but these must be weighed against the costs of limiting potential bidders to firms equipped to handle a task of this large scale. Conducting separate surveys would have the advantage of permitting the questions to be tailored to provide additional data about state and local initiatives. 27. It is true that the full cost of a random digit dialing survey such as the NIS would be more difficult for smaller states to bear. ASTHO officials reported that many smaller states were unwilling to continue participation in the survey if it meant funding the full cost of their own random digit dialing survey through their infrastructure funding. However, it should be noted that all states have recent experience conducting other types of statewide immunization surveys. 28. Minimal staff hours are generally involved in retrospective surveys. While this is not true of household surveys, states may also contract for such services if they continue to be required. 29. As noted in our report, the Congress may wish to weigh the cost of the NIS against its benefits in order to ensure the most efficient use of immunization resources.
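To make the precision arithmetic discussed in comments 19 and 21 concrete, the following minimal sketch applies the standard normal-approximation formulas for a proportion. The sample size and coverage levels are illustrative assumptions, not NIS design parameters; the NIS's clustered design and provider-data adjustments would widen these margins further.

```python
import math

Z95 = 1.96  # two-sided critical value for 95-percent confidence

def margin_of_error(p, n):
    """Half-width of a 95-percent confidence interval for a proportion."""
    return Z95 * math.sqrt(p * (1 - p) / n)

def sample_size(p, moe):
    """Interviews needed for a given margin of error at 95-percent confidence."""
    return math.ceil((Z95 / moe) ** 2 * p * (1 - p))

# Planning at p = 0.50 is the worst case because it maximizes p(1 - p),
# so a sample sized for +/-5 points at 50 percent is conservative.
n_plan = sample_size(0.50, 0.05)        # 385 completed interviews
moe_79 = margin_of_error(0.79, n_plan)  # narrower once coverage is near 79 percent

# Smallest difference between two independent estimates, each based on
# n_plan interviews, that is detectable at the 5-percent significance level.
detectable = Z95 * math.sqrt(2 * 0.79 * 0.21 / n_plan)

print(f"planning n for +/-5 points at p = 0.50: {n_plan}")
print(f"margin at p = 0.79 with that n: +/-{moe_79:.1%}")
print(f"smallest detectable change at p = 0.79: {detectable:.1%}")
```

With a margin of about plus or minus 4 points at 79-percent coverage, two annualized estimates must differ by roughly 6 points before the change can be distinguished from sampling error, which is why margins that meet the original contract target can still be too wide to track typical year-to-year movements.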
Pursuant to a congressional request, GAO assessed the Centers for Disease Control and Prevention's (CDC) National Immunization Survey (NIS), focusing on: (1) survey costs; (2) survey methods; and (3) use in identifying groups of children in need of more timely immunization. GAO found that: (1) CDC designed NIS for monitoring state progress in achieving child immunization objectives, comparing coverage rates across states, and awarding incentive funds; (2) CDC estimates and contractor invoices indicate that NIS costs for fiscal year (FY) 1995, including extraordinary expenses incurred when 1994 survey participants were reinterviewed, were about $13 million; (3) although CDC anticipates that survey costs will decrease in the future, it has requested $16 million for NIS administration for FY 1997; (4) the two-phase survey methodology, which gathers information by telephone from households and immunization providers, excludes households that lack a telephone, may not accurately represent the overall population, and is limited by response accuracy; (5) NIS has not achieved sufficient precision in its survey estimates to detect modest changes that occur in most coverage levels; (6) CDC considers the identification of groups of children in need of more timely immunization as a state rather than a federal responsibility and has not designed and does not use NIS to make such identifications; and (7) interviews with state officials indicate that NIS is not useful in helping states to diagnose problems in immunization activities, target efforts, or design interventions.
VA’s Office of Small and Disadvantaged Business Utilization (OSDBU) has overall responsibility for the verification program. OSDBU’s Center for Verification and Evaluation (CVE) maintains the mandated database of verified SDVOSBs and VOSBs and is responsible for verification operations, such as application processing. VA’s verification process consists of reviewing and analyzing a standardized set of documents submitted with each verification application. VA uses contractors to support its verification program, and federal employees oversee the contractors and review and approve verification decisions. As of September 1, 2015, CVE had 15 federal employees and 156 contract staff (employed by five different contractors) verifying applications or filling supporting roles. CVE is funded by VA’s Supply Fund, a self-supporting revolving fund that recovers its operating expenses through fees and markups on different products or services. CVE’s final obligations for fiscal year 2014 were $17.9 million, and its approved budget for fiscal year 2015 was $16.1 million, representing a decrease of about 10 percent ($1.8 million) from 2014. We and VA’s Office of Inspector General previously found that VA has faced numerous challenges in operating the verification program. Our most recent work on this program in 2013 found that VA had made significant changes to address previously identified program weaknesses, but that it still faced challenges establishing a stable and efficient program to verify firms on a timely and consistent basis. Specifically, we found that VA consistently placed a higher priority on addressing immediate operational challenges than on developing a comprehensive, long-term strategic focus for the verification program—an approach that contributed to programmatic inefficiencies. We also found that VA’s case management data system had shortcomings that hindered the agency’s ability to operate, oversee, and monitor the program. Therefore, we recommended that VA (1) refine and implement a strategic plan with outcome-oriented long-term goals and performance measures, and (2) integrate efforts to modify or replace the program’s data system with a broader strategic planning effort to ensure the system addresses the program’s short- and long-term needs. VA adopted a strategic plan in 2013, and efforts to update its case management system are ongoing. In 2014, VA launched the MyVA Reorganization Plan in an effort to improve the efficiency and effectiveness of VA’s services to veterans. The plan’s strategy emphasizes improved service delivery, a veteran-centric culture, and an environment in which veteran perceptions are the indicator of VA’s success. MyVA extends to all aspects of the agency’s operations, including the verification program. In response to this organizational change, OSDBU is required to align its own strategy with MyVA and take steps to make its operations more customer service-oriented and veteran-centric. Based on our preliminary observations, VA has improved its timeliness for application processing, followed its policies for verifying businesses, continued to refine quality controls for the program, and improved communications with veterans. For instance, CVE reported its processing times have improved by more than 50 percent since October 2012, going from an average processing time of approximately 85 days to 41 days in fiscal year 2015. 
Additionally, VA officials told us that CVE has been generally meeting its processing goal of 60 days (from receipt of a complete application) and missed this goal for only 5 applications in fiscal year 2014 and 11 applications in fiscal year 2015. Our review of randomly selected application files corroborates that CVE has generally met its processing goals, but the verification process can take longer from a veteran’s perspective. In calculating processing times, CVE excludes any time spent waiting for additional information it asked firms to supply, so the actual number of days it takes an applicant to become verified is typically longer than what CVE reports. Our preliminary estimates are that it takes an average of 56 days (without stopping the regulatory clock while the veteran is preparing and submitting additional documents) from when CVE determines a firm’s application is complete to when the firm receives notification of the verification determination. During that time, CVE is reviewing the application and potentially requesting and waiting for the applicant to submit additional information. Additionally, firms can submit and withdraw their application multiple times should they need to correct issues or wish to apply at a later date. Each time a firm resubmits an application, CVE resets the application processing clock, meaning that CVE’s average case processing time does not account for instances where a firm withdraws and resubmits an application. VA officials said that allowing applicants to withdraw and resubmit multiple applications is an advantage to the veteran because veterans can make several attempts to become verified; without the withdrawal option, more veterans would receive denials and have to wait 6 months before submitting another application. However, this means that some veterans might perceive the application process as lengthy if they have submitted and withdrawn several applications in their attempt to become verified. For example, we estimated that for 15 percent of applications, it took the firm more than 4 months from the initial application date to receive a determination from CVE. Based on our initial review of application files, VA appeared to follow its policies and procedures for verifying SDVOSBs and VOSBs, which include checking the veteran and disability status of the applicant, conducting research on the firm from publicly available information, and reviewing business documents to determine compliance with eligibility requirements, such as direct majority ownership by the veteran, experience of the veteran manager, and the SBA small business size standard. However, we also found that VA did not have a policy requiring documentation of the rationale for assigning a risk level to an application and did not document the rationale in an estimated 40 percent of the cases. After we notified the agency of this finding, VA implemented a procedure in October 2015 requiring documentation of the rationale. CVE has continued to refine its quality management system since our January 2013 report. For example, CVE has developed detailed written work instructions for each part of the verification process, and developed a quality manual that documents the requirements of its quality management system. CVE officials said they update the work instructions on a regular basis. Additionally, CVE implemented an internal audit and continuous improvement process. 
As of September 2015, CVE had taken action on and closed 364 of 379 (96 percent) internal audit recommendations made from June 2014 through August 2015. Based on our review of internal audits conducted by CVE from September 2014 through February 2015, the findings generally identified information that was incomplete, unclear, missing, or not applicable to the current verification process. CVE also conducted post-verification site visits to 606 firms in fiscal year 2015 to check the accuracy of verification decisions and help ensure that firms continued to comply with program regulations. CVE officials said the site visits identified two instances in which evaluators mistakenly verified a firm (a less than 1 percent error rate), and CVE issued 25 cancellations to firms found noncompliant with program regulations at the time of the site visit (a 4 percent noncompliance rate). CVE also monitors compliance by investigating potentially noncompliant firms identified through tips from external sources. CVE officials said they received about 400 such tips in 2014. Officials said that they investigate every credible tip by conducting public research, reviewing eligibility requirements related to the tip, and making a recommendation for corrective action, if necessary. We reviewed case files associated with 10 firms for which CVE received allegations of noncompliance from June 2014 through May 2015. These cases included one with an active status protest (a mechanism that allows interested parties to a contract award to protest if they believe a firm misrepresented its SDVOSB or VOSB status in its bid submission) and nine firms for which CVE received an e-mail allegation that the firm was not in compliance with program regulations (a few of these firms also recently received a status protest decision). CVE investigated 6 of the 10 cases we reviewed, although it did not always document that an allegation of noncompliance had been received or that it was conducting a review of the firm’s eligibility based on the allegation. In comparison, whenever a protest was filed against a verified firm, the case file had a note indicating the firm was the subject of a status protest and that verification activities should be put on hold until the protest was resolved. We will continue to monitor these issues and report our final results early next year. Our preliminary work revealed that since our 2013 report, VA has made several changes to improve veterans’ experiences with the verification program and reduced the percentage of firms that receive denials from 66 percent in 2012 to 5 percent in 2015, according to agency data. A few examples include the following. VA implemented procedures to allow firms to withdraw applications in order to avoid denials. For example, veterans can correct minor deficiencies or withdraw an application to address more complex problems instead of receiving a denial decision and having to wait 6 months to reapply. VA established procedures to communicate with verified firms and applicants about their verification status. According to VA officials, the agency sends e-mail reminders 120, 90, and 30 days before the expiration of a firm’s verification status; contacts firms by telephone 90 days before expiration of verification status; and notifies firms in writing 30 days before canceling verified status. Officials said they also send notifications to applicants to indicate that an application is complete, that additional documents are needed, and that a determination has been made. 
VA partnered with Procurement Technical Assistance Centers—funded through cooperative agreements with the Department of Defense—to provide verification assistance to veterans at no cost. VA trained more than 300 procurement counselors at the centers on the verification process so they could better assist veterans applying for verification. VA increased interaction with veterans by conducting monthly pre-application, reverification, and town hall webinars to provide information and assistance to verified firms and others interested in the program. VA provided resources for veterans on its website, such as fact sheets, verification assistance briefs, and standard operating procedures for the verification program. VA also has a tool on its website that allows firms to obtain a list of documents required for their application depending on the type of company they own. VA developed surveys to obtain feedback from firms (1) that go through the verification process, (2) that receive a site visit, (3) that leave the program, and (4) that participate in any pre-verification information sessions. CVE officials stated that they hope these surveys will allow them to more systematically collect feedback on different aspects of the program. All of the verification assistance counselors and representatives of veterans’ service organizations with whom we spoke noted that VA has improved its verification process, although most suggested areas for continued improvement. Three of the four verification assistance counselors we spoke with stated that VA’s new policies to allow veterans to withdraw or submit changes to their application represented a positive change. Representatives of one veterans’ group we spoke to stated that VA was doing a better job communicating with applicants on missing documentation and other potential issues. They also said VA was interacting more with veteran service organizations and veterans at conferences for veteran-owned small businesses and town hall meetings. However, three of the four verification assistance counselors noted that resources on VA’s website for the verification program can be difficult to locate, and representatives from one veteran service organization said VA does not provide adequate documentation of the program standards for applicants. VA officials said they have been working with the strategic outreach team in OSDBU to redesign the website to make documents easier to locate. Additionally, we determined that the standard operating procedures—documents to help veterans understand the verification process—posted on the website were from 2013 and did not reflect current procedures, such as the ability to withdraw an application after CVE’s evaluation. When we notified VA of this issue, the agency updated the program’s website to reflect current procedures and implemented a policy to review and update the operating procedures every 6 months. All of the verification assistance counselors we interviewed also stated that VA’s determination letters to applicants could be clearer and that they include regulatory compliance language that could be difficult for some applicants to understand. VA officials maintained that the inclusion of regulatory language in the determination letters was necessary, but acknowledged that this language can present readability challenges. 
We also observed several instances in our review where a letter initially stated that documents were due on one date, and then later stated the applicant should disregard the initial statement and that documents were due on a different, earlier date. VA officials said this was due to a glitch in the system that generated the letters and that the issue was resolved in May 2015. Despite the significant improvements VA has made to its verification program, it continues to face challenges establishing a more cost-effective, veteran-friendly verification process and acquiring an information technology system that meets the agency’s needs. The efforts that VA has either made or currently has underway include restructuring the verification process, revising verification program regulations, changing the program’s organizational structure, and developing a new case management system—some of which have been ongoing since our January 2013 report. While these efforts are intended to help address some of the challenges associated with the verification program, VA lacks a comprehensive operational plan with specific actions and milestone dates for managing these efforts and achieving its long-term objectives for the program. Changes in the verification process. VA intends to restructure part of the verification process in an effort to make it more veteran-focused and cost-effective. According to OSDBU’s Executive Director, VA embarked on these changes in response to the agency’s new MyVA strategy and requests from the Supply Fund to design a veteran-centered process that highlights customer service and maximizes cost efficiency. In August 2015, VA began a pilot for a new verification process that makes a case manager the point of contact for the veteran and the coordinator of staff evaluating the application. According to the Executive Director, the new process is expected to provide cost savings to the agency by reducing the amount of time staff spend reviewing applications and addressing veterans’ questions. Officials said the specific tasks staff perform to review applications would not change; rather, the new process would eliminate some redundancies and focus on the veteran’s experience. Key differences between the new and current processes as described by CVE officials are shown in table 1. According to CVE officials, as of September 2015, 43 applications had been reviewed using the new pilot process and VA had begun collecting feedback from applicants. VA also has developed metrics to inform adjustments to the pilot and plans to calculate processing times for each application, according to CVE officials. Officials stated that VA plans to finalize the new process in October 2015 and fully transition to the new process by April 2016. VA has not yet conducted an analysis to determine the cost of the new pilot process as compared with the current process, but OSDBU’s Executive Director said that he estimates the new pilot process will save the program about $2 million per year. Revisions to regulations. VA is continuing to make revisions to its program regulations. In 2013 we reported that VA had begun the process of modifying the verification program regulations to extend the verification period from 1 year to 2 years and published an interim final rule to this effect in late June 2012. In addition, VA began a process in 2013 to revise program regulations in order to account for common business practices that might otherwise lead to a denial decision under the current regulation. 
For example, to address the challenges associated with one current regulatory provision, CVE officials told us that VA plans to allow minority owners to vote on extraordinary business decisions, such as closing or selling the business. Officials stated that the revisions to the regulation are not expected to provide cost and resource efficiencies, but are intended to provide clarity for veterans and increase their satisfaction with the process. As of September 2015, the regulation was undergoing internal review with VA’s Office of General Counsel, according to CVE officials. Approach to site visits. According to CVE officials, VA plans to determine how many site visits should be conducted annually to maintain the quality of the program while minimizing cost. CVE officials told us that they plan to visit a random sample of 300 of the 2,312 verified firms that received VA contracts in fiscal years 2014 and 2015 and then calculate the percentage of firms found to be noncompliant with program requirements. A high noncompliance rate could indicate that VA should increase the annual number of visits, while a low rate could indicate that VA should decrease or maintain the annual number of site visits it conducts, according to CVE officials. VA officials said that the statistical analysis will allow them to validate the noncompliance rate obtained from site visits conducted in fiscal year 2014 and that VA plans to complete its study by January 2016. We plan to include additional information on this study in our upcoming report.
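To illustrate the statistical logic behind a sample of this size, the sketch below computes a 95-percent margin of error for a noncompliance rate estimated from 300 of 2,312 firms. It assumes a simple random sample and borrows the 4-percent rate from the fiscal year 2015 site visits described earlier; VA's actual analytic method is not documented here, so the formula choice is our assumption.

```python
import math

Z95 = 1.96  # two-sided critical value for 95-percent confidence

def moe_fpc(p, n, N):
    """95-percent margin of error for a proportion from a simple random
    sample of n firms drawn from a finite population of N firms."""
    fpc = (N - n) / (N - 1)  # finite population correction
    return Z95 * math.sqrt(p * (1 - p) / n * fpc)

# Illustrative inputs: the 4-percent noncompliance rate CVE reported from
# its fiscal year 2015 site visits, measured in a sample of 300 firms
# drawn from the 2,312 verified firms that received VA contracts.
p, n, N = 0.04, 300, 2312
print(f"estimated noncompliance: {p:.0%} +/- {moe_fpc(p, n, N):.1%}")
# Roughly +/-2 percentage points. The normal approximation is rough at a
# rate this low (about 12 expected noncompliant firms); an exact binomial
# interval would be somewhat wider.
```

Even this rough bound shows what such a study can and cannot decide: it can distinguish a rate of several percent from one near zero, which is the comparison that drives the decision to increase, decrease, or maintain the number of annual visits.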
Reverification policy. VA revised its reverification policy in an effort to improve efficiency and customer service. According to CVE’s Acting Director, reverification used to require nearly the same effort of CVE staff, contractors, and veterans as the full verification process. Under a new process CVE implemented in October 2015, CVE contractors are to conduct an initial meeting with the veteran to identify necessary documentation based on changes to the company since its last verification. These changes are intended to improve veterans’ understanding of the requirements for reverification and reduce the amount of time spent re-verifying applications, according to CVE officials. However, it is not yet clear how the change to the reverification procedure will affect the number and type of documents veterans will be required to submit. In addition, VA analyzed data obtained from its fiscal year 2014 site visits and concluded that there is no correlation between a firm’s noncompliance and the time elapsed since its last verification. According to information provided by CVE officials, the agency therefore may be able to reduce the number of site visits conducted each year by lengthening the 2-year reverification cycle. Staffing and organizational structure. VA plans to fill vacant leadership positions and make changes to CVE’s organizational structure to reflect the new verification process and align staffing resources with agency needs. In 2010, we noted that leadership and staff vacancies had contributed to the slow pace of implementation of the verification program. CVE has since filled most of its vacant positions. However, staffing at the senior level has been in flux. Since 2011, CVE has had three different directors, the last two of which have been acting directors. The deputy director position also was vacant from March 2014 to September 2015. OSDBU’s Executive Director (who has overseen the overall verification program since 2011) indicated that VA would begin advertising for a CVE director in October 2015. VA has developed a draft organizational structure and position descriptions for the new verification process. According to CVE officials, it also has begun an analysis—using initial data from the new verification process pilot—to determine optimal staffing levels for implementing the new process and meeting the demand for verification. CVE officials stated that VA plans to continue using contractor staff to conduct its verification activities because the use of such staff allows VA the flexibility to adjust staffing levels as needed. As discussed earlier, CVE currently has 15 full-time federal employees and 156 contract staff. OSDBU’s Executive Director stated that VA has contracts in place for the verification program through April 2016 and plans to start the process for securing new contracts in January 2016. Plans for case management system. VA has faced delays in replacing the verification program’s outdated case management system. In our January 2013 report, we also identified deficiencies in VA’s data system—such as a lack of certain data fields and workflow management capabilities needed to provide key information on program management—and recommended that VA modify or replace the system. VA hired a contractor in September 2013 to develop a new system, but the contract was canceled in October 2014 due to poor contractor performance. VA paid the contractor about $871,000 for work that had been performed prior to the contract’s termination, and received several planning documents from the contractor that helped inform its current acquisition effort, according to CVE officials. VA has since decided to develop a pilot case management system through one of the agency’s other existing contracts. According to VA officials, the pilot system is intended to provide VA with the opportunity to test and evaluate the capabilities of a new system without the time and expense of putting a whole new system in place. VA developed specifications and other planning documents for the pilot system, and plans to develop and evaluate the system from November 2015 through January 2016. If the pilot is successful, VA plans to issue a solicitation and award a contract for development of a full system by April 2016 and fully transition to the new system by September 2016. VA was in the initial stages of developing the pilot system as of October 2015, and has not determined how it will select cases for the pilot, evaluate the pilot, and fully transition to the new system once the pilot is complete. VA has taken some steps to address our previous recommendations, but our preliminary findings indicate that additional steps may be needed. In our January 2013 report, we found that VA faced challenges in its strategic planning efforts and recommended that VA refine and implement a strategic plan with outcome-oriented long-term goals and performance measures. VA developed a strategic plan for fiscal years 2014–2018 that described OSDBU’s vision, mission, and various performance goals for its programs. It has since developed an operating plan for fiscal year 2016 that identifies a number of key actions needed to meet OSDBU’s objectives, such as transitioning to a new verification process, completing revisions to the verification regulations, and developing a new case management system. 
But the plan does not include an integrated schedule with specific actions and milestone dates for achieving program changes, nor does it discuss how the various efforts described above might be coordinated. Useful practices and lessons learned from organizational transformations show that organizations should set and track implementation goals and establish a timeline to build momentum and show progress from day one, pinpoint performance shortfalls and gaps, and suggest midcourse corrections. According to OSDBU’s Executive Director, each OSDBU program team (such as CVE) is to develop action plans for their specific programs that include resource needs and expected timelines. However, it is not clear if OSDBU will develop an overall plan that captures and integrates the various efforts it has been undertaking that are managed by CVE and other program teams within OSDBU. We are continuing to assess the issues discussed in this statement, and as we finalize our work for issuance early next year, we will consider making recommendations, as appropriate. Chairmen Coffman and Hanna, Ranking Members Kuster and Takai, and Members of the Subcommittees, this concludes my prepared statement. I would be happy to answer any questions at this time. If you or your staff have any questions about this statement, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Harry Medina (Assistant Director); Katie Boggs (Analyst-in-Charge), Mark Bird, Charlene Calhoon, Pamela Davidson, Kathleen Donovan, John McGrail, Barbara Roesmann, and Jeff Tessin. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA must give contracting preferences to service-disabled and other veteran-owned small businesses and verify the ownership and control of firms seeking such preferences. GAO found in 2013 (GAO-13-95) that VA faced challenges in verifying firms on a timely and consistent basis, developing and implementing long-term strategic plans, and enhancing information technology infrastructure. This testimony discusses preliminary observations on (1) VA's progress in establishing a timely and consistent verification program and improving communication with veterans, and (2) the steps VA has taken to identify and address verification program challenges and long-term goals. This statement is based on GAO's ongoing review of VA's verification program. GAO reviewed VA's verification procedures and strategic plan, reviewed a random sample of 96 verification applications, and interviewed VA officials, representatives from two veterans' organizations, and four verification assistance counselors. Based on GAO's preliminary observations, the Department of Veterans Affairs (VA) has made significant improvements to its verification process and communication with veterans since GAO's 2013 report. VA reported it reduced its average application processing times by more than 50 percent—from 85 days in 2012 to 41 in 2015. GAO reviewed a randomly selected sample of verification applications and found that VA followed its procedures for reviewing applications. VA continued to refine its quality management by developing written work instructions for every part of the verification process, and implemented an internal audit process. As of September 2015, VA had taken action on and closed 364 of 379 (96 percent) internal audit recommendations. The agency also conducted post-verification site visits to 606 firms in fiscal year 2015 to check the accuracy of verification decisions and help ensure continued compliance with program regulations. Since 2013, VA has made several changes to improve veterans' experiences with the program. For example, VA revised procedures to allow veterans additional opportunities to withdraw their applications or submit additional information and has partnered with federally supported assistance centers to provide assistance to veterans applying for verification. Correspondingly, the percentage of firms that received denials has dropped from 66 percent in 2012 to 5 percent in 2015. Veterans' organizations and verification counselors with whom GAO spoke noted improvements in VA's communications and interactions with veterans, although most verification counselors GAO spoke with suggested the program's website and letters to veterans could be clearer. VA has multiple efforts underway to make its verification program more cost-effective and veteran-friendly, but GAO's preliminary results indicate that it lacks a comprehensive operational plan to guide its efforts. For instance, VA intends to restructure part of its verification process and in August 2015, began a pilot that gives veterans one point of contact (a case manager, who would be aware of the specifics of the application throughout the verification process). VA plans to fully transition to this new process by April 2016. VA also plans to change the program's organizational structure and hire a director for the program, which has had three different directors, the last two of which have been acting directors, since 2011. 
Finally, VA plans to replace the program's outdated case management system, but has faced delays due to contractor performance issues. Efforts are under way to develop and evaluate a pilot system by January 2016 and fully transition to the new case management system by September 2016. VA has developed a high-level operating plan that identifies objectives for the office overseeing the verification program—the Office of Small and Disadvantaged Business Utilization (OSDBU). But the plan does not include an integrated schedule with specific actions and milestone dates for achieving the multiple program changes under way, nor does it discuss how these various efforts might be coordinated within OSDBU. GAO's work on organizational transformations states that organizations should set implementation goals and a timeline to show progress. Such a plan is vital to managing multiple efforts to completion and achieving long-term program objectives, particularly when senior-level staffing for the verification program has lacked continuity. GAO continues to assess these issues and will report its results early next year. GAO is not making recommendations at this time; as it finalizes its work for issuance early next year, it will consider making recommendations, as appropriate. GAO obtained comments from VA and incorporated them as appropriate.
AOC and its contractors have continued to make progress on the project since the Subcommittee’s July 14 hearing. However, mostly because some key activities associated with the HVAC and fire protection systems were not included in earlier schedules and because delays occurred in installing stonework and excavating the utility tunnel, the sequence 2 contractor’s August schedule shows the expected completion date for the base project as February 26, 2007. As discussed at the Subcommittee’s July 14 hearing, AOC recognized some delays in its June 2005 schedule, which showed the base project’s expected completion date as October 19, 2006. Although AOC has not evaluated the contractor’s August schedule, it does not believe that so much additional time will be needed. Furthermore, as discussed in the next section, AOC maintains that work could be accelerated to meet the September 15, 2006, target date. According to our analysis of the CVC project’s schedule, the base project is unlikely to be completed by the September 15, 2006, target date for several reasons. AOC believes that it could take actions to complete the project by then, but these actions could have negative as well as positive consequences. These and other schedule-related issues raise a number of management concerns. We have discussed actions with AOC officials that we believe are necessary to address problems with the schedule and our concerns. AOC generally agreed with our suggestions. For several reasons, we believe that the base project is more likely to be completed sometime in the spring or summer of 2007 than by September 15, 2006: As we have previously testified, AOC’s sequence 2 contractor, Manhattan Construction Company, has continued to miss its planned dates for completing activities that we and AOC are tracking to assist the Subcommittee in measuring the project’s progress. For example, as of September 8, the contractor had completed 7 of the 16 selected activities scheduled for completion before today’s hearing (see app. II); however, none of the 7 activities was completed on time. Unforeseen site conditions, an equipment breakdown, delays in stone deliveries, and a shortage of stone masons for the interior stonework were among the reasons given for the delays. Our analysis of the sequence 2 contractor’s production pace between November 2004 and July 2005 indicates that the base project’s construction is unlikely to be finished by September 15, 2006, if the contractor continues at the same pace or even accelerates the work somewhat. In fact, at the current or even a slightly accelerated pace, the base project would be completed several months after September 15, 2006. Our analysis shows that, to finish the base project’s construction by that date, the sequence 2 contractor would have to recover 1 day for every 8 remaining days between July 2005 and September 2006 and could incur no further delays. We continue to believe that the durations scheduled for a number of sequence 2 activities are unrealistic. According to CVC project team managers and staff, several activities, such as constructing the utility tunnel; testing the fire protection system; testing, balancing, and commissioning the HVAC system; installing interior stonework; and finishing work in some areas are not likely to be completed as indicated in the July 2005 schedule. 
Some of these are among the activities whose durations we identified as optimistic in early 2004 and that we and AOC’s construction management contractor identified as contributing most to the project’s schedule slippage in August 2005; these activities also served as the basis for our March 2004 recommendation to AOC that it reassess its activity durations to see that they are realistic and achievable at the budgeted cost. Because AOC had not yet implemented this recommendation and these activities were important to the project’s completion, we suggested in our May 17 testimony before the Subcommittee that AOC give priority attention to this recommendation. AOC’s construction management contractor initiated such a review after the May 17 hearing. Including more time in the schedule to complete these activities could add many more weeks to the project’s schedule. AOC’s more aggressive schedule management is identifying significant omissions of activities and time from the sequence 2 schedule. AOC’s approach, though very positive, is coming relatively late in the project. For example, several detailed activities associated with testing, balancing, and commissioning the CVC project’s HVAC and fire protection system were added to the schedule in July and August, extending the schedule by several months. AOC believes, and we agree, that some of this work may be done concurrently, rather than sequentially as shown in the August schedule, thereby saving some of the added time. However, until more work is done to further develop this part of the schedule, it is unclear how much time could be saved. Furthermore, the July schedule does not appear to include time to address significant problems with the HVAC or fire alarm systems should they occur during testing. In August 2005, CVC project personnel identified several risks and uncertainties facing the project that they believed could adversely affect its schedule. Examples include additional unforeseen conditions in constructing the utility and House Connector tunnels; additional delays in stonework due to slippages in stone deliveries, shortages of stone masons, or stop-work orders responding to complaints about noise from work in the East Front; and problems in getting the HVAC and fire protection systems to function properly, including a sophisticated air filtration system that has not been used before on such a large scale. Providing for these risks and uncertainties in the schedule could add another 60 to 90 days to the completion date, on top of the additional time needed to perform activities that were not included in the schedule or whose durations were overly optimistic. Over the last 2 months, AOC’s construction management contractor has identified 8 critical activity paths that will extend the base project’s completion date beyond September 15, 2006, if lost time cannot be recovered or further delays cannot be prevented. These 8 activity paths are in addition to 3 that were previously identified by AOC’s construction management contractor. In addition, the amount of time that has to be recovered to meet the September 15 target has increased significantly. The activity paths include work on the utility tunnel and testing and balancing the HVAC system; procuring and installing the control wiring for the air handling units; testing the fire alarm system; millwork and casework in the orientation theaters and atrium; and stonework in the East Front, orientation theaters, and exhibit gallery. 
Having so many critical activity paths complicates project management and makes on-time completion more difficult. AOC believes it can recover much of the lost time and mitigate remaining risks and uncertainties through such actions as using temporary equipment, adding workers, working longer hours, resequencing work, or performing some work after the CVC facility opens. AOC said that it is also developing a risk mitigation plan that should contain additional steps it can take to address the risks and uncertainties facing the project. Various AOC actions could expedite the project and save costs, but they could also have less positive effects. For example, accelerating work on the utility tunnel could save costs by preventing or reducing delays in several other important activities whose progress depends on the tunnel’s completion. Conversely, using temporary equipment or adding workers to overcome delays could increase the project’s costs if the government is responsible for the delays. Furthermore, (1) actions to accelerate the project may not save time; (2) the time savings may be offset by other problems; or (3) working additional hours, days, or shifts may adversely affect the quality of the work or worker safety. In our opinion, decisions to accelerate work must be carefully made, and if the work is accelerated, it must be tightly managed. Possible proposals from contractors to accelerate the project by changing the scope of work or its quality could compromise the CVC facility’s life safety system, the effective functioning of the facility’s HVAC system, or the functionality of the facility to meet its intended purposes, or could increase the life-cycle costs of materials. In August, project personnel raised such possibilities as lessening the rigor of systems’ planned testing, opening the facility before all planned testing is done, or opening the facility before completing all the work identified by Capitol Preservation Commission representatives as having to be completed for the facility to open. While such measures could save time, we believe that the risks associated with these types of actions need to be carefully considered before adoption and that management controls need to be in place to preclude or minimize any adverse consequences of such actions, if taken. AOC’s schedule presents other management issues, including some that we have discussed in earlier testimonies. AOC tied the date for opening the CVC facility to the public to September 15, 2006, the date in the sequence 2 contract for completing the base project’s construction. Joining these two milestones does not allow any time for addressing unexpected problems in completing the construction work or in preparing for operations. AOC has since proposed opening the facility to the public on December 15, 2006, but the schedule does not yet reflect this proposed revision. Specifically, on September 6, 2005, AOC told Capitol Preservation Commission representatives that it was still expecting the CVC base project to be substantially completed by September 15, 2006, but it proposed to postpone the facility’s opening for 3 months to provide time to finish testing CVC systems, complete punch-list work, and prepare for operating the facility. In our view, allowing some time to address unexpected problems is prudent. AOC’s and its contractors’ reassessment of activity durations in the August schedule may not be sufficiently rigorous to identify all those that are unrealistic. 
In reassessing the project’s schedule, the construction management contractor judged reasonable some durations that we considered likely to be too optimistic. Recently, AOC’s sequence 2 and construction management contractors reported that, according to their reassessment, the durations for interior stonework were reasonable. We previously found that these durations were optimistic, and CVC project staff we interviewed in August likewise believed they were unrealistic. We have previously expressed concerns about a lack of sufficient or timely analysis and documentation of delays and their causes and determination of responsibility for the delays, and we recommended that AOC perform these functions more rigorously. We have not reassessed this area recently. However, given the project’s uncertain schedule, we believe that timely and rigorous analysis and documentation of delays and their causes and determination of responsibility for them are critical. We plan to reexamine this area in the next few weeks. The uncertainty associated with the project’s construction schedule increases the importance of having a summary schedule that integrates the completion of construction with preparations for opening the facility to the public, as the Subcommittee has requested and we have recommended. Without such a schedule, it is difficult to determine whether all necessary activities have been identified and linked to provide for a smooth opening or whether CVC operations staff will be hired at an appropriate time. In early September, AOC gave a draft operations schedule to its construction management contractor to integrate into the construction schedule. As we noted in our July 14 testimony, AOC could incur additional costs for temporary work if it opens the CVC facility to the public before the construction of the House and Senate expansion spaces is substantially complete. As of last week, AOC’s contractors were still evaluating the construction schedule for the expansion spaces, and it was not clear what needs AOC would have for temporary work. The schedule, which we received in early September, shows December 2006 as the date for completing the construction of the expansion spaces. We have not yet assessed the likelihood of the contractor’s meeting this date. Finally, we are concerned about the capacity of the Capitol Power Plant (CPP) to provide adequately for cooling, dehumidifying, and heating the CVC facility during construction and when it opens to the public. Delays in completing CPP’s ongoing West Refrigeration Plant Expansion Project, the removal from service of two chillers because of refrigerant gas leaks, fire damage to a steam boiler, management issues, and the absence of a CPP director could potentially affect CPP’s ability to provide sufficient chilled water and steam for the CVC facility and other congressional buildings. These issues are discussed in greater detail in appendix III. Since the Subcommittee’s July 14 CVC hearing, we have discussed a number of actions with AOC officials that we believe are necessary to address problems with the project’s schedule and our concerns. AOC generally agreed with our suggestions, and a discussion of them and AOC’s responses follows. 
By October 31, 2005, work with all relevant stakeholders to reassess the entire project’s construction schedule, including the schedule for the House and Senate expansion spaces, to ensure that all key activities are included, their durations are realistic, their sequence and interrelationships are appropriate, and sufficient resources are shown to accomplish the work as scheduled. Specific activities that should be reassessed include testing, balancing, and commissioning the HVAC and filtration systems; testing the fire protection system; constructing the utility tunnel; installing the East Front mechanical (HVAC) system; installing interior stonework and completing finishing work (especially plaster work); fabricating and delivering interior bronze doors; and fitting out the gift shops. AOC agreed and has already asked its construction management and sequence 2 contractors to reassess the August schedule. AOC has also asked the sequence 2 contractor to show how it will recover time lost through delays.

Carefully consider the costs, benefits, and risks associated with proposals to change the project’s scope, modify the quality of materials, or accelerate work, and ensure that appropriate management controls are in place to prevent or minimize any adverse effects of such actions. AOC agreed. It noted that the sequence 2 contractor had already begun to work additional hours to recover lost time on the utility tunnel. AOC also noted that its construction management contractor has an inspection process in place to identify problems with quality and has recently enhanced its efforts to oversee worker safety.

Propose a CVC opening date to Congress that allows a reasonable amount of time between the completion of the base project’s construction and the CVC facility’s opening to address any likely problems that are not provided for in the construction schedule. The December 15, 2006, opening date that AOC proposed earlier this month would provide about 90 days between these milestones if AOC meets its September 15, 2006, target for substantial completion. However, we continue to believe that AOC will have difficulty meeting the September 15 target, and although the 90-day period is a significant step in the right direction, an even longer period is likely to be needed.

Give priority attention to effectively implementing our previous recommendations that AOC (1) analyze and document delays and the reasons and responsibility for them on an ongoing basis and analyze the impact of scope changes and delays on the project’s schedule at least monthly and (2) advise Congress of any additional costs it expects to incur to accelerate work or perform temporary work to advance the CVC facility’s opening so Congress can weigh the advantages and disadvantages of such actions. AOC agreed.

AOC is still updating its estimate of the cost to complete the CVC project, including the base project and the House and Senate expansion spaces. As a result, we have not yet had an opportunity to comprehensively update our November 2004 estimate that the project’s estimated cost at completion will likely be between $515.3 million without provision for risks and uncertainties and $559 million with provision for risks and uncertainties. Since November 2004, we have added about $10.3 million to our $515.3 million estimate to account for additional CVC design and construction work. (App. IV provides information on the project’s cost estimates since the original 1999 estimate.) 
However, our current $525.6 million estimate does not include costs that AOC may incur for delays beyond those delay costs included in our November 2004 estimate. Estimating the government’s costs for delays that occurred after November 2004 is difficult because it is unclear who ultimately will bear responsibility for various delays. Furthermore, AOC’s new estimates may cause us to make further revisions to our cost estimates. To date, about $528 million has been provided for CVC construction. (See app. V.) This amount does not include about $7.8 million that was made available for either CVC construction or operations. In late August, we and AOC found that duplicate funding had been provided for certain CVC construction work. Specifically, about $800,000 was provided from two separate funding sources for the same work. The House and Senate Committees on Appropriations were notified of this situation and AOC’s plan to address it. The funding that has been provided and that is potentially available for CVC construction covers the current estimated cost of the facility at completion and provides some funds for risks and uncertainties. However, if AOC encounters significant additional costs for delays or other changes, more funding may be needed. Because of the potential for coordination problems with a project as large and complex as CVC, we had recommended in July that AOC promptly designate responsibility for integrating the planning and budgeting for CVC construction and operations. In late August, AOC designated a CVC staff member to oversee both CVC construction and operations funding. AOC had also arranged for its operations planning consultant to develop an operations preparation schedule and for its CVC project executive and CVC construction management contractor to prepare an integrated construction and operations schedule. AOC has received a draft operations schedule and has given it to its construction management contractor to integrate into the construction schedule. Pending the hiring of an executive director for CVC, which AOC would like to occur by the end of January 2006, the Architect of the Capitol said he expects his Chief Administrative Officer, who is currently overseeing CVC operations planning, to work closely with the CVC project executive to integrate CVC construction and operations preparations. Work and costs could also be duplicated in areas where the responsibilities of AOC’s contractors overlap. For example, the contracts or planned modifications for both AOC’s CVC construction design contractor and CVC operations contractor include work related to the gift shop’s design and wayfinding signage. We discussed the potential for duplication with AOC, and it agreed to work with its operations planning contractor to clarify the contractor’s scope of work, eliminate any duplication, and adjust the operations contract’s funding accordingly. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact Bernard Ungar at (202) 512-4232 or Terrell Dorn at (202) 512-6923. Other key contributors to this testimony include Shirley Abel, Michael Armes, John Craig, George Depaoli, Jr., Maria Edelstein, Elizabeth Eisenstadt, Brett Fallavollita, Jeanette Franzel, Jackie Hamilton, Bradley James, Scott Riback, and Kris Trueblood. 
With the assistance of a contractor, Hulett & Associates, we assessed the risks associated with the Architect of the Capitol's (AOC) July 2005 schedule for the Capitol Visitor Center (CVC) project and used the results of our assessment to estimate a time frame for completing the base CVC project with and without identified risks and uncertainties. In August 2005, we and the contractor interviewed project managers and team members from AOC and its major CVC contractors, a representative from the Army Corps of Engineers, and AOC's Chief Fire Marshal to determine the risks they saw in completing the remaining work and the time they considered necessary to finish the CVC project and open it to the public. Using the project's July 2005 summary schedule (the most recent schedule available when we did our work), we asked the team members to estimate how many workdays would be needed to complete the remaining work. More specifically, for each summary-level activity that the members had a role or expertise in, we asked them to develop three estimates of the activity's duration—the least, most likely, and longest time needed to complete the activity. We planned to estimate the base project's most likely completion date without factoring in risks and uncertainties using the most likely activity durations estimated by the team members. In addition, using these three-point estimates and a simulation analysis to calculate different combinations of the team's estimates that factored in identified risks and uncertainties, we planned to estimate completion dates for the base project at various confidence levels.

In August 2005, AOC's construction management and sequence 2 contractors were updating the July project schedule to integrate the construction schedule for the House and Senate expansion spaces, reflect recent progress and problems, and incorporate the results to date of their reassessment of the time needed for testing, balancing, and commissioning the heating, ventilation, and air-conditioning (HVAC) system and for fire alarm testing. This reassessment was being done partly to implement a recommendation we had made to AOC after assessing the project's schedule in early 2004 and finding that the scheduled durations for these and other activities were optimistic. AOC's construction management and sequence 2 contractors found that key detailed activities associated with the HVAC system had not been included in the schedule and that the durations for a number of activities were not realistic. Taking all of these factors into account, AOC's contractors revised the project's schedule in August. AOC believes that the revised schedule, which shows the base project's completion date slipping by several months, allows too much time for the identified problems. As a result of this problem and others we brought to AOC's attention, AOC has asked its contractors to reassess the schedule. AOC's construction management contractor believes that such a reassessment could take up to 2 months. In our opinion, there are too many uncertainties associated with the base project's schedule to develop reliable estimates of specific completion dates, with or without provisions for risks and uncertainties.
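Although the actual risk assessment was performed against the full project schedule, the three-point technique described above can be illustrated with a minimal sketch. The activity names and durations below are hypothetical placeholders, not the team members' actual estimates; the sketch assumes a triangular distribution over each three-point estimate and, for simplicity, that the activities run sequentially on the critical path.

import random

# Hypothetical summary-level activities with three-point duration
# estimates (least, most likely, longest) in workdays.
activities = {
    "HVAC testing, balancing, and commissioning": (60, 90, 150),
    "Fire protection system testing": (30, 45, 80),
    "Interior stonework and plaster finishing": (90, 120, 180),
}

def simulate(trials=10_000):
    """Draw each activity's duration from a triangular distribution
    and sum across activities, once per trial."""
    totals = []
    for _ in range(trials):
        total = sum(random.triangular(low, high, mode)
                    for low, mode, high in activities.values())
        totals.append(total)
    return sorted(totals)

totals = simulate()
most_likely = sum(mode for _, mode, _ in activities.values())
print(f"Sum of most-likely durations: {most_likely} workdays")
# Duration estimates at various confidence levels.
for conf in (0.50, 0.80, 0.90):
    idx = int(conf * (len(totals) - 1))
    print(f"{conf:.0%} confidence: at most {totals[idx]:.0f} workdays")

Because the three-point estimates are skewed toward the longer durations, the simulated totals at high confidence levels run well past the sum of the most-likely durations, which is why completion-date estimates that factor in risks and uncertainties fall later than the most-likely date.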
Several issues could affect the capacity of the Capitol Power Plant (CPP) to provide sufficient chilled water and steam for the CVC facility and other congressional buildings. CPP produces chilled water for cooling and dehumidification and steam for heating Capitol Hill buildings. To accommodate the CVC facility and meet other needs, CPP has been increasing its production capacity through the West Refrigeration Plant Expansion Project. This project, which was scheduled for completion in time to provide chilled water for the CVC facility during construction and when it opened, has been delayed. In addition, problems with aging equipment, fire damage, management weaknesses, and a leadership vacancy could affect CPP's ability to provide chilled water and steam. More specifically:

In July, two chillers in CPP's East Refrigeration Plant were taken out of service because of a significant refrigerant gas leak. The refrigerant, whose use is being phased out nationally, escaped into the surrounding environment. Because of the chillers' age and use of an outdated refrigerant, AOC has determined that it would not be cost-effective to repair the chillers.

CPP's chilled water production capacity will be further reduced between December 1, 2005, and March 15, 2006, when the West Refrigeration Plant is to be shut down to enable newly installed equipment to be connected to the existing chilled water system. However, the remainder of CPP's East Refrigeration Plant is to remain operational during this time, and AOC expects that the East Refrigeration Plant will have sufficient capacity to meet the lower wintertime cooling demands. Additionally, CPP representatives indicated that they could bring the West Refrigeration Plant back online to provide additional cooling capacity in an emergency. CPP is developing a cost estimate for this option.

In June, one of two CPP boilers that burn coal to generate steam was damaged by fire. According to a CPP incident report, CPP operator errors contributed to the incident and subsequent damage. Both boilers were taken off-line for scheduled maintenance between July 1 and September 15, and CPP expects both boilers to be back online by September 30, thereby enabling CPP to provide steam to CVC when it is needed.

Several management issues at CPP could further affect the expansion plant's and CPP's operational readiness:

CPP has not yet developed a plan for staffing and operating the entire plant after the West Refrigeration Plant becomes operational or contracted for its current staff to receive adequate training to operate the West Refrigeration Plant's new, much more modern equipment.

CPP has not yet received a comprehensive commissioning plan from its contractor.

A number of procurement issues associated with the plant expansion project have arisen. We are reviewing these issues.

CPP has been without a director since May 2005, when the former director resigned. CPP is important to the functioning of Congress, and strong leadership is needed to oversee the completion of the expansion project and the integration, commissioning, and operation of the new equipment, as well as address the operational and management problems at the plant. Filling the director position with an experienced manager who is also an expert in the production of steam and chilled water is essential. AOC recently initiated the recruitment process.

[App. IV table, flattened in the source: the project's cost estimates since the original 1999 estimate. Recoverable row labels cover budget increases for the House and Senate expansion spaces; the air filtration system funded by the Department of Defense (DOD); bid prices exceeding estimates, preconstruction costs exceeding budgeted costs, and unforeseen field conditions; other factors (costs associated with delays and design-to-budget overruns); the project budget after increases (as of November 2004); GAO-projected costs to complete after proposed scope changes (as of June 2005, excluding risks and uncertainties); additional cost-to-complete items (as of August 2005); design of the Library of Congress tunnel (funds from the Capitol Preservation Fund); GAO-projected costs to complete (as of August 2005, excluding risks and uncertainties); and potential additional costs associated with risks and uncertainties (as of November 2004), less the risks and uncertainties GAO believes the project faced in November 2004 (congressional seals, orientation film, and backpack storage space, $4.2 million, plus U.S. Capitol Police security monitoring, $3.0 million, for a total of $7.2 million) and less the additional cost-to-complete items as of August 2005 ($3.1 million). The five additional scope items are the House connector tunnel, the East Front elevator extension, the Library of Congress tunnel, temporary operations, and enhanced perimeter security.]

[App. V table, flattened in the source: funding provided for CVC construction, with row labels for the base project (as of November 2004), U.S. Capitol Police security monitoring, design of the Library of Congress tunnel (funds from the Capitol Preservation Fund), and construction-related funding provided in the operations obligation plan, with current funding totals as of June 2005 and August 2005.]

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses progress on the Capitol Visitor Center (CVC) project. Our remarks will focus on (1) the Architect of the Capitol's (AOC) progress in managing the project's schedule since the Subcommittee on the Legislative Branch, Senate Committee on Appropriations' July 14 hearing on the project; (2) our estimate of a general time frame for completing the base project's construction and the preliminary results of our assessment of the risks associated with AOC's July 2005 schedule for the base project; and (3) the project's costs and funding, including the potential impact of scheduling issues on cost. However, we will not, as originally planned, provide specific estimated completion dates because AOC's contractors revised the schedule in August to reflect recent delays, but AOC has not yet evaluated the revised schedule. AOC believes that the time added to the schedule by its contractors is unreasonable. Until AOC completes its evaluation and we assess it, any estimates of specific completion dates are, in our view, tentative and preliminary. Similarly, we will wait until the schedule is stabilized to update our November 2004 estimate of the cost to complete the project. Currently, AOC and its consultant, McDonough Bolyard Peck (MBP), are still developing their cost-to-complete estimates.

In summary, although AOC and its construction contractors have continued to make progress since the Subcommittee's July 14 CVC hearing, several delays have occurred and more are expected. These delays could postpone the base project's completion significantly beyond September 15, 2006, the date targeted in AOC's July 2005 schedule. Although not yet fully reviewed and accepted by AOC, the schedule that AOC's contractors revised in August 2005 shows February 26, 2007, as the base project's completion date. According to our preliminary analysis of the project's July 2005 schedule, the base project is more likely to be completed sometime in the spring or summer of 2007 than by September 15, 2006. Unless the project's scope is changed or extraordinary actions are taken, the base project is likely to be completed later than September 15, 2006, for the reasons cited by the contractors and for other reasons, such as the optimistic durations estimated for a number of activities and the risks and uncertainties facing the project. AOC believes that the contractors added too much time to the schedule in August for activities not included in the schedule and that it can expedite the project by working concurrently rather than sequentially and by taking other actions. Additionally, we are concerned about actions that have been, or could be, proposed to accelerate work to meet the September 15, 2006, target date. The project's schedule also raises a number of management concerns, including the potential for delays caused by not allowing enough time to address potential problems or to complete critical activities. Fiscal year 2006 appropriations have provided sufficient funds to cover AOC's request for CVC construction funding as well as additional funds for some risks and uncertainties that may arise, such as costs associated with additional sequence 2 delays or unexpected conditions. Although sequence 2 delays have been occurring, the extent to which the government is responsible for their related costs is not clear at this time.
Additional funding may be necessary if the government is responsible for significant delay-related costs or if significant changes are made to the project's design or scope or to address unexpected conditions. In addition, we and AOC identified some CVC construction activities that received duplicate funding. AOC has discussed this issue with the House and Senate Appropriations Committees.
Mobilization is the process of assembling and organizing personnel and equipment, activating or federalizing the reserve component, and bringing the armed forces to a state of readiness for war or other national emergency. It is a complex undertaking that requires constant and precise coordination among a number of commands and officials. Mobilization usually begins with the President invoking a mobilization authority and ends with the mobilization of an individual Reserve or National Guard member.

There are seven reserve components: the Army Reserve, Army National Guard, Air Force Reserve, Air National Guard, Naval Reserve, Marine Corps Reserve, and Coast Guard Reserve. Reserve forces can be divided into three major categories: the Ready Reserve, the Standby Reserve, and the Retired Reserve. The Ready Reserve had approximately 1.2 million Guard and Reserve members at the end of fiscal year 2002, and its members were the only reservists who were subject to mobilization under the partial mobilization declared by President Bush on September 14, 2001. Within the Ready Reserve, there are three subcategories: the Selected Reserve, the Individual Ready Reserve (IRR), and the Inactive National Guard. Members of all three subcategories are subject to mobilization under a partial mobilization.

In fiscal year 2002, the Selected Reserve had 882,142 members. The Selected Reserve comprises all personnel who are active members of National Guard or Reserve units and who participate in regularly scheduled training; as a result, they draw regular pay for their reserve service. It also includes individual mobilization augmentees—individuals who train regularly, for pay, with active component units.

In fiscal year 2002, the IRR had 314,037 members. During a partial mobilization, these individuals, who were previously trained during periods of active duty service, can be mobilized to fill requirements. Each year, the services transfer thousands of personnel who have completed the active duty or Selected Reserve portions of their military contracts, but who have not reached the end of their minimum service obligations, to the IRR. However, IRR members do not participate in any regularly scheduled training, and they are not paid for their membership in the IRR.

In fiscal year 2002, the Inactive National Guard had 3,142 Army National Guard members. This subcategory contains individuals who are temporarily unable to participate in regular training but who wish to remain attached to their National Guard units. These individuals were not subject to mobilization prior to the declaration of a partial mobilization on September 14, 2001.
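Taken together, the three subcategories account for essentially all of the Ready Reserve's approximately 1.2 million members. A minimal sketch of the roll-up, using only the end of fiscal year 2002 figures reported here:

# Ready Reserve subcategory strengths at the end of fiscal year 2002,
# as reported in this statement.
ready_reserve = {
    "Selected Reserve": 882_142,
    "Individual Ready Reserve (IRR)": 314_037,
    "Inactive National Guard": 3_142,
}

total = sum(ready_reserve.values())
print(f"Ready Reserve total: {total:,} members")         # 1,199,321
print(f"Approximately {total / 1_000_000:.1f} million")  # 1.2 million, as reported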
Most reservists who were recalled to active duty for other than normal training after September 11, 2001, were mobilized under one of the three authorities listed in table 1. DOD had the authority to use section 12304, the Presidential Reserve Call-Up authority, to mobilize reservists in support of contingency operations in Bosnia, Kosovo, and Southwest Asia prior to September 11, 2001. It continued to use this authority to mobilize reservists for ongoing operations in these areas even after the partial mobilization authority (section 12302) was invoked on September 14, 2001. The partial mobilization authority has been used to support both domestic and overseas missions related to the global war on terrorism, including the operations in Afghanistan and Iraq.

After invoking section 12302 on September 14, 2001, the President delegated his mobilization authority to the Secretary of Defense and the Secretary of Transportation. The Secretary of Defense further delegated this authority to the service secretaries and allowed them to delegate the authority to any civilian official who was appointed by the President and confirmed by the Senate. When the Secretary of Defense delegated his authority, he set limits on the numbers of personnel that the services could mobilize. On September 14, 2001, the Secretary of Defense assigned the Army a mobilization cap of 10,000 personnel; the Navy a cap of 3,000; the Marine Corps a cap of 7,500; and the Air Force a cap of 13,000, for a total cap of 33,500. The caps were raised several times, but in aggregate they have remained below 300,000 since they were first established.

Since September 11, 2001, the services have also made extensive use of their section 12301(d) authority. This authority can involve complicated administrative processing because reservists must volunteer to be activated, and individuals who are brought on to active duty under this authority have varying starting and ending dates. However, this authority provides flexibility that is advantageous to both individual reservists and the services. The reservists can schedule their active duty periods around family and work responsibilities, and the services are not constrained by the numerical caps and time limitations of other mobilization authorities.

As figure 1 indicates, mobilization is a decentralized process that requires the collaboration of many organizations throughout DOD. The mobilization process typically begins with the component commanders, who are responsible for commanding their services' active and reserve forces within a combatant commander's area of responsibility. The component commanders identify requirements for wars or contingency operations within their areas of responsibility and submit the requirements to the combatant commanders. The combatant commanders, who have responsibility and operational control over forces from two or more services, consolidate the requirements from their component commanders and develop "requests for forces" (RFF). Each RFF generally identifies the mission, along with the equipment, personnel, units, types of units, or general capabilities that are necessary to carry out the mission. RFFs may be very detailed or very general, depending on the nature of the mission. Furthermore, RFFs typically contain requirements that must be filled by more than one service. The combatant commanders send RFFs to the Chairman of the Joint Chiefs of Staff, who is the principal military advisor to the President and the Secretary of Defense on mobilization matters. The Joint Staff validates and prioritizes requirements from the combatant commanders and then sends draft deployment orders via e-mail to the supporting commanders, who will supply forces or equipment. The Chairman of the Joint Chiefs of Staff considers (1) the assessments of the service headquarters, reserve component commanders, and supporting combatant commanders; (2) input from his own staff; and (3) the technical advice, legal opinions, and policies provided by OSD. The Chairman then makes a recommendation to the Secretary of Defense concerning the timing of mobilizations and the units or individuals to be mobilized.
When the Secretary of Defense completes his review of the validated RFF and is satisfied with the mobilization justification, he authorizes the deployment of forces, and the Chairman of the Joint Chiefs of Staff issues a deployment order. The services then review the approved requirements on the deployment order and coordinate with applicable force providers and reserve component headquarters to check the readiness of the units that had been projected to fill the requirements. If necessary, units or individuals may be identified to substitute for, or augment, the units and individuals that were originally projected. When the units or individuals are firmly identified for mobilization, the assistant secretaries of the military departments with responsibility for manpower and reserve affairs approve the mobilization packages. Finally, the services issue mobilization orders to units and individuals. These orders state where and when to report for duty, as well as the length of duty.

In September 2001, the Office of the Under Secretary of Defense (Personnel and Readiness), which is responsible for developing the policies, plans, and programs to manage the readiness of both active and reserve forces, issued a memorandum containing specific mobilization guidance. This guidance instructed the military departments to write mobilization orders for 1 year but allowed the service secretaries the option of extending mobilizations for a second year. In subsequent mobilization guidance, issued in January, March, and July 2002, the Under Secretary instructed the services to use volunteers to the maximum extent possible, so that involuntary mobilizations would be minimized. In conjunction with the services, the Office of the Assistant Secretary of Defense for Reserve Affairs, which has overall responsibility for reserve policies and procedures within DOD, set a goal to provide reservists with 30 days' notice prior to mobilization, when operationally feasible.

The services took different approaches when alerting their reservists prior to mobilization. The Army took the most formal approach and attempted to provide its reservists with official orders 30 days prior to their mobilization dates. The other services took less formal approaches and tried to notify reservists of impending mobilizations and deployments when requirements were identified or validated, or at some other key point in the mobilization process. According to DOD officials, the mobilization process—from the time a requirement is generated until the time that a reservist reports to a mobilization site to fill that requirement—can take anywhere from 1 day to several months, but it normally takes several weeks. Based on our observations at mobilization processing sites and discussions with mobilization officials, we found that most reservists were able to complete their required briefings, screenings, and administrative functions within 24 to 96 hours after reaching their mobilization sites. However, some reservists required lengthy postmobilization training before they were able to deploy.

Unreliable and inconsistent data make it difficult to quantify the exact change in the tempo of reserve operations since September 11, 2001. Officials from the Office of the Assistant Secretary of Defense for Reserve Affairs have characterized mobilization data from the early days and weeks following September 11 as questionable.
In addition, because reservists can perform a wide variety of sometimes-overlapping training and operational missions, in a variety of voluntary or involuntary duty statuses, mobilization data have been captured differently over time. For example, because the state governors mobilized large numbers of National Guard troops to provide security at their civilian airports, DOD's mobilization figures for most of 2002 included state active duty figures as well as figures for federal mobilizations. However, state active duty was dropped from DOD's mobilization figures after the National Guard moved out of the last civilian airport in September 2002. It is also difficult to fully capture increases in reserve tempos because mobilization figures that are based strictly on section 12302 partial mobilization orders ignore the major contributions of reserve volunteers, some of whom are serving lengthy tours under section 12301(d) orders. Despite the identified data challenges, figure 2 uses consistently reported data to demonstrate that reserve mobilizations have not dipped below 50,000 during any week since January 2002. Figure 2 also shows the dramatic increase in mobilizations that began in January 2003 to support operations in Iraq. Figures 3 and 4 show the mobilizations of each of the services between January 2002 and July 2003. Figure 3 shows that between January 2003 and July 2003, the Army had more reservists mobilized than did all the other services combined. However, figure 4 shows that the mobilizations were most far-reaching within the Coast Guard, which had more than one-third of its Ready Reserve forces mobilized during April 2003.

Previously, we reported on several issues surrounding the increased use of reserve forces. Our June 2002 report noted that maintaining employers' continued support for their reservist employees will be critical if DOD is to retain experienced reservists in these times of longer and more frequent deployments. We assessed the relations between reservists and their civilian employers, focusing specifically on DOD's outreach efforts designed to improve these important relationships. We found that many employers we surveyed were not receiving adequate advance notice prior to their reservist employees' departure for military duty. We reported that in spite of repeated memoranda from the Assistant Secretary of Defense for Reserve Affairs, advance notification continued to be a problem and that the services had not consistently met the 30-day advance notification goal. We recommended that the Secretary of Defense direct the services to determine how many orders are not being issued 30 days in advance of deployments and why, and then take the necessary corrective actions toward fuller compliance with the goal. DOD agreed that there was merit in studying why the reserve components miss the 30-day goal.

Citing the increased use of the reserves to support military operations, House Report 107-436 accompanying the Fiscal Year 2003 National Defense Authorization Act directed us to review compensation and benefit programs for reservists serving on active duty. In response, we are reviewing (1) income protection for reservists called to active duty, (2) family support programs, and (3) health care access. In March 2003, we testified before the Subcommittee on Total Force, Committee on Armed Services, House of Representatives, on our preliminary observations related to this work.
During the 1990-1991 Persian Gulf War, health problems prevented the deployment of a significant number of Army reservists. To help correct this problem, Congress passed legislation that required reservists to undergo periodic physical and dental examinations. The National Defense Authorization Act for 2002 directed us to review the value and advisability of providing such examinations. We also examined whether the Army is collecting and maintaining information on reservists' health. In April 2003, we reported that without adequate examinations, the Army may train, support, and mobilize reservists who are unfit for duty. Further, the Army had not consistently carried out the statutory requirements for monitoring the health and dental status of Army early deploying reservists. At the early deploying units we visited, approximately 66 percent of the medical records were available for review. We found that about 68 percent of the required physical examinations for those over age 40 had not been performed and that none of the annual medical certificates required of reservists had been completed by reservists and reviewed by the units. We recommended that the Secretary of Defense ensure that for early deploying reservists the required physical examinations, annual medical certificates, and annual dental examinations be completed. DOD concurred with our recommendations.

After the events of September 11, 2001, DOD did not follow its existing operation plans in mobilizing nearly 300,000 reservists. DOD's traditional mobilization process relies on requirements from operation plans that have been coordinated with key mobilization officials prior to the start of the mobilization process. The operation plans in existence on September 11, 2001, did not include all the requirements that were needed to respond to the domestic terrorist threat. Overseas operation plans did not focus on terrorist threats or the uncertain political environment in Southwest Asia. Nor did operation plans adequately address the increasing requirements for individuals and small, tailored task forces. Because DOD could not rely on existing operation plans to guide its mobilizations, it used a modified mobilization process that was slower than the traditional mobilization process.

DOD has called about 300,000 of the 1.2 million National Guard and Reserve personnel to active duty since September 2001. These reservists fought on the front lines in Iraq; tracked down Taliban and al Qaeda members throughout Asia and Africa; maintained the peace in the Balkans, Afghanistan, and now Iraq; and participated in domestic missions ranging from providing security at airports and at the Salt Lake City Olympics to fighting drug trafficking and providing disaster relief. With many of these missions—including those associated with the global war on terrorism—expected to continue, reserve force mobilizations are likely to persist for the foreseeable future.

DOD recognized before September 11, 2001, that no significant operation could be conducted without reserve involvement. DOD's mobilization process was designed to mobilize reservists based on the execution of combatant commander operation plans and a preplanned flow of forces. As a result, the mobilization process operates most efficiently when operation plans accurately and completely capture mobilization requirements.
However, since DOD develops its operation plans using a deliberate planning process that involves input and coordination from OSD, the Joint Staff, and the services, the process can take years, and operation plans have not been quick to respond to changes in the threat environment.

Prior to the events of September 11, 2001, we issued a number of reports highlighting the need for effective U.S. efforts to combat terrorism domestically and abroad. For example, we recommended that the federal government conduct multidisciplinary and analytically sound threat and risk assessments to define and prioritize requirements and properly focus programs and investments in combating terrorism. Threat and risk assessments are decision-making support tools that are used to establish requirements and prioritize program investments. DOD uses a variation of this approach. We also reported on DOD's use of a risk-assessment model to evaluate force protection security requirements for mass-casualty terrorist incidents at DOD military bases.

While DOD's goal is to conduct mobilizations based on operation plans developed through a deliberate planning process, the department recognizes that during the initial stages of an emergency it may have to resort to a crisis action response rather than adhering to its operation plans. This is particularly true if the emergency was not anticipated. During such crisis response periods, DOD can use a variety of authorities to position its forces where they are needed. For example, following the events of September 11, 2001, DOD used voluntary orders and other available means to get and keep reservists on active duty. As of November 8, 2001, almost 40,000 reservists had been mobilized under the partial mobilization authority for the global war on terrorism, but almost 19,000 reservists were on active duty and positioned where they were needed under other federal authorities. By comparison, more than 53,000 reservists were mobilized under the partial mobilization authority for the global war on terrorism on December 3, 2002, but the number of reservists on active duty under other federal authorities had dropped to fewer than 5,000.

When DOD moved beyond its crisis action response to the events of September 11, 2001, it was not able to rely on operation plans to guide its mobilizations because operation plans did not contain requirements to address the domestic response to the terrorist threat. According to senior DOD officials, when terrorists crashed planes into the Pentagon, the World Trade Center, and a field in Pennsylvania on September 11, 2001, none of DOD's operation plans contained requirements for National Guard troops to deploy to the nation's civilian airports.

In September 2001, we reported that some threats are difficult, if not impossible, to predict and that an effective antiterrorism program that can reduce vulnerabilities to such attacks is therefore an important aspect of military operations. However, the effectiveness of DOD's antiterrorism program had been limited because DOD had not (1) assessed vulnerabilities at all installations, (2) systematically prioritized resource requirements, and (3) developed a complete assessment of potential threats. DOD has been taking steps to improve the program.
Despite the lack of airport security requirements in operation plans, between November 2001 and April 2002, an average of approximately 7,500 National Guard members were mobilized at the nation's civilian airports. During the same period, an average of almost 1,900 National Guard members were on state active duty, many to provide security at other key infrastructure sites such as tunnels, bridges, and nuclear power plants.

According to senior Air Force officials, none of the operation plans that existed on September 11, 2001, contained requirements for the extended use of Guard and Reserve members to fly combat air patrols over the nation's capital and major cities. Yet, reservists were performing that mission on September 11, 2001, and they continue to support the combat air patrol mission, particularly when the national threat level is raised.

According to DOD officials, preexisting service mobilization plans called for Guard and Reserve forces to move to active duty bases and provide security at those bases after the active forces had departed from the bases. However, after September 11, many Guard and Reserve members were on active duty (voluntarily and involuntarily) at active and reserve bases and were filling security requirements that were not in any operation plan. For example, even while active forces remained, two selected Marine Corps battalions were mobilized for approximately 12 months—one at Camp Lejeune, North Carolina, and one at Camp Pendleton, California—to quickly respond to any additional terrorist attacks within the United States. In addition, the Air Force had to unexpectedly bring reservists on active duty to provide security for their reserve bases after September 11. In particular, Air National Guard security forces were needed to provide security at bases from which the Guard was flying combat air patrol missions.

According to DOD officials, requirements in overseas operation plans focused on traditional operations against national military forces, rather than on tracking terrorists throughout Afghanistan and around the globe. For several years, defense planning guidance had been formulated around the concept that the military had to be ready to fight and win two major theater wars, generally viewed as one in Southwest Asia and one on the Korean peninsula. According to DOD officials, operation plans for these areas focused on the threats posed by rogue countries. Moreover, even after defense planning guidance had begun to indicate a need for the military to be capability-based rather than threat-based, operation plans continued to focus on conventional adversaries.

According to DOD officials, some of the mobilizations that took place in support of Operation Iraqi Freedom followed the order and timing established in the relevant operation plan and its associated time-phased force and deployment data file. However, the order and timing of other mobilizations changed due to the tenuous political environment and uncertainties concerning coalition partnerships and access to airspaces, as well as access to bases in Turkey, Oman, and Saudi Arabia. Access-to-base issues had also arisen during the 1991 Persian Gulf War.

According to DOD officials, the combatant commanders' requests for small, tailored task forces and individuals have been increasing since September 11, 2001, but the requirements for these small groups and individuals have not been fully addressed in the combatant commanders' existing operation plans.
Mobilization statistics demonstrate the large numbers of small groups and individuals that have been mobilized recently. For example, a DOD report showed that on March 5, 2003, the services had thousands of reservists mobilized as parts of small units or as individuals. The Navy had 266 one-person and 152 two-person units mobilized, and the Army also had hundreds of one- and two-person units mobilized. The Marine Corps strives to keep its units intact, and Marine Corps policy states that detachments must consist of at least two people, but the Marine Corps had 24 two-person and 22 three-person units mobilized. The Air Force had just 6 units with fewer than 20 people mobilized on that date. However, the services also had 12,682 individual augmentees mobilized on March 5, 2003—1,438 of them from the Air Force's two reserve components.

After September 11, 2001, DOD used a modified mobilization process because existing operation plans had not adequately addressed mobilization requirements and changing priorities. The modified process was able to respond to changing priorities and new requirements. However, because key mobilization officials did not have a lengthy deliberate planning period to discuss these new requirements and changing priorities, coordination had to take place during the mobilization process, thus lengthening the process. Under the modified process, close to two dozen approvals are needed to mobilize one unit or individual. A contractor study conducted for the Army Operations Office looked at how long it took from the time the U.S. Central Command issued an RFF until the time a deployment order was issued. Preliminary results showed that the monthly averages from February through June 2002 ranged from 18 to 19 days for this portion of the mobilization process. Coordination was much more difficult under the modified process due to the large number of deployment orders. For example, under the modified process, the Secretary of Defense signed 246 deployment orders to mobilize over 280,000 reservists between September 11, 2001, and May 21, 2003, compared with the fewer than 10 deployment orders needed to mobilize over 220,000 reservists during the 1991 Gulf War. The longer modified mobilization process is less efficient than the traditional process primarily because it relies on additional management oversight and multiple layers of coordination between the services, OSD, and the Joint Staff during the validating, approving, and filling of mobilization requirements. Many of these factors are detailed in the sections below.

DOD officials did not have visibility over the entire mobilization process primarily because DOD lacked adequate systems for tracking personnel and other resources. First, DOD's primary automated readiness reporting system could not adequately track the personnel and other resources within the small units that were frequently needed by combatant commanders. Second, some systems used by the active and reserve components to track personnel were incompatible. In addition, outdated mobilization guidance led to communication and coordination problems among the components.

DOD officials had limited visibility over the readiness of the entire force because DOD's primary readiness reporting data system tracked the readiness only of large units and not the readiness of resources within the small units that made up the larger reporting units.
These smaller units were often sufficient to meet the combatant commanders' requirements for the small, tailored units that were frequently requested after September 11, 2001. Because DOD officials did not have quick access to readiness information on these small units, they had to coordinate with reserve headquarters officials and, in some cases, the individual units themselves to obtain the readiness information needed to determine which unit would be best able to fill the combatant commanders' requirements.

The Global Status of Resources and Training System (GSORTS) is DOD's single automated system for reporting the readiness of all operational units within the U.S. armed forces. It does not function as a detailed management information system, but it does provide broad information on selected readiness indicators and includes a commander's assessment of the unit's ability to undertake the missions for which the unit was organized or designed. Units provide readiness reports to a central site where the data are processed and stored and then distributed to decision makers. The information in the system is supposed to support crisis response planning as well as deliberate planning. However, the services are only required to register forces that are included in operation plans or other war-planning documents. Generally, all large units report their readiness in the system. However, resources within the units are not necessarily reported. For example, GSORTS could show that a specific unit is not ready to perform its mission but fail to capture information that would indicate that some of the personnel and equipment within the unit are capable of performing their missions. Such information would benefit the services in their efforts to assemble the forces needed to meet joint organizational requirements. Because the Air Force combined various capabilities into nontraditional force groups in support of its Aerospace Expeditionary Force, it recognized the need to report readiness for small "building block" units that could be combined to provide the needed capabilities. As a result, the Air Force developed its own readiness reporting system that reported the readiness of more than 67,000 units in January 2003. The Army and the Navy do not report readiness at this small unit level. Consequently, when the combatant commanders submit RFFs that do not coincide with the forces that are reported in GSORTS, the decision makers within the services must coordinate with active and reserve component commanders to determine the readiness of the forces that would be available to fill the requested requirements.
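The roll-up problem described above can be illustrated with a minimal sketch. The unit and element names are hypothetical, and the ready/not ready flag is a simplification of GSORTS's graduated ratings; the point is only that a single unit-level assessment hides which elements inside a non-ready unit could still perform their missions.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    ready: bool  # can this element perform its designed mission?

@dataclass
class Unit:
    name: str
    elements: list

    def unit_level_rating(self):
        # One top-level assessment, as in unit-level readiness reporting:
        # ready only if the unit can perform the missions for which it
        # was organized or designed.
        return all(e.ready for e in self.elements)

    def capable_elements(self):
        # The small-unit detail that a requester would need but that
        # the top-level rating does not expose.
        return [e.name for e in self.elements if e.ready]

unit = Unit("Hypothetical battalion", [
    Element("Company A", ready=True),
    Element("Company B", ready=True),
    Element("Company C", ready=False),
])

print("Unit-level rating:", "ready" if unit.unit_level_rating() else "not ready")
print("Capable elements hidden by that rating:", unit.capable_elements())

Here the unit reports "not ready" even though two of its three elements could fill a small, tailored requirement, which is why decision makers had to fall back on ad hoc coordination with component commanders.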
DOD officials also lost visibility over the mobilization of reservists because some active and reserve component personnel tracking systems were not compatible. Some components within the respective services maintain personnel data in their own data systems for different purposes. In those cases, both the active and reserve components require data that are provided only in the other's data systems. Yet active and reserve component systems were not always compatible with each other, resulting in cumbersome workarounds or extensive ad hoc coordination between active and reserve officials and, according to DOD officials, the sometimes outright loss of visibility over the length of reservists' mobilization or deployment status.

The reserve and active components within some of the respective services maintain personnel data for different purposes. The individual reserve components maintain the mobilization data in their respective systems in order to track and maintain visibility over reservists' physical location and mobilization status. The reserve systems also maintain information on reservists' mobilization dates. Active components' systems maintain personnel data for forces that are under their control. Using a variety of data systems, the active components track such information as the number of personnel, the units to which the personnel are attached, and the location of the unit. However, the active components cannot always discern between the regular active and mobilized reserve servicemembers in their data systems.

The services' active and reserve components have developed their respective computerized systems to track their personnel data, but they are often unable to directly transfer information and data between their systems. Often, these systems do not report information in a standardized format and are not integrated with each other. For example, while most of the services provide DOD with unclassified mobilization data, some services provide classified mobilization data. DOD must then aggregate selected unclassified information on a separate computer file that can be used to produce a single consolidated mobilization report.

The incompatibilities between some active and reserve component data systems required mobilization officials to develop workarounds to acquire the information needed. Air Force officials cited the lack of a central automated system to manage and track mobilized reservists as a major problem that required extensive coordination between active and reserve components. Some components, like the Air National Guard and the Air Force Reserve, developed their own mobilization reporting systems to track the location and status of their reservists using computer spreadsheets. The use of local, nonintegrated data systems also affects the validity of some mobilization data. For example, we requested mobilization data from the Army Reserve on several occasions during our review, but Army Reserve officials cautioned us concerning the use of figures from their computerized database. They stated that the figures were unreliable and conflicted with the overall number of personnel they thought had been mobilized. Without an automated means for quickly and reliably capturing mobilization data, the Army has had to rely on a slow mobilization process that requires constant coordination between active and reserve component officials.

The coordination between active and reserve component officials within the Army and the Navy often takes the form of relatively inefficient methods to determine the status of mobilized reservists. For example, in the initial months following September 11, 2001, the Navy had no automated means to track reservists from their home stations to their gaining commands. The entire mobilization process was based on paper, telephone calls, faxes, and e-mail messages. The lack of compatibility between automated data systems, and the sometimes cumbersome workarounds undertaken by the services to obtain reservists' information, has at times led to the outright loss of DOD visibility over the length of reservists' mobilization or deployment status and resulted in cases where reservists were inadvertently deployed beyond the original year specified in their orders.
Additionally, Air Force officials told us that their major commands have had trouble filling new requirements because they cannot consistently determine who has volunteered and who is already serving on active duty. Because of limited visibility, some Navy processing personnel did not know in advance which reservists had been ordered to their mobilization processing sites or when the reservists were expected to report. Air Force officials said that they either totally lost or had diminished visibility over their reservists once they were mobilized and assigned to active commands. Reserve component officials from the Air Force said that a tracking system does not exist to effectively monitor reservists from the time they are mobilized and assigned to an active command to the time they are demobilized and return to their normal reserve status. As a result, reservists were deployed beyond their scheduled return dates and were not able to take the leave to which they were entitled prior to the expiration of their orders. Reserve officials said that this happened because replacement personnel had not arrived in time to relieve the reservists and the active commands were not willing to send the deployed reservists home until replacements had arrived. In many cases, Air Force reserve component headquarters said they did not have visibility over the replacement personnel because these personnel were coming from active component units.

The Army experienced situations where the lack of visibility contributed to violations of service policies. During the current partial mobilization, the Assistant Secretary of the Army (Manpower and Reserve Affairs) issued a verbal policy that stated that units were not to be placed on alert for more than 90 days. The Army's force providers were to review the list of units on alert each month and determine whether the units needed to remain on alert. If the force providers needed to keep any units on alert beyond 90 days, they could request an extension from the Assistant Secretary of the Army (Manpower and Reserve Affairs). Table 2 shows that on March 28, 2003, 204 units had been on alert for more than 90 days and that 12 units—representing hundreds of Guard and Reserve members—had been on alert for more than a year. The Assistant Secretary of the Army (Manpower and Reserve Affairs) told us that he was not aware that the 12 units had been on alert for more than a year. He worked to resolve this matter as soon as we brought it to his attention.
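The monthly review the force providers were to perform lends itself to a simple illustration. The following is a minimal sketch with hypothetical units and alert dates; it assumes only that each unit's alert start date is recorded, and it flags the units whose alert periods exceed the 90-day policy limit, the check that, performed reliably, would have surfaced units on alert for more than a year.

from datetime import date

# Hypothetical alert records: unit -> date placed on alert.
alerts = {
    "Transportation company A": date(2002, 3, 1),
    "Military police company B": date(2003, 1, 15),
    "Engineer detachment C": date(2002, 12, 20),
}

def flag_overdue(alerts, as_of, limit_days=90):
    """Return units whose alert periods exceed the policy limit and
    therefore need an extension request or removal from alert."""
    return {unit: (as_of - start).days
            for unit, start in alerts.items()
            if (as_of - start).days > limit_days}

as_of = date(2003, 3, 28)  # the review date cited in table 2
for unit, days in sorted(flag_overdue(alerts, as_of).items(),
                         key=lambda item: -item[1]):
    note = " (more than a year)" if days > 365 else ""
    print(f"{unit}: {days} days on alert{note}")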
Some service components developed their own systems to gain visibility over their mobilized reservists. For example, the Navy adapted a system from the Marine Corps in February 2003 that provides all Navy mobilization officials with the capability to track reservists throughout the mobilization process. Commands now have visibility over the entire mobilization process and can monitor the status of reservists en route to their commands, including the reservists' current locations. Since implementing this system, the Navy has processed more than 8,000 mobilization orders and 6,000 demobilization orders. The Marine Corps implemented its system in 1994 to provide visibility over its reserve forces. This local area network-based system supports the continuous processing and tracking of newly mobilized Marines. However, this system is not integrated with the Navy's system, and data cannot be exchanged between the two systems. As a result, the Navy is not automatically made aware of requirements for Navy medical, religious, or other support personnel who are embedded in Marine Corps units when the associated Marine Corps units are mobilized.

Finally, key DOD and service guidance—including mobilization instructions and publications—had not been updated in all instances to reflect the modified mobilization process, leading to failures in communication and coordination between components and further reducing officials' visibility over the mobilization process. In some instances where DOD and the services did draft updated guidance to reflect the modified mobilization process, it was not clear to all mobilization officials which guidance to follow. The lack of updated guidance and the appearance of conflicting guidance resulted in situations where the components were not effectively coordinating and communicating their mobilization efforts with each other. OSD and the Joint Staff provide guidance and instructions on mobilization policy, the roles and responsibilities of mobilization officials, and mobilization planning and execution. Similar guidance and instructions are provided by the respective services for planning and executing mobilization within their respective commands. However, some of DOD's guidance failed to clearly identify the steps of the modified mobilization process, the roles and responsibilities of mobilization officials, and the flow of information.

While the Under Secretary of Defense (Personnel and Readiness) has issued several mobilization guidance memoranda since September 11, 2001, many of DOD's key mobilization instructions, directives, and publications have not been updated to reflect current changes to the mobilization process. For example, DOD's "Wartime Manpower Mobilization Planning Policies and Procedures" instruction has not been updated since 1986; DOD's "Activation, Mobilization, and Demobilization of the Ready Reserve" directive was last updated in 1995; and DOD's "Management of the Individual Ready Reserve (IRR) and the Inactive National Guard (ING)" directive was last updated in 1997. In addition, the Joint Staff had not updated its key mobilization guidance. The "Joint Doctrine for Mobilization Planning" publication was under revision when we completed our review, but the update to the 1995 publication had not yet been released.

Within the Air Force, the lack of clear and consolidated guidance hindered the mobilization process. The service's mobilization guidance was issued in 1994, and although several draft revisions to this guidance have been circulated since September 11, 2001, the guidance has yet to be officially updated. Officials in both the Air National Guard and the Air Force Reserve told us that they did not know whether they were supposed to follow the old "official" instruction or the revised (but unsigned) instructions. The lack of clear guidance led to situations where Air National Guard units had been mobilized without the knowledge of the Air National Guard headquarters' crisis action teams, consisting of officials responsible for matching requirements with available units and personnel. For example, on February 22, 2003, the Air Mobility Command mobilized the 163rd Air Refueling Wing at March Air Reserve Base. When we contacted the Air National Guard crisis action team 3 days later, the team was unaware that the 163rd had been mobilized.
According to a senior-level Air National Guard official, the Air Mobility Command had bypassed the Guard's crisis action team and directly notified the unit of the mobilization. According to this official, the Guard's crisis action team had been bypassed on mobilizations directed by both the Air Mobility Command and the Air Combat Command.

The lack of clear guidance for mobilizing reservists also slowed down the Army's mobilization process. On October 24, 2001, the Army issued guidance on the mobilization process. However, according to senior Army policy officials, the Army's initial personnel replacement policy was unclear. This led to cases where the Army Reserve would send a request for a requirement to fill an empty position through the entire mobilization process rather than simply attempting to fill the position with another qualified individual. Between September 2001 and June 2002, the Army Reserve submitted 567 requests for just one individual because the initial person selected could not fill the position. These requests slowed down the mobilization process as each request was reviewed. The Army recently drafted a policy to clarify its replacement procedures.

The Navy's failure to update its guidance on the delegation of mobilization authority led to redundant efforts. In June 2002, the Secretary of Defense, under the President's partial mobilization authority, delegated mobilization authority to the service secretaries and permitted further delegation only to civilian officials who were appointed by the President and confirmed by the Senate. However, the Navy had not updated its mobilization authority guidance, and consequently the Secretary of the Navy continued delegating mobilization authority to the Chief of Naval Operations and the Commandant of the Marine Corps, who in turn continued to approve mobilizations until 2003. When the Assistant Secretary of the Navy became aware that mobilization authority had been improperly delegated to military leaders within the Department of the Navy, he rescinded the delegated authority and reviewed and revalidated previously approved mobilizations, in addition to all new mobilization requests.

In some cases, the failure of mobilization guidance to define the roles and responsibilities of officials participating in the mobilization process also resulted in delays. For example, the Air Force found that the roles and responsibilities of its crisis action teams had not been adequately defined and that there was insufficient coordination between these crisis action teams during the planning and execution stages of the mobilization process. This led to different interpretations of the policies concerning the use of volunteers. Moreover, the lack of an established, coordinated process resulted in delays in getting policy, guidance, and tasks to the field. For example, whereas the requirement is to mobilize within 72 hours, there were instances where the mobilization process took 9 days.

The services have used two primary approaches—predictable operating cycles and formal advance notification—to provide time for units and servicemembers to prepare for upcoming mobilizations and deployments. All the services provide predictability to portions of their active forces through some type of standard operating cycle, but only the Air Force has a standard operating cycle that brings predictability to both its active and reserve forces.
The Army assigns priority categories to its units, and lower-priority units generally need extra training and preparation time prior to deploying. Advance mobilization notice, while important, does not provide the long lead times made possible by predictable operating cycles. The increased use of the Army's reserve forces heightens the need for predictability so these units and individuals can prepare for upcoming mobilizations and deployments.

The Air Force is the only service that uses a standard operating cycle—providing deployments of a predictable length that are preceded and followed by standard maintenance and training periods—to bring predictability to both its active and reserve forces. The Navy and the Marine Corps have used a variety of operating cycles to bring such predictability to portions of their forces. Likewise, the Army has used an operating cycle concept to bring predictability to a portion of its active force, under its Division Ready Brigade program.

Key officials throughout DOD have acknowledged the importance of predictability in helping reserve forces to prepare for mobilization and deployment. Predictability helps units anticipate (1) downtime, so they can schedule lengthy education and training for personnel and lengthy maintenance for equipment, and (2) the likely periods of mobilization or deployment, so they can focus on efforts to increase readiness, including last-minute training and the screening of medical, dental, and personnel records. Predictability helps individual reservists by giving them time to prepare their civilian employers and family members for their possible departures.

In the years following the 1991 Persian Gulf War, the Air Force Reserve and Air National Guard forces, which already had the highest tempos of any of DOD's reserve component forces, faced increasing tempos. In August 1998, the Air Force adopted the Expeditionary Aerospace Force concept to help it manage its commitments while reducing the deployment burden on its people. This concept established a standard 15-month operating cycle and divided the Air Force into 10 groups, each containing a mix of active, Air National Guard, and Air Force Reserve forces. Two groups were scheduled to deploy during each of the five 3-month increments within the standard 15-month operating cycle. However, because two groups contained more forces than were generally needed to cover worldwide contingency operations, and because the predictable cycles provided reservists with months of advance notice, the Air Force Reserve and the Air National Guard were able to rely on volunteers to meet significant portions of their requirements, thus avoiding large-scale involuntary mobilizations.

While the predictability offered by the Air Force's standard operating cycle has proved beneficial during "steady state" operations, the Expeditionary Aerospace Force concept is not yet able to deal with large and rapid surges in requirements. When the concept was first implemented, Air Force officials stated that the expeditionary concept would not be used to deploy forces to a major war prior to 2007. In the months immediately following the September 11th attacks and during the buildup for—and execution of—the 2003 war in Iraq, the Expeditionary Aerospace Force operating cycles broke down. For example, personnel with certain high-demand skills were involuntarily mobilized for longer than the intended 3 months—up to 2 full years, in some cases.
However, for much of 2002, the Air Force used its operating cycles, and it has a plan to return to normal 15-month operating cycles by March 2004.

The Army prioritizes its units, and lower-priority units generally need extra training and preparation time prior to deploying. The Army allocates human capital and other resources using a tiered resourcing system that is based on the placement of units in existing operation plans. Units that are identified as the first to mobilize and deploy are resourced at the highest level. Units identified for later deployment are placed in subsequently lower resourcing tiers, based on their planned deployment dates. A unit's resource tier affects its priority with respect to (1) recruiting and filling vacancies, (2) full-time staffing, (3) filling equipment needs, (4) maintaining equipment, (5) obtaining access to schools and training seats, and (6) funding for extra drills. Consequently, lower-priority units need more time to prepare for mobilization and deployment.

The Army's resourcing strategy is a cost-effective means of maintaining the Army's reserve forces when those forces have long lead times to mobilize. However, a large number of reserve forces were quickly mobilized—from fewer than 30,000 on January 1, 2003, to over 150,000 on March 26, 2003—to respond to the rapid surge in requirements for operations related to Operation Iraqi Freedom and the global war on terrorism. Because existing operation plans had not accurately identified all mobilization requirements, a number of lower-priority units were mobilized with relatively little advance notice. For example, five transportation companies containing 976 reservists were alerted on February 9, 2003, and told to arrive at their mobilization stations by February 14, 2003. On January 20, 2003, four other lower-priority Army National Guard companies, with over 1,000 reservists, were alerted and told to report to their mobilization stations by January 27, 2003. If these units had been able to plan for their mobilizations and deployments based on a standard operating cycle, they might have been able to complete some of their mobilization requirements during normally scheduled training periods prior to their mobilizations.

Despite the large number of lower-priority units within the Army National Guard and the Army Reserve, the Army does not have a standard operating cycle concept to provide predictability to its reserve forces. Without such a concept, the Army's opportunities to provide extra training and preparation time to its reserve forces, particularly those with low priorities, are limited.

OSD established a goal of providing reservists with at least 30 days' notice prior to mobilization when operationally feasible, but such advance notice does not provide the longer lead times made possible by predictable operating cycles. Nonetheless, OSD's advance notice policy was written in recognition of the benefits of such notice to individual reservists. The Army, lacking a standard operating cycle to provide predictability for its reservists, strives to provide its reservists with official written orders 30 days in advance of mobilizations in accordance with DOD's policy. However, in the early days following September 11, 2001, this level of advance notice was often not possible because reservists were required immediately. In the weeks and months that followed, advance notice increased.
Army data covering the mobilizations of over 6,400 personnel between June and August of 2002 showed that 83 percent of the personnel had 4 or more weeks of advance notice. However, advance notice dropped again in the weeks leading up to Operation Iraqi Freedom. During the first 15 days of March 2003, 95 percent of the Army units that were mobilized received less than 30 days of advance notice, and 8 percent of the units received less than 72 hours of advance notice. Much of this short notice is attributable to the extra time that was required to validate and approve requirements under the modified mobilization process. While 30 days of advance notice is clearly beneficial to individual reservists, it does not provide the longer lead times made possible by predictable operating cycles. As discussed earlier, such cycles allow reserve units, which typically drill only once every 30 days, to schedule their training and maintenance so the units' readiness will build as the mobilization time approaches.

While always important, predictability and preparation times are likely to become even more important when the pace of reserve operations is high. Figure 3 shows the shift that occurred in July 2002, when the number of Army reservists on active duty exceeded the number of Air Force reservists on active duty. The figure also shows the dramatic increase in Army mobilizations in 2003. During calendar year 2002, the Army had an average of about 30,000 reserve component members mobilized each week. By February 12, 2003, the Army had more than 110,000 reservists mobilized, and mobilizations peaked in March 2003, when more than 150,000 of the 216,811 reservists mobilized were members of the Army National Guard or the Army Reserve. On June 18, 2003, over 139,000 Army reservists were still mobilized, and the Army Manpower and Reserve Affairs office projected that mobilizations would remain high at least through the end of 2004.

Given the Army's ongoing commitments in Iraq, the Balkans, and Afghanistan, as well as at home, many of its reserve component forces will likely face the same types of high operational tempos that Air National Guard and Air Force Reserve forces faced in the 1990s. As described above, the Air Force has effectively used predictable operating cycles to help prepare its reserve units and individuals for mobilization and deployment and to mitigate the negative factors associated with high operational tempos. However, the Army does not employ such operating cycles for its reserve forces, thus leaving those forces with limited time to prepare for the increased mobilization and deployment demands facing them.

After September 11, 2001, mobilizations were hampered because about one-quarter of the Ready Reserve was not readily accessible. Some Selected Reserve members could not be mobilized due to a lack of training. Furthermore, the services lack information that is needed to make full use of the IRR. Finally, OSD and service policies reflect a reluctance to use the IRR, resulting in situations where Ready Reserve forces were not readily available for mobilization or deployment.

In fiscal year 2002, most of the military's approximately 880,000 Selected Reserve members were available for mobilization and deployment, but over 70,000 Selected Reserve members had not completed the individual training that is required prior to deploying.
By law, members of the armed forces are not permitted to deploy outside the United States and its territories until they have completed the basic training requirements of the applicable military services. The law further stipulates that in time of a national emergency (such as the one in effect since September 11, 2001), the basic training period may not be less than 12 weeks, except for certain medical personnel. The over 70,000 Selected Reserve members who were not deployable in fiscal year 2002 included personnel who had entered the service and were awaiting their initial active duty training, personnel who were awaiting the second part of a split initial active duty training program, and reservists who were still participating in initial active duty training programs. Each year between fiscal years 1997 and 2002, 7 to 10 percent of Selected Reserve members were not deployable because they had not completed their required initial training.

While most members of the Selected Reserve had met the initial active duty training requirements in fiscal year 2002 and were therefore available for mobilization, a portion of these personnel belonged to units that would have required lengthy periods of unit training before they would have been deployable. In particular, the reserve forces from the Army's bottom two resourcing categories generally require lengthy postmobilization training periods before they are deployable. Because both the Presidential Reserve Call-up and partial mobilization authorities prevented the services from mobilizing reservists specifically for training, the Army could not use many of its tier three and four Guard and Reserve units to meet requirements that had to be filled immediately. On April 10, 2003, DOD proposed that Congress change portions of the United States Code to allow the military departments to order reservists to active duty for up to 90 days of training in order to meet deployment standards.

The services lack the vital information necessary to fully use their IRR pools of over 300,000 pretrained individual reservists. Many of the IRR members were inaccessible because the services did not have valid contact information (addresses or phone numbers) for these individuals. Moreover, the services' use of three primary access methods—exit briefings, questionnaires, and screenings—did not obtain the results necessary to gain and maintain access to their IRR members. Finally, the services have not developed results-oriented goals and performance measures to improve the use of their primary methods to access IRR members.

The services could not access many IRR members because they did not have valid addresses or phone numbers for the members. For example, in April 2003, the Army estimated that it had inaccurate addresses for more than 40,400 of its IRR members. When the services were able to contact their IRR members and obtain the vital information necessary to use their IRR pools, exemptions and delays often limited the services' ability to fully use these personnel. For example, in February 2003, the Army sent mobilization orders to 345 IRR members, but 164 of these reservists requested and were granted exemptions for specific reasons, such as medical issues, so they did not have to deploy, and another 35 were granted delays in their reporting dates.

The services' use of their three primary IRR access methods did not obtain the results necessary to gain and maintain full access to their IRR members.
These methods include (1) briefings provided to members when they leave active duty or a drilling reserve position; (2) questionnaires to verify basic member information, such as contact information; and (3) 1-day screenings to verify member fitness for mobilization.

First, the services brief the members when they leave active duty or a Selected Reserve position. These briefings are designed to make the individuals aware of their responsibilities as members of the IRR. However, mobilized reservists we spoke with said that IRR responsibilities had not been clearly explained during exit briefings when they left active duty. For example, Marine Corps reservists stated that the separation briefings did not provide the detail necessary for them to fully understand their commitment and responsibilities when entering the IRR. They stated that individuals conducting these briefings should emphasize that reservists entering the IRR must keep their reserve component informed of specific changes, including their home address, marital status, number of dependents, civilian employment, and physical condition. They added that reservists assigned to the IRR need to know that they may volunteer for active duty assignments to refresh or enhance their military skills.

Next, the services send the members questionnaires to verify basic information—such as current addresses, marital status, and physical condition—to ascertain whether the reservists are available immediately for active duty during a mobilization. However, response rates to the questionnaires have been low, as shown in table 3. The services attributed the low response rates, in part, to incorrect mailing addresses, as indicated by the questionnaires returned as undeliverable. During fiscal year 2002, for example, the Air Force stated that 12 percent of the questionnaires mailed out were returned as undeliverable. The Air Force is the only service that specifically tracks undeliverable rates, but the Navy estimated a 30 to 40 percent undeliverable rate and the Army estimated that approximately 30 percent of its questionnaires were returned as undeliverable. The Coast Guard has not measured the number of questionnaires returned as undeliverable. Although the Marine Corps did not send out questionnaires in fiscal year 2002 and could not provide documented response rates for prior years, a Marine Corps official indicated that the Corps had experienced about a 10 percent undeliverable rate in previous years, but he was unable to provide any data to support the claim. According to this official, most of the returned questionnaires had been mailed to junior enlisted personnel, including lance corporals, corporals, and sergeants, who appeared to change residences more frequently than senior enlisted personnel or officers.

The services have taken some specific steps to correct bad addresses and improve servicemember reporting of required mobilization-related information. Specifically, the Army, the Navy, and the Marine Corps use commercial contractors to try to update inaccurate address information. For the last 4 years, a contractor has regularly matched the bad addresses in the Army's entire personnel database against a credit bureau's address database. For over 10 years, the Army has used another contractor to update a small number of addresses, one at a time.
Despite these efforts, the Army still had over 40,000 bad addresses in its database as of April 2003, and it recently contracted with its second contractor to do batch updates rather than one-at-a-time updates. The Marine Corps has just started using its contractor. Finally, the Army and the Coast Guard have implemented Web-based systems that encourage IRR members to update critical contact information on the Internet. According to an official representing the Naval Reserve Personnel Center, the Navy has also started to create a Web-based screening questionnaire to better track IRR members. However, these efforts are not linked to a results-oriented management framework that establishes specific goals to improve access to accurate addresses and identifies the resources and performance measures necessary to ensure success.

Finally, the services order a small number of their IRR members to participate in a 1-day screening event at a specific site to verify that they are fit and available for mobilization. The screening events focus on a specific number of IRR members to verify their physical existence, condition, and personal contact data. Even though the total number of IRR members ordered to report for screening during a fiscal year is relatively small, the services have met with limited success, as the screening event participation rates in table 4 indicate.

As indicated in table 4, the Army and the Air Force have not conducted screenings since 2000 and 2001, respectively. An Army Personnel Command After Action Report concluded that screenings should not be conducted until clear objectives are established and realistic cost and benefit assessments are completed. The Air Force also decided not to conduct screening events. Thus, these two services are not using one of their three primary methods to gain and maintain access to their IRR members. Furthermore, table 4 shows that the participation rates are relatively low. The services attributed the low screening event participation rates to their inability to contact members of the IRR because of incorrect addresses; to IRR members who were excused because of stated conflicts involving work, vacation plans, religious issues, or physical disabilities, among others; and to members who ignored orders and avoided participation in the screening events.

The services do not have results-oriented goals and performance measures to improve their use of the three primary methods to access IRR members. Specifically, the services have concentrated their efforts on exit briefings, questionnaires to update critical information, and periodic screening events. However, they have not focused on the results of those activities, as evidenced by persistent low response rates to questionnaires and low screening event participation rates. By focusing on the execution of these activities rather than their results, the services have not (1) established objective, quantifiable, and measurable performance goals to improve the results of their three primary access efforts; (2) established a basis for comparing actual program results with those goals in order to develop performance indicators that track progress toward results-oriented goals; or (3) described the resources and means required to verify and validate measured values.

OSD and service policies have discouraged the use of the IRR because IRR members do not participate in any regularly scheduled training and thus are not regularly paid.
The policies are also intended to avoid the negative effects of mobilization on individual IRR members. For example, the Under Secretary of Defense for Personnel and Readiness provided guidance, dated July 19, 2002, to the services that emphasized the use of volunteers before involuntarily mobilizing reservists to minimize the effects of mobilization on the lives of the reservists, their families, and their employers.

Policies intended to avoid the negative effects on individual reservists may be disruptive to all reservists as well as to entire units, because they contribute to situations where individual mobilization requirements are filled with personnel from reserve units, thus creating personnel shortages within the units that had supplied the reservists and affecting the units' readiness to mobilize and deploy. For example, in its reluctance to use the IRR, the Army filled many of its individual mobilization requirements with personnel from reserve units. In doing so, the Army created personnel shortages within the units that had supplied the reservists. In some cases, the Army later had to locate and transfer replacement personnel into these units when the units were mobilized, thus transferring several unit personnel as a result of a single individual requirement. Specifically, the Army mobilized a combat support hospital unit that was 142 individuals, including the commanding officer, short of its authorized strength of 509 personnel. To increase the hospital unit's strength to an acceptable level for mobilization, the Army took a commanding officer and other needed personnel from four reserve units. By taking this course of action, the Army immediately degraded the mission capability and readiness of the four affected units. The Army compounded this negative effect when it later mobilized the already significantly degraded unit that gave up its commanding officer to the hospital unit.

Further, the reluctance of one service to use the IRR can affect other services. For example, the Air Force's reluctance to access any of its more than 44,000 IRR members has left the responsibility for guarding Air Force bases to over 9,000 Army National Guard unit personnel. According to a senior Air Force official, the Air Force did not even consider using its own IRR pool. Because the Army National Guard volunteered for the mission, the Air Force did not consider mobilizing any of its 3,900 IRR members who held security force specialty codes.

About 300,000 of the 1.2 million National Guard and Reserve personnel have been called to active duty since September 11, 2001. They fought on the front lines in Iraq; tracked down terrorists throughout Asia and Africa; maintained the peace in the Balkans, Afghanistan, and now Iraq; and participated in a wide range of domestic missions. However, the process to mobilize reservists had to be modified and contained numerous inefficiencies. Existing operation plans did not adequately address the mobilization requirements needed to deal with terrorist attacks and overseas requirements. We recognize that some threats are impossible to predict, but until the combatant commanders identify all of the mobilization requirements that have evolved since September 11, 2001—and create or update their operation plans as necessary to account for these requirements—DOD risks the continued need for additional management oversight and coordination between officials to fill mobilization requirements, thus slowing the mobilization effort and making it less efficient.
DOD officials also did not have visibility over the entire mobilization process. Specifically, without the ability to capture the readiness of personnel and other resources within the small units that were frequently needed by combatant commanders, the Army and the Navy will continue to face difficulties in their efforts to assemble the forces needed to meet joint organizational requirements. Furthermore, until all of the services develop fully integrated automated systems that provide for the seamless transfer of reservists' information between reserve and active components, the components will continue to rely on cumbersome workarounds to obtain the data needed to track the length of reservists' mobilizations or their deployment status. In addition, until the services update key mobilization instructions, notices, and publications to reflect the modified mobilization process, DOD and the services risk continued mobilization slowdowns and duplication of efforts.

All of the services provide predictability to portions of their active forces through some type of standard operating cycle, but only the Air Force has a standard operating cycle that brings predictability to both its active and reserve forces. Moreover, the Army's reserve forces face increasing use to meet operational requirements. However, without a standard operating cycle concept to help increase predictability for its units, the Army risks mobilizing units and individuals that are unprepared for deployment.

Finally, the services have limited access to portions of the Ready Reserve and are thus forced to spread requirements across the remaining reserve force, leading to longer or more frequent deployments. Specifically, the services' use of their primary IRR access methods—exit briefings, questionnaires, and screenings—did not obtain the results necessary to gain and maintain access to their members. Until the services develop results-oriented goals and performance measures to improve the use of their primary methods to access IRR members, the services will be unable to systematically identify opportunities to better access their IRR members for mobilization. Moreover, OSD and service policies have discouraged the use of the IRR in order to avoid the negative effects on individual IRR members. However, until the services review and update their IRR policies to take into account the nature of the mobilization requirements and the types of reservists who are available to fill the requirements, the services will risk continued disruption to units that provide individual personnel rather than mobilizing IRR members.

We are making several recommendations to enhance the overall efficiency of the reserve mobilization process.
Specifically, we recommend that the Secretary of Defense direct

the Chairman of the Joint Chiefs of Staff to identify all of the mobilization requirements that have evolved since September 11, 2001, and create or update operation plans as necessary to account for these requirements;

the Secretaries of the Army and the Navy to capture readiness information on the resources within all the units that are available to meet the tailored requirements of combatant commanders so that these resources will be visible to key mobilization officials within DOD, the Joint Staff, and the service headquarters;

the Under Secretary of Defense for Personnel and Readiness, in conjunction with the Assistant Secretary of Defense for Reserve Affairs, to develop a single automated system or fully integrated automated systems that will provide for the seamless transfer of reservists' information, regardless of whether the reservists are in an active or reserve status;

the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, and the Assistant Secretary of the Air Force for Manpower and Reserve Affairs to update their applicable mobilization instructions, notices, and publications;

the Secretary of the Army to develop a standard operating cycle concept to help increase predictability for Army reserve units;

the service secretaries to develop and use results-oriented performance metrics to guide service efforts to gain and maintain improved information on IRR members; and

the service secretaries to review and update their IRR policies to take into account the nature of the mobilization requirements as well as the types of reservists who are available to fill the requirements.

In written comments on a draft of this report, DOD generally concurred with our recommendations. The department specifically concurred with our recommendations to (1) create or update operation plans as necessary to account for mobilization requirements that have evolved since September 11, 2001, (2) develop an automated system to provide for the seamless transfer of reservists' information, (3) update mobilization notices and publications, (4) develop a standard operating cycle to increase predictability for Army Reserve and National Guard units, (5) develop and use results-oriented performance metrics to gain and maintain information on IRR members, and (6) update IRR policies to take into account the nature of mobilization requirements and the types of reservists who are available to fill the requirements.

DOD partially concurred with our recommendation that the Army and the Navy capture readiness information on the resources within all units that are available to meet the tailored requirements of combatant commanders so that these resources will be visible to key officials within DOD. DOD stated that the Army and the Navy fully support capturing relevant information in the DOD readiness reporting system but that combatant commanders will need to establish resource requirements, including tailored mission requirements. We agree that improvements in readiness reporting should be closely linked to efforts to more clearly define requirements. DOD also stated that the Army is currently developing and implementing a system to provide visibility on readiness issues in support of the combatant commanders. We did not evaluate this system because it was not fully implemented during our review.
DOD also provided technical comments from the Joint Staff, and we received technical comments from the Coast Guard. These technical comments were incorporated in the final draft as appropriate. DOD's comments are reprinted in appendix II.

We performed our work between September 2002 and June 2003 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Chairman of the Joint Chiefs of Staff; the Secretary of Transportation; and the Commandant of the Coast Guard. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-5559 or stewartd@gao.gov or Brenda S. Farrell at (202) 512-3604 or farrellb@gao.gov. Others making major contributions to this report are listed in appendix III.

To evaluate whether the Department of Defense (DOD) and the services followed their existing operation plans when mobilizing reserve forces after September 11, 2001, we reviewed and analyzed a small group of requests for forces from the combatant commanders and discussed differences between planned and actual requirements with the mobilization officials responsible for validating and approving mobilization requirements. To find out how the services screen and fill requirements, as well as their procedures for turning generic "capability"-type requirements into actual unit and personnel requirements, we met with, and collected and analyzed data from, a variety of active and reserve component offices within each of the services. Specifically, we met with officials from the following offices or commands: Department of the Army, Army Operations Center; Office of the Chief, Army Reserve; Army National Guard, Headquarters; U.S. Army Forces Command, Fort McPherson, Georgia; U.S. Army Reserve Command, Fort McPherson, Georgia; Department of the Air Force, Headquarters; Air National Guard, Headquarters; Air National Guard Readiness Center; Air Mobility Command, Scott Air Force Base, Illinois; Air Force Reserve Command, Robins Air Force Base, Georgia; Air and Space Expeditionary Force Center, Langley Air Force Base, Virginia; Navy Personnel Command, Millington, Tennessee; Commander Naval Forces Command, New Orleans, Louisiana; U.S. Marine Corps Manpower and Reserve Affairs, Headquarters; Marine Forces Reserve, Headquarters, New Orleans, Louisiana; U.S. Coast Guard, Headquarters; and U.S. Coast Guard Atlantic Area Maintenance Logistic Command, Norfolk, Virginia.

We reviewed our prior work on risk management and issues related to combating terrorism. We met with RAND Corporation officials to discuss and coordinate ongoing work related to the requests for forces. We also met with the Assistant Secretaries of the Army, the Navy, and the Air Force, who are responsible for approving mobilization orders.

To determine the extent to which responsible officials had visibility over the entire mobilization process, we reviewed sections of the United States Code, Executive Orders, Secretary of Defense memoranda, Joint Staff publications, and service instructions related to mobilization. We also met with senior and key mobilization officials involved with the various phases of the mobilization process to document their roles and responsibilities and collect data about the process.
We observed a 2-1/2-day DOD symposium in November 2002, where senior military and civilian officials came together to review the entire mobilization process. We reviewed relevant GAO reports and reports from other audit and inspection agencies. We also met with Army Audit Agency and Air Force Audit Agency officials. We reviewed the services' detailed flowcharts, which documented the mobilization process from different service perspectives. We also discussed and observed the operation of the classified and unclassified automated systems that are being used to track mobilized units and individuals, as well as mobilization requirements.

The Office of the Assistant Secretary of Defense for Reserve Affairs served as our primary source for aggregate personnel and mobilization data. However, data from the early days and weeks following September 11, 2001, are not reliable. Further, the services captured mobilization data differently over time, making it difficult to aggregate the data. To present the data consistently, our figures display data beginning with January 2002.

To evaluate the services' approaches to provide predictability to reservists subject to mobilization and deployment, we met with officials from the Air Force offices that were responsible for the development and implementation of the rotational Air Expeditionary Force concept and analyzed data that documented the successes and challenges that the program had experienced since September 11, 2001. We discussed the 30-day advance notice goal with service officials and with officials from the Office of the Secretary of Defense (OSD), which had issued the goal. We also discussed efforts to increase advance warning or predictability with officials from the Army, the Navy, and the Marine Corps and, where data were available, compared alert dates to mobilization dates.

To gain first-hand knowledge about the effects of mobilizations on individual reservists, we visited a number of sites where reservists were deployed or were undergoing mobilization processing and training. At these sites, we collected mobilization data, obtained copies of mobilization processing checklists, and observed the preparations for deployment that take place after reservists have been mobilized. Specifically, we met with officials from the offices or commands listed below: Army Headquarters, I Corps, Fort Lewis, Washington; 4th Brigade, 91st Division (Training Support), Fort Lewis, Washington; 2122nd Garrison Training Support Brigade, Fort Lewis, Washington; 2122nd Garrison Support Unit, North Fort Lewis, Washington; Soldier Readiness Processing Site, Fort Lewis, Washington; Soldier Readiness Processing Site, Fort McPherson, Georgia; Central Issue Facility, Fort Lewis, Washington; Navy Mobilization Processing Site, Millington, Tennessee; Navy Mobilization Processing Site, San Diego, California; Marine Corps Mobilization Processing Center, Mobilization Support Battalion, Camp Pendleton, California; 452nd Air Mobility Wing, March Air Reserve Base, California; and Coast Guard Integrated Support Command, Portsmouth, Virginia. While at these sites, we interviewed individual and unit reservists who had been mobilized, as well as the active duty, reserve, and civilian officials who were conducting the mobilization processing and training.
At the mobilization processing stations, we observed reservists getting medical, legal, and family support briefings; having their personnel, medical, and dental records screened and updated; and receiving inoculations, combat equipment, camouflage clothing, Geneva Convention Cards, identification tags, and the controlled access cards that have replaced laminated identification cards. We also observed weapons qualification training.

To determine the extent to which Ready Reserve forces were available for mobilization, we reviewed sections of the United States Code and OSD and service policies on the use of the Individual Ready Reserve (IRR). We collected and analyzed longitudinal data on the sizes of different segments of the Ready Reserve. We examined the data for trends, specifically focusing on the IRR and the portion of the Selected Reserve that was still in the training pipeline. We also collected and analyzed data from the commands that are responsible for managing the IRR, specifically the U.S. Army Reserve Personnel Command, St. Louis, Missouri; the Naval Reserve Personnel Center, New Orleans, Louisiana; the Air Reserve Personnel Center, Denver, Colorado; the Marine Corps Reserve Support Command, Kansas City, Missouri; and the Coast Guard Personnel Command, Washington, D.C. Officials from these commands also provided data on IRR members that we analyzed to determine (1) response rates to questionnaires to verify basic member information and (2) participation rates at 1-day screening events to verify member fitness for mobilization.

We conducted our review from September 2002 through June 2003 in accordance with generally accepted government auditing standards.

James R. Bancroft, Larry J. Bridges, Michael J. Ferren, Chelsa L. Kenney, Irene A. Robertson, and Robert K. Wild also made significant contributions to the report.
On September 14, 2001, President Bush proclaimed that a national emergency existed by reason of the September 11, 2001, terrorist attacks. Under section 12302 of title 10, United States Code, the President is allowed to call up to 1 million National Guard and Reserve members to active duty for up to 2 years. GAO was asked to review issues related to the call-up of reservists following September 11, 2001. GAO examined (1) whether the Department of Defense (DOD) followed existing operation plans when mobilizing forces, (2) the extent to which responsible officials had visibility over the mobilization process, and (3) approaches the services have taken to provide predictability to reservists. GAO also determined the extent to which the Ready Reserve forces, which make up over 98 percent of nonretired reservists, were available.

About 300,000 of the 1.2 million National Guard and Reserve personnel have been called to active duty since September 11, 2001. They fought on the front lines in Iraq; tracked terrorists throughout Asia and Africa; maintained the peace in the Balkans, Afghanistan, and now Iraq; and participated in a wide range of domestic missions. However, DOD's process to mobilize reservists after September 11 had to be modified and contained numerous inefficiencies. Existing operation plans did not fully address the mobilization requirements needed to deal with the terrorist attacks or uncertain overseas requirements. For example, no previous requirements called for the extended use of National Guard and Reserve members to fly combat air patrols over the nation's capital and major cities. Because DOD could not rely on existing operation plans to guide its mobilizations, it used a modified process that relied on additional management oversight and multiple layers of coordination, which resulted in a process that was slower and less efficient than the traditional process. Under the modified process, the Secretary of Defense signed 246 deployment orders to mobilize over 280,000 reservists, compared with the fewer than 10 deployment orders needed to mobilize over 220,000 reservists during the 1991 Persian Gulf War.

DOD did not have visibility over the entire mobilization process primarily because it lacked adequate systems for tracking personnel and other resources. DOD's primary automated readiness reporting system could not adequately track the personnel and other resources within the small units that were frequently needed. Also, visibility was lost because some services' active and reserve systems for tracking personnel were incompatible, resulting in ad hoc coordination between active and reserve officials. Both groups often resorted to tracking mobilizations with computer spreadsheets. In addition, some reservists were deployed beyond dates specified in their orders, or stayed on alert for more than a year and were never mobilized, because officials lost visibility.

The services have used two primary approaches—predictable operating cycles and advance notification—to provide time for units and personnel to prepare for mobilizations. All the services provide predictability to portions of their forces through some type of standard operating cycle, but only the Air Force has a standard operating cycle that brings predictability to both its active and reserve forces. The Army prioritizes its units, and lower-priority units generally need extra training and preparation time before deploying.
Yet, since September 11, a number of lower-priority units have been mobilized with relatively little advance notice. Despite the large number of lower-priority units within the Army Guard and Reserve, the Army does not have a standard operating cycle to provide predictability to its reserves. Without such a concept, the Army's opportunities to provide extra training and preparation time to its reserve forces are limited.

Mobilizations were hampered because one-quarter of the Ready Reserve was not readily available for mobilization. Over 70,000 reservists could not be mobilized because they had not completed their training requirements, and the services lacked information needed to fully use the 300,000 pretrained IRR members.
Under section 219 of the Immigration and Nationality Act, as amended, the Secretary of State, in consultation with the Secretary of the Treasury and the Attorney General, is authorized to designate an organization as a foreign terrorist organization (FTO). For State to designate an organization as an FTO, the Secretary of State must find that the organization meets three criteria:

1. It is a foreign organization.

2. The organization engages in terrorist activity or terrorism, or retains the capability and intent to engage in terrorist activity or terrorism.

3. The organization's terrorist activity or terrorism threatens the security of U.S. nationals or the national security of the United States.

Designation of a terrorist group as an FTO allows the United States to impose certain legal consequences on the FTO, as well as on individuals who associate with or knowingly provide support to the designated organization. It is unlawful for a person in the United States or subject to the jurisdiction of the United States to knowingly provide "material support or resources" to a designated FTO, and offenders can be fined or imprisoned for violating this law. In addition, representatives and members of a designated FTO, if they are not U.S. citizens, are inadmissible to and, in certain circumstances, removable from the United States. Additionally, any U.S. financial institution that becomes aware that it has possession of or control over funds in which a designated FTO or its agent has an interest must retain possession of or control over the funds and report the funds to Treasury's Office of Foreign Assets Control.

In addition to making FTO designations, the Secretary of State can address terrorist organizations and terrorists through other authorities, including listing an individual or entity that engages in terrorist activity under Executive Order 13224 (E.O. 13224). E.O. 13224 requires the blocking of property and interests in property of foreign persons the Secretary of State has determined, in consultation with the Attorney General and the Secretaries of the Departments of Homeland Security and the Treasury, to have committed or to pose a significant risk of committing acts of terrorism that threaten the security of U.S. nationals or the national security, foreign policy, or economy of the United States. E.O. 13224 blocks the assets of organizations and individuals designated under the executive order. It also authorizes the blocking of assets of persons determined by the Secretary of the Treasury, in consultation with the Attorney General and the Secretaries of State and Homeland Security, to assist in; sponsor; or provide financial, material, or technological support for, or financial or other services to or in support of, designated persons, or to be otherwise associated with those persons.

In practice, when State designates an organization as an FTO, it also concurrently designates the organization under E.O. 13224. Once State designates an organization under E.O. 13224, Treasury is able to make its own designations under E.O. 13224 of other organizations and individuals associated with or providing support to the organization designated by State. These designations allow the U.S. government to target organizations and individuals that provide material support and assistance to FTOs.

State has developed a six-step process for designating foreign terrorist organizations.
State’s Bureau of Counterterrorism (CT) leads the designation process for State, and other State bureaus and agency partners are involved in the various steps. While the number of FTO designations has varied annually since the first 20 FTOs were designated in 1997, as of December 31, 2014, 59 organizations were designated as FTOs. FTO designation activities are led by CT, which monitors the activities of terrorist groups around the world to identify potential targets for designation. When reviewing potential targets, CT considers not only terrorist attacks that a group has carried out but also whether the group has engaged in planning and preparations for possible future acts of terrorism or retains the capability and intent to carry out such acts. CT also considers recommendations from other State bureaus, federal agencies, and foreign partners, among others, and selects potential target organizations for designation. For an overview of agencies and their roles in the designation process, see appendix II. After selecting a target organization for possible designation, State uses a six-step process it has developed to designate a group as an FTO (see fig. 1). Step 1: Equity check—The first step in CT’s process is to consult with other State bureaus, federal agencies, and the intelligence community, among others, to determine whether any law enforcement, diplomatic, or intelligence concerns should prevent the designation of the target organization. If any of these agencies or other bureaus has a concern regarding the designation of the target organization, it can elect to place a “hold” on the proposed designation, which prevents the designation from being made until the hold is lifted by the entity that requested it. The equity check is the first step where an objection to a designation can be raised; however, in practice, a hold can be placed at any step in the FTO designation process prior to the Secretary’s decision to designate. Step 2: Administrative record—As required by law, in support of the proposed designation, CT is to prepare an administrative record, which is a compilation of information, typically including both classified and open source information, demonstrating that the target organization identified meets the statutory criteria for FTO designation. Step 3: Clearance process—The third step in CT’s process is to send the draft administrative record and associated documents to State’s Office of the Legal Adviser and then to Justice and Treasury for review and approval of a final version to submit to the Secretary of State. For clearance, Justice and Treasury are to review the draft administrative record prepared by State and may suggest that State make changes to the document. The interagency clearance process is complete once Justice and Treasury provide State with signed letters of concurrence indicating that the administrative record is legally sufficient. CT is then to send the administrative record to other bureaus in the State Department for final clearance. Step 4: Secretary of State’s decision—Materials supporting the proposed FTO designation are to be sent to the Secretary of State for review and decision on whether or not to designate. The Secretary of State is authorized, but not required, to designate an organization as an FTO if he or she finds that the legal elements for designation are met. Step 5: Congressional notification—In accordance with the law, State is required to notify Congress 7 days before an organization is formally designated. 
Step 6: Federal Register notice—State is required to publish the designation announcement in the Federal Register and, upon publication, the designation is effective for purposes of penalties that would apply to persons who provide material support or resources to designated FTOs.

As of December 31, 2014, there were 59 organizations designated as FTOs, including al Qaeda and its affiliates, Islamic State of Iraq and the Levant (ISIL), and Boko Haram. See appendix III for the complete list of FTOs designated as of December 31, 2014. The number of FTO designations has varied annually since the first FTOs were designated in 1997. State designated 13 groups between 2012 and 2014. Figure 2 shows the number of organizations designated by year of designation, as of December 31, 2014.

According to State officials and our review of agency documents, State considered information and input provided by other State bureaus and federal agencies for all 13 designations made between 2012 and 2014. State considered this input during the first three steps in its designation process: conducting the equity check, compiling the administrative record, and obtaining approval in the clearance process. During our review of the 13 FTO designations between 2012 and 2014, officials from the Departments of Defense, Homeland Security (DHS), Justice, and the Treasury, and the Office of the Director of National Intelligence (ODNI) reported that State considered their input when making designations. Specifically, we found that State considered information during the first three steps in the FTO designation process, including the following:

Step 1: Equity check—According to State officials, regional bureaus at State and other agencies provided input to CT during the equity check step by identifying, when warranted, any law enforcement, diplomatic, or intelligence equities that would be jeopardized by the designation of the target organization. Officials from Defense, DHS, Justice, Treasury, and the intelligence community also confirmed that they provided input during the equity check. According to State officials, other bureaus and agencies participating in the equity check included the Central Intelligence Agency, the National Counterterrorism Center, the National Security Agency, and the National Security Council Counterterrorism staff.

Step 2: Administrative record—Agencies provided classified and unclassified materials to State to support the draft administrative record. For example, officials from ODNI told us they provide an assessment and intelligence review, at the request of State, for any terrorist organization that is nominated for FTO designation. U.S. intelligence agencies may also provide information to State during the equity check and during the compilation of the administrative record to support the designation. Otherwise, State has direct access to the disseminated intelligence of other agencies and does not need to separately request such information, according to CT officials.

Step 3: Clearance—In accordance with the law, Justice and Treasury review the draft administrative record for legal sufficiency and provide their input to State before the administrative record is finalized. Officials from Treasury and Justice told us that State considered their input during the clearance process for the administrative record for the 13 FTO designations we examined. This consultation culminates in and is documented through letters of concurrence in support of each FTO designation signed by Treasury and Justice.
In all 13 FTO designations that we reviewed, Treasury and Justice issued signed letters of concurrence.

The U.S. government penalizes designated FTOs through three key consequences. First, the designation of an FTO triggers a freeze on any assets the organization holds in a financial institution within the United States. Second, the U.S. government can criminally prosecute individuals who provide material support to an FTO, as well as impose civil penalties. Third, FTO designation imposes immigration restrictions upon members of the organization and individuals who knowingly provide material support or resources to the designated organization. Over the period of our review, we found that U.S. agencies imposed all three consequences.

U.S. persons are prohibited from conducting unauthorized transactions or having other dealings with or providing services to designated FTOs. U.S. financial institutions that are aware that they are in possession of or control funds in which an FTO or its agent has an interest must retain possession of or maintain control over the funds and report the existence of such funds to Treasury. As of December 31, 2013, which is the date for the most recently published Terrorist Assets Report, the U.S. government had blocked funds related to 7 of the 59 currently designated foreign terrorist organizations, totaling more than $22 million (see table 1). As of December 2013, there were no blocked funds reported to Treasury related to the remaining 52 designated FTOs. According to Treasury, the amounts reported as blocked by the U.S. government change over the years because of several factors, including forfeiture actions, reallocation of assets to another sanctions program, or the release of blocked funds consistent with sanctions policy. Funds shown in table 1 are blocked by the U.S. government pursuant to terrorism sanctions administered by Treasury, including FTO sanctions regulations and global terrorism sanctions regulations. The FTO-related funds blocked by the United States are only funds held within the United States and do not include any assets and funds that terrorist groups may hold outside U.S. financial institutions. However, according to Treasury officials, while FTO designation exposes and isolates individuals and organizations and denies them access to U.S. financial institutions, in some cases FTOs may also be sanctioned by the United Nations or other international partners, an action that may block access to the global financial system.

Designation as an FTO triggers criminal liability for persons within the United States or subject to U.S. jurisdiction who knowingly provide, or attempt or conspire to provide, "material support or resources" to a designated FTO. Violations are punishable by a fine and up to 15 years in prison, or life in prison if the death of a person results. Furthermore, it is also a crime to knowingly receive military-type training from or on behalf of an organization designated as an FTO at the time of the training. Between January 1, 2009, and December 31, 2013, which is the most recent date for which data are available, over 80 individuals were convicted of terrorism or terrorism-related crimes that included providing material support or resources to an FTO or receiving military-type training from or on behalf of an FTO. The penalties for these convictions varied and included some combination of imprisonment, fines, and asset forfeiture.
For example, individuals convicted of terrorism or terrorism-related crimes, which included providing material support to an FTO, received sentences ranging from time served to life in prison plus 95 years. In addition, sentencing for convicted individuals included fines up to $125,000, asset forfeiture up to $15 million, and supervised release for up to life.

Justice may also bring civil forfeiture actions against assets connected to terrorism offenses, including the provision of material support to FTOs. U.S. law authorizes, among other things, the forfeiture of property involved in money laundering, property derived from or used to commit certain foreign crimes, and the proceeds of certain unlawful activities. Once the government establishes that an individual or entity is engaged in terrorism, it may bring forfeiture actions by proceeding directly against the assets (1) of an individual, entity, or organization engaged in planning or perpetrating crimes of terrorism against the United States or U.S. citizens; (2) acquired or maintained by any person intending to support, plan, conduct, or conceal crimes of terrorism against the United States or U.S. citizens; (3) derived from, involved in, or used or intended to be used to commit terrorism against the United States or U.S. citizens or their property; or (4) of any individual, entity, or organization engaged in planning or perpetrating any act of international terrorism. According to Justice officials, there have not been any civil forfeiture actions related to FTOs. However, Justice officials said their department routinely investigates and takes actions against financial institutions operating in the United States that willfully violate the International Emergency Economic Powers Act. They added that Justice has, for example, imposed fines and forfeitures and installed compliance monitors in cases where banks have violated terrorism-related sanctions programs. Furthermore, according to Justice officials, there are numerous other investigative and prosecutorial tools available to the United States to confront terrorism and terrorism-related conduct, disrupt terrorist plots, and dismantle foreign terrorist organizations.

FTO representatives and members who are not U.S. citizens, as well as non-citizen individuals who knowingly provide material support or resources to a designated organization, are inadmissible to, and in some cases removable from, the United States under the Immigration and Nationality Act. However, exemptions or waivers can be granted in certain circumstances, according to State and DHS officials. For example, DHS may grant eligible individuals exemptions in cases where material support was provided under duress. Individuals found inadmissible or deportable without an appropriate waiver or exemption under these provisions are also barred from receiving most immigration benefits or relief from removal.

State and DHS are responsible for enforcing different aspects of the immigration restrictions and ensuring that inadmissible individuals without an appropriate waiver or exemption do not enter the United States. State consular officers at U.S. embassies and consulates are responsible for determining whether an applicant is eligible for a visa to travel to the United States. In instances where a consular officer determines that an applicant has engaged or engages in terrorism-related activity, the visa will be denied.
According to State Bureau of Consular Affairs data, between fiscal years 2009 and 2013, the most recent period for which data are available, 1,069 individuals were denied nonimmigrant visas and 187 individuals were denied immigrant visas on the basis of involvement in terrorist activities and associations with terrorist organizations.

DHS develops and deploys resources to detect; assess; and, if necessary, mitigate the risk posed by travelers during the international air travel process, including when an individual applies for U.S. travel documents; reserves, books, or purchases an airline ticket; checks in at an airport; travels en route on an airplane; and arrives at a U.S. port of entry. For example, upon arrival in the United States, all travelers are subject to inspection by U.S. Customs and Border Protection to determine whether they are eligible for admission under U.S. immigration law. According to U.S. Customs and Border Protection data, between fiscal years 2009 and 2014, the most recent period for which data were available, more than 1,000 individuals were denied admission to the United States for various reasons and were identified as having potential connections to terrorism or terrorist groups, including being a member of or supporting an FTO. In addition, U.S. Immigration and Customs Enforcement is responsible for deporting individuals determined to be engaged in terrorism or terrorism-related activities. Between fiscal years 2013 and 2014, the most recent period for which data are available, Immigration and Customs Enforcement officials indicated that 3 individuals determined to be associated with or to have provided material support to designated FTOs were removed from the United States.

Further, U.S. Citizenship and Immigration Services is responsible for the adjudication of immigration benefits. An individual who is a member of a terrorist organization or who has engaged or engages in terrorist-related activity, as defined by the Immigration and Nationality Act, is deemed inadmissible to the United States and is ineligible for most immigration benefits. The law grants both the Secretary of State and the Secretary of Homeland Security unreviewable discretion to waive the inadmissibility of certain individuals who would be otherwise inadmissible under this provision, after consulting with each other and the Attorney General. Additionally, according to DHS officials, an exemption may be applied to certain terrorist-related inadmissibility grounds if the activity was carried out under duress, or under certain circumstances, such as the provision of material support in the form of medical care. Such exemptions, if applied favorably, may allow an immigration benefit to be granted. DHS officials stated that these exemptions are extremely limited.

Terrorist groups, such as al Qaeda and its affiliates, Boko Haram, and ISIL, continue to be a threat to the United States and its foreign partners. The designation of FTOs, which can result in civil and criminal penalties, is an integral component of the U.S. government’s counterterrorism efforts. State’s process for designating FTOs considers input and information from several key U.S. agency stakeholders, and allows U.S. agencies to impose consequences on the organizations and individuals that associate with or provide material support to FTOs. Such consequences help U.S.
counterterrorism efforts isolate terrorist organizations internationally and limit support and contributions to those organizations.

We provided draft copies of this report to the Departments of Defense, Homeland Security, Justice, State, and the Treasury, as well as the Office of the Director of National Intelligence, for review and comment. The Department of Homeland Security provided technical comments, which we incorporated as appropriate. The Departments of Defense, Justice, State, and the Treasury, as well as the Office of the Director of National Intelligence, had no comments.

If you or your staff have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. GAO staff who made key contributions to this report are listed in appendix IV.

This report examines the Department of State’s (State) process for designating foreign terrorist organizations (FTO) and the consequences resulting from designation. We report on (1) the process for designating FTOs, (2) the extent to which State considers input from other agencies during the FTO designation process, and (3) the consequences that U.S. agencies impose as a result of an FTO designation.

To identify the steps in the FTO designation process, we reviewed the legal requirements for designation and the legal authorities granted to State and other U.S. agencies to designate FTOs. In addition, we reviewed State documents that identified and outlined State’s process to designate an FTO, from the equity check through publishing the designation in the Federal Register. We interviewed State officials in the Bureau of Counterterrorism to confirm and clarify the steps in the FTO designation process and to identify which agencies are involved in the process and at what steps they are involved. We also interviewed officials from the Departments of Defense, Homeland Security, Justice (Justice), and the Treasury (Treasury), as well as officials from the intelligence community, to determine each agency’s level of participation in the process.

To assess the extent to which State considered information from other agencies in the designation process, we interviewed officials from the Departments of Defense, Homeland Security, Justice, State, and the Treasury, as well as officials from the intelligence community, to determine when information is provided to State on organizations considered for FTO designation, as well as the nature of that information. We defined consideration as any action of State to request, obtain, and use information from other agencies, as well as letters of concurrence from those agencies. We reviewed both Justice’s and Treasury’s letters of concurrence for all 13 designations made between 2012 and 2014. We also interviewed State officials to determine how information provided by other agencies is considered during the FTO designation process.

To identify the consequences U.S. agencies impose as a result of FTO designation, we reviewed the legal consequences agencies can impose under U.S. law, including the Immigration and Nationality Act, as amended. Specifically, we reviewed the funds and assets related to FTOs that are blocked by U.S. financial institutions, as reported by the Office of Foreign Assets Control (OFAC) of the Department of the Treasury.
We reviewed the publicly available Terrorist Assets Reports published by Treasury for calendar years 2008 through 2013, which identify the blocked assets reported to Treasury related to FTOs, as well as organizations designated under additional Treasury authorities. U.S. persons are prohibited from conducting unauthorized transactions or having other dealings with or providing services to the designated individuals or entities. Any property or property interest of a designated person that comes within the United States or into the possession or control of a U.S. person is blocked and must be reported to OFAC. The Terrorist Assets Reports identify these reported blocked assets held within U.S. financial institutions that are targeted with sanctions under any of the three OFAC-administered sanctions programs related to terrorist organizations designated as FTOs, specially designated global terrorists, and specially designated terrorists under various U.S. authorities. We verified the totals reported in each of the reports and identified the funds blocked for organizations designated as FTOs. We also interviewed Treasury officials to discuss the reports of blocked assets and the changes in the assets across years. We did not analyze blocked funds for organizations that were designated under other authorities or by the United Nations or international partners.

To assess the reliability of Treasury data on blocked funds, we performed checks of the year-to-year data published in the Terrorist Assets Reports for inconsistencies and errors. When we found minor inconsistencies, we discussed them with relevant agency officials and clarified the reporting data before finalizing our analysis. We determined that these data were sufficiently reliable for the purposes of our report.

We also reviewed the Department of Justice National Security Division Chart of Public/Unsealed Terrorism and Terrorism Related Convictions to identify the individuals convicted of and sentenced for providing material support or resources to an FTO or receiving military-type training from or on behalf of an FTO between January 1, 2009, and December 31, 2013, which was the period for which the most recent data were available. Designation as an FTO introduces the possibility of a range of civil penalties for the FTO or its members, as well as criminal liability for individuals engaged in certain prohibited activities, such as individuals who knowingly provide, or attempt or conspire to provide, “material support or resources” to a designated FTO. We reviewed Justice data on public/unsealed convictions only, covering January 1, 2009, through December 31, 2013. For the purposes of our report, we analyzed the Justice data on the convictions and sentencing associated with individuals who were convicted of knowingly providing, or attempting or conspiring to provide, “material support or resources” to a designated FTO. We also reviewed the data to identify the individuals who were convicted of knowingly receiving military-type training from or on behalf of an organization designated as an FTO at the time of the training. The data did not include defendants who were charged with terrorism or terrorism-related offenses but had not been convicted either at trial or by guilty plea, as of December 31, 2013.
The data included defendants who were determined by prosecutors in Justice’s National Security Division Counterterrorism Section to have a connection to international terrorism, even if they were not charged with a terrorism offense. To assess the reliability of the convictions data, we performed basic reasonableness checks on the data and interviewed relevant agency officials to discuss the convictions and sentencing data. We determined that these data were sufficiently reliable for the purposes of our report.

To identify the immigration restrictions and penalties imposed on individuals associated with or who provided material support to a designated foreign terrorist organization, we analyzed available data from State Bureau of Consular Affairs reports on visa denials between fiscal years 2009 and 2013, the U.S. Customs and Border Protection enforcement system database on arrival inadmissibility determinations between fiscal years 2009 and 2014, and information from U.S. Immigration and Customs Enforcement on deportations between fiscal years 2013 and 2014. The Immigration and Nationality Act, as amended, establishes the types of visas available for travel to the United States and what conditions must be met before an applicant can be issued a particular type of visa and granted admission to the United States. For the purposes of this report, we primarily included the applicants deemed inadmissible under section 212(a)(3) of the Immigration and Nationality Act, which includes ineligibility based on terrorism grounds. We did not include the national security inadmissibility codes that were not relevant to terrorism. In each instance, we analyzed the data provided by the agencies and performed basic checks to determine the reasonableness of the data. We also spoke with relevant agency officials to discuss the data to confirm the reasonableness of the totals presented for individuals denied visas, denied entry into the United States, or deported from the United States for association with a designated foreign terrorist organization. We determined that these data were sufficiently reliable for the purposes of our report.

We conducted this performance audit from April 2015 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Designated Foreign Terrorist Organizations, as of December 31, 2014

1. Abu Nidal Organization (ANO)
2. Abu Sayyaf Group (ASG)
3. Aum Shinrikyo (AUM)
4. Basque Fatherland and Liberty (ETA)
5. Gama’a al-Islamiyya (Islamic Group) (IG)
6. HAMAS
7. Harakat ul-Mujahidin (HUM)
8. Hizballah
9. Kahane Chai (Kach)
10. Kurdistan Workers Party (PKK) (Kongra-Gel)
11. Liberation Tigers of Tamil Eelam (LTTE)
12. National Liberation Army (ELN)
13. Palestine Liberation Front (PLF)
14. Palestinian Islamic Jihad (PIJ)
15. PFLP-General Command (PFLP-GC)
16. Popular Front for the Liberation of Palestine (PFLP)
17. Revolutionary Armed Forces of Colombia (FARC)
18. Revolutionary Organization 17 November (17N)
19. Revolutionary People’s Liberation Party/Front (DHKP/C)
20. Shining Path (SL)
21. al Qaeda (AQ)
22. Islamic Movement of Uzbekistan (IMU)
23. Real Irish Republican Army (RIRA)
24. Jaish-e-Mohammed (JEM)
25. Lashkar-e Tayyiba (LeT)
26. Al-Aqsa Martyrs Brigade (AAMB)
27. al Qaeda in the Islamic Maghreb (AQIM)
28. Asbat al-Ansar (AAA)
29. Communist Party of the Philippines/New People’s Army (CPP/NPA)
30. Jemaah Islamiya (JI)
31. Lashkar i Jhangvi (LJ)
32. Ansar al-Islam (AAI)
33. Continuity Irish Republican Army (CIRA)
34. Islamic State of Iraq and the Levant (formerly al Qaeda in Iraq) (designated 12/17/2004)
35. Libyan Islamic Fighting Group (LIFG) (designated 12/17/2004)
36. Islamic Jihad Union (IJU)
37. Harakat ul-Jihad-i-Islami/Bangladesh (HUJI-B)
38. al-Shabaab
39. Revolutionary Struggle (RS)
40. Kata’ib Hizballah (KH)
41. al Qaeda in the Arabian Peninsula (AQAP)
42. Harakat ul-Jihad-i-Islami (HUJI)
43. Tehrik-e Taliban Pakistan (TTP)
44. Jundallah
45. Army of Islam (AOI)
46. Indian Mujahedeen (IM)
47. Jemaah Anshorut Tauhid (JAT)
48. Abdallah Azzam Brigades (AAB)
49. Haqqani Network (HQN)
50. Ansar al-Dine (AAD)
51. Boko Haram
52. Ansaru
53. al-Mulathamun Battalion
54. Ansar al-Shari’a in Benghazi
55. Ansar al-Shari’a in Darnah
56. Ansar al-Shari’a in Tunisia
57. Ansar Bayt al-Maqdis
58. al-Nusrah Front
59. Mujahidin Shura Council in the Environs of Jerusalem (MSC)

In addition to the contact listed above, Elizabeth Repko (Assistant Director), Claude Adrien, John F. Miller, and Laurani Singh made key contributions to this report. Ashley Alley, Martin de Alteriis, Tina Cheng, and Lynn Cothern provided technical assistance.
The Secretary of State, in consultation with the Secretary of the Treasury and the Attorney General, has the authority to designate a foreign organization as an FTO. Designation allows the United States to impose legal consequences on the FTO or on individuals who support the FTO. As of June 1, 2015, 59 organizations were designated as FTOs.

GAO was asked to review the FTO designation process. This report provides information on the process by which the Secretary of State designates FTOs. Specifically, this report addresses (1) the process for designating FTOs, (2) the extent to which the Department of State considers input from other agencies during the FTO designation process, and (3) the consequences that U.S. agencies impose as a result of an FTO designation. To address these objectives, GAO reviewed and analyzed agency documents and data, and interviewed officials from the Departments of Defense, Homeland Security, Justice, State, and the Treasury, as well as the intelligence community. Separately, GAO also reviewed the duration of the designation process for FTOs designated between 2012 and 2014. That information was published in April 2015 in a report for official use only. GAO is not making recommendations in this report.

The Department of State (State) has developed a six-step process for designating foreign terrorist organizations (FTO) that involves other State bureaus and agency partners in the various steps. State's Bureau of Counterterrorism (CT) leads the designation process for State. CT monitors terrorist activity to identify potential targets for designation and also considers recommendations for potential targets from other State bureaus, federal agencies, and foreign partners. After selecting a target, State follows a six-step process to designate a group as an FTO, including steps to consult with partners and draft supporting documents. During this process, federal agencies and State bureaus, citing law enforcement, diplomatic, or intelligence concerns, can place a “hold” on a potential designation, which, until resolved, prevents the designation of the organization. The number of FTO designations has varied annually since 1997, when 20 FTOs were designated. As of December 31, 2014, 59 organizations were designated as FTOs, with 13 FTO designations occurring between 2012 and 2014.

State considered input provided by other State bureaus and federal agencies for all 13 of the FTO designations made between 2012 and 2014, according to officials from the Departments of Defense, Homeland Security, Justice, State, and the Treasury, and the Office of the Director of National Intelligence, and GAO review of agency documents. For example, State used intelligence agencies' information on terrorist organizations and activities to support the designations.

U.S. agencies reported enforcing FTO designations through three key legal consequences—blocking assets, prosecuting individuals, and imposing immigration restrictions—that target FTOs, their members, and individuals who provide support to those organizations. The restrictions and penalties that agencies reported imposing vary widely. For example, as of 2013, Treasury had blocked about $22 million in assets related to 7 of the 59 designated FTOs.
USMS operations cover five broad mission areas, including prisoner security and transportation, which is overseen by its Prisoner Operations Division (POD). The POD at USMS headquarters is responsible for managing prisoner-related expenses, developing policy for district personnel when conducting prisoner-related operations, and supporting district activities to, among other things, identify cost-effective measures to house and care for prisoners. U.S. Marshals direct operations in 94 districts, and generally operate autonomously from headquarters.

USMS’s prisoner operations activities are funded through two separate appropriations: the Federal Prisoner Detention (FPD) appropriation and the Salaries and Expenses (S&E) appropriation. USMS uses FPD funding for the housing and care of federal prisoners in private, state, and local facilities. This appropriation also includes expenses related to prisoner transportation and medical care. The POD allocates funding from the FPD to district U.S. Marshals for their related prisoner costs, and is responsible for tracking the financial management of the FPD appropriation and monitoring district prisoner-related expenditures. USMS’s Office of Professional Responsibility, Compliance Review (OPR-CR) oversees the internal compliance review of USMS staff and division and district offices as well as the implementation of OMB Circular A-123, and ensures the integrity of the agency’s internal controls and the reliability of its financial reporting. OPR-CR is responsible for coordinating USMS’s assessments under the Federal Managers’ Financial Integrity Act (FMFIA), as well as planning and executing the A-123 assessments in support of management’s annual assertions of the organization’s internal controls effectiveness.

The primary drivers of USMS’s detention expenditures are the number of prisoners in USMS custody and the length of time they are held in detention. The average number of prisoners in USMS custody per day—the average daily population (ADP)—is directly influenced by, among other things, the activities and decisions of federal law enforcement, U.S. Attorneys, and the federal judiciary. For instance, as figure 1 demonstrates, USMS’s ADP in fiscal year 2015 was concentrated along the southwest border, reflecting law enforcement and prosecutorial priorities related to immigration. For a complete list of ADP by district for fiscal year 2015, see appendix II. Further, as figure 2 shows, USMS’s ADP peaked in fiscal year 2011 at 61,469, but fell to 51,670 in fiscal year 2015, a 16 percent decrease. According to USMS, this may be the result of factors such as reduced funding for federal law enforcement agencies, hiring freezes resulting from the sequestration that occurred in fiscal year 2013, and changes in prosecutorial practices and priorities stemming from the Attorney General’s Smart on Crime initiative, which is a set of actions directed at addressing DOJ’s ongoing issues related to prison overcrowding, costs, and recidivism.

USMS does not own or operate its own detention facilities. Instead it relies on existing federal, state, and local infrastructure, and to some extent on private contract facilities, to house USMS prisoners.
As such, USMS acquires bed space for prisoners through (1) use of beds at Federal Bureau of Prisons (BOP) facilities, for which USMS does not pay; (2) intergovernmental agreements (IGA) with state and local jurisdictions that have excess prison or jail bed capacity and with which USMS negotiates a daily rate for the use of a bed; and (3) private jail facilities with which USMS enters a fixed price contract based on a minimum number of prisoners it guarantees to house at a facility.

In fiscal year 2015, USMS expended about $1.20 billion in payments to state and local government and private detention facilities. As illustrated in figure 3, this accounted for about 86 percent of the total $1.40 billion USMS expended through its FPD appropriation. Such payments cover prisoner housing, including meals, clothes and linens, and other incidentals associated with providing care for prisoners in USMS custody. In addition to prisoner housing payments, USMS expended about $115 million on medical care in fiscal year 2015—about 8 percent of spending during the fiscal year. Such expenses include health care services, transportation costs for moving prisoners to offsite medical facilities, and the cost of external guards securing prisoners at these facilities. Transportation services constituted the third largest cost category, comprising an additional $53 million—4 percent of total costs in fiscal year 2015. The prisoner transportation category includes transportation services and guard costs associated with securing the prisoners during transportation. In addition, USMS spent about $24 million—or 2 percent of total costs in fiscal year 2015—on system-wide detention program expenditures, which include headquarters operations and information technology systems support. For more details on the trends in each of these cost areas, see appendix III.

As figure 4 illustrates, from fiscal years 2010 through 2012, FPD nominal costs increased from $1.41 billion to nearly $1.54 billion, an increase of about 9 percent over the two-year period. By fiscal year 2015, costs dropped slightly below fiscal year 2010 nominal costs, with expenditures at about $1.40 billion. USMS officials attribute the decrease in costs to the decrease in ADP, indicating fewer prisoners to house from fiscal year 2012 through fiscal year 2015.

To show changes in cost per prisoner, we adjusted the expenditures data to account for inflation for all 6 years. As figure 5 shows, our analysis of the inflation-adjusted FPD costs per prisoner—FPD costs divided by annual ADP—found that FPD per prisoner costs were highest in fiscal year 2015. USMS data show that ADP reached its peak in fiscal year 2011 at about 61,500 and has since dropped. USMS officials stated that per prisoner detention costs fluctuate for various reasons. For instance, USMS makes agreements with facilities based on future-year forecasts of ADP, including providing monthly minimum guaranteed costs for guaranteed space at certain facilities where there is an anticipated need for additional prisoner housing in the future. In years when ADP did not meet forecasted amounts, USMS paid guaranteed minimum amounts for fewer prisoners than projected, leading to higher costs per prisoner. Additionally, USMS officials stated that in some circumstances, USMS continued to use some of these facilities even though it might not have been the most cost-effective approach.
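The per prisoner measure used in figure 5 is a simple ratio of FPD expenditures to ADP. The short sketch below illustrates the arithmetic using the approximate nominal fiscal year 2015 figures cited in this report; it omits the inflation adjustment applied in our analysis, so its outputs are illustrative rather than official USMS figures.

```python
# Illustrative sketch of the FPD cost-per-prisoner ratio discussed above.
# Inputs are approximate nominal FY2015 figures cited in this report; the
# inflation adjustment used in the analysis is intentionally omitted.

fpd_expenditures = 1.40e9  # total FPD expenditures, FY2015 (approximate)
adp = 51_670               # average daily population, FY2015

cost_per_prisoner_year = fpd_expenditures / adp       # about $27,100
cost_per_prisoner_day = cost_per_prisoner_year / 365  # about $74

print(f"FPD cost per prisoner, FY2015: ${cost_per_prisoner_year:,.0f} per year")
print(f"FPD cost per prisoner, FY2015: ${cost_per_prisoner_day:,.2f} per day")
```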
USMS officials stated that they continue to use state and local government facilities in some districts to maintain relationships with law enforcement. This helps ensure that USMS can rely on these jurisdictions in future years for both prisoner operations and other operations requiring state and local cooperation, such as leveraging state and local law enforcement officials and resources to help capture fugitives. Specifically, the USMS officials explained that, if USMS does not help the jurisdictions to maintain their prisoner infrastructure, then some facilities in these jurisdictions would likely close. As a result, USMS would have fewer facilities available to detain federal prisoners in these jurisdictions, and fewer state and local personnel available to aid USMS with its other missions.

USMS’s total medical costs also rose from fiscal years 2010 to 2015. While the USMS annual ADP decreased during this time period, nominal medical expenditures increased by 30 percent, from $88 million to $115 million. USMS’s medical costs as a percentage of total prisoner costs also increased, from about 6 percent to 8 percent. USMS officials stated that medical costs can fluctuate widely regardless of the number of prisoners based on the number and type of procedures, which can affect the total costs expended in the FPD. USMS officials stated that USMS covered more expensive medical procedures, such as more heart, diabetic, and optical procedures, in 2015 than in 2010.

In general, districts with larger prisoner populations have higher costs than districts with smaller prisoner populations. Specifically, the 10 districts with the highest average ADP for fiscal year 2015 accounted for about 50 percent of the average daily detention population for USMS. These 10 districts also accounted for about 49 percent of total expenditures among all districts. In addition, we found that the 5 districts along the Southwest border with Mexico had both the highest ADP and the highest attributed costs among the districts. See figure 6 for a breakout of costs attributed to district operations. For a list of district costs for housing, medical, and transportation costs, see appendix II.

Our analysis shows that ADP does not entirely explain the cost trends among districts. For instance, among the 10 districts with the highest ADP, 4 of them account for only about 4 percent of total district costs among all 94 districts, while the other 6 districts account for almost 45 percent of total district costs. As figure 7 shows, this is in large degree because these 4 districts—the Southern District of New York, the District of Puerto Rico, the Southern District of Florida, and the Central District of California—rely heavily on federal facilities operated and paid for by BOP, not USMS, placing between 73 and 87 percent of the ADP in a BOP facility in a given year. The remaining 6 districts, however, rely less heavily on federal facilities—for which USMS does not pay—and more heavily on a mixture of IGA and private infrastructure to house their prisoners. Moreover, 5 of the 6 districts with the highest ADP that rely more heavily on state and local or private facilities are also located along the southwest border. USMS officials stated that demand for bed space in locations with the highest ADP—such as along the southwest border—often overtakes the capacity of federal, state, and local facilities. In response, USMS has entered into contracts with private facilities to meet the demand.
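The pattern above, in which districts that place most of their ADP in BOP facilities account for a small share of attributed costs, follows directly from the fact that USMS pays nothing for BOP beds. The following minimal sketch uses a hypothetical district ADP, BOP placement share, and per diem rate (none of these values come from USMS data) to show the effect.

```python
# Minimal sketch of why heavy reliance on BOP space lowers a district's
# attributed housing costs: USMS pays nothing for prisoners in BOP beds.
# The ADP, BOP shares, and per diem rate below are hypothetical.

def attributed_housing_cost(adp, bop_share, per_diem, days=365):
    """Approximate annual housing cost borne by USMS for one district."""
    paid_adp = adp * (1 - bop_share)  # only non-BOP beds are paid for
    return paid_adp * per_diem * days

# Two hypothetical districts with identical ADP but different BOP reliance:
high_bop = attributed_housing_cost(adp=1_000, bop_share=0.80, per_diem=90)
low_bop = attributed_housing_cost(adp=1_000, bop_share=0.05, per_diem=90)

print(f"80 percent of ADP in BOP facilities: ${high_bop:,.0f}")  # ~$6.6 million
print(f" 5 percent of ADP in BOP facilities: ${low_bop:,.0f}")   # ~$31.2 million
```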
According to USMS officials, while private facilities appear to be more expensive overall, most are located in districts where the costs of bed space are already more expensive than average because demand outstripped capacity. Thus, paying for private facility capacity requires paying higher costs.

In addition to identifying the 10 districts with the highest ADP for fiscal year 2015, we also identified the 10 districts with the highest costs per day—that is, prisoner housing, medical, and transportation costs directly attributable to each district divided by the average daily detention population—which are listed in table 1. Similar to our analysis of the districts with the highest ADP, whether a district ranks among those with the highest costs per day is likely affected by whether it can use a BOP facility to offset housing costs. In particular, as table 1 shows, of the 10 districts with the highest costs per day, none rely on BOP facilities to house more than 4 percent of their prisoner populations. Further, only 2 of the 10 districts rely on private facilities to house more than 10 percent of their prisoner populations.

USMS officials stated that other factors affect the variation of costs. For instance, officials explained that variations in the prevailing wage rates in a district greatly affect housing costs. USMS officials stated that wage rates in northeast districts and Alaska are higher than in other districts, such as those in the southeast. Further, officials noted that real estate costs in different areas of the country can greatly affect how much USMS must pay. For instance, USMS officials stated that districts with large metropolitan areas, such as Massachusetts and Maryland, pay higher real estate costs than locations that are more rural. Lastly, USMS officials stated that lower ADP in districts in more remote locations results in higher costs per ADP because some structural costs are shared among fewer prisoners. For instance, Alaska and Maine rank among the lowest ADPs on average, and their per day jail costs include higher indirect costs, such as maintenance of the prisoner facilities. Such variations may affect jail costs per day among all the districts to some extent; therefore, it is difficult to compare costs among the districts without considering such pressures on cost.

USMS has implemented a number of actions to manage costs and meet its strategic goal of optimizing detention operations, which it estimates have achieved cost savings in fiscal years 2010 through 2015. Specifically, USMS automated its detention management services, developed housing options intended to reduce costs, invested in alternatives to pre-trial detention to help reduce housing and medical expenditures, and improved its management of medical claims. Table 2 provides detail on the key cost saving initiatives that USMS has identified and USMS’s estimated total cost savings.

In addition to the above initiatives, officials explained that USMS has sought to avoid costs by increasing USMS’s use of federal facilities. Doing so allows USMS to decrease costs because, according to a USMS-BOP memorandum of understanding, BOP allocates and maintains detention bed space to house USMS’s prisoners, and USMS does not incur housing-related costs for the use of these federal spaces. USMS officials explained that they have not developed a cost savings estimate for the BOP bed space USMS uses because USMS does not consider its use of BOP facilities as a cost saving action.
Officials, however, noted that USMS monitors unused federal bed space and calculates additional costs USMS could avoid if districts were to use those unoccupied spaces. For example, USMS estimated it could have avoided an additional $21.6 million in costs if districts had utilized the unused BOP-allocated spaces in the Brooklyn federal detention facility in fiscal year 2015. However, according to USMS officials, operational limitations, such as a federal facility’s distance from assigned courthouses, hinder USMS’s ability to fully use all allocated spaces.

From fiscal years 2010 through 2015, USMS increased the percentage of its prisoner population housed in BOP facilities from about 18 percent of total ADP in fiscal year 2010 to about 19 percent in fiscal year 2015. Further, our analysis shows that USMS avoided between $321 million and $392 million in costs in fiscal years 2010 through 2015 that it would otherwise have incurred had it paid for the bed space it used at BOP federal facilities. In addition, our analysis found that the Department of Justice potentially saved $73 million in fiscal year 2015 by having USMS use allocated space at BOP facilities to house its prisoners instead of housing those prisoners at private facilities.

According to USMS’s congressional budget justification and USMS officials, the agency has realized approximately $858 million in total cost savings through the cost savings initiatives identified in table 2. However, based on our analysis of USMS’s cost savings estimates, discussions with USMS officials, and comparison of the estimates against Office of Management and Budget (OMB) and GAO guidance related to cost estimation, we found that approximately $654 million of USMS’s total cost savings estimate has limited reliability because five of USMS’s six cost savings estimates were not sufficiently comprehensive, accurate, consistent, or well-documented.

Specifically, based on OMB guidance and GAO guidance for assessing the reliability of computer-processed data, reliable cost estimates—such as USMS’s $858 million estimate—should be comprehensive, accurate, consistent, and well-documented. In particular, OMB guidance on conducting a cost-benefit analysis states that the analysis should include a comprehensive estimate of different types of benefits (such as cost savings) minus costs. OMB guidance further states that the analysis should be explicit about the underlying assumptions and key sources of uncertainty used to arrive at the estimates of future benefits and costs. Key data, models used in the analysis, and results of benefits and costs should be reported and well-documented to promote independent review and analysis. Further, according to guidance for assessing the reliability of computer-processed data, including estimates and projections, data are reliable when they are reasonably complete, accurate, and consistent (consistency being a subcategory of accuracy). Table 3 shows the extent to which USMS’s estimates were reliable and, where appropriate, the limitations of the estimates; details of our analysis of the estimates by cost saving action follow.

eIGA: Based on our analysis, USMS applied reasonable assumptions and used a reasonable methodology to reliably estimate $204.3 million in savings from the implementation of eIGA. Specifically, USMS calculated the difference in the “proposed” versus “negotiated” per diem rate for each intergovernmental agreement that was negotiated using this system.
Additionally, savings identified can be solely attributed to the implementation of the system because, prior to eIGA, USMS did not negotiate the per diem rate for housing prisoners.

ePMR: Based on our analysis, we found that USMS applied reasonable assumptions, but its estimate of $935,000 in cost savings during fiscal years 2011 through 2015 from the implementation of ePMR is not comprehensive. Officials said that as a result of implementing ePMR, USMS has avoided $187,000 in costs per year by not having to hire additional staff to manage medical claims. They based this estimate on the number of cases USMS headquarters managed in fiscal year 2011. However, we found that the number of medical cases USMS headquarters managed has increased since fiscal year 2011. As a result, USMS would have needed approximately 6 additional staff to manage the average number of medical claims in fiscal years 2011 through 2015. Further, USMS’s costs avoided over the five fiscal years would equal approximately $2.7 million, not $935,000. We found that USMS underestimated its ePMR cost savings because it excluded efficiencies and savings realized in subsequent years. OMB guidance recommends that agencies include a comprehensive analysis of benefits and costs. A savings estimate that includes savings realized in all 5 fiscal years could help USMS identify the full range of the program’s effect.

eDesignate: Based on our analysis, the cost savings estimate of $222 million for the implementation of eDesignate is not comprehensive because USMS may have double counted savings associated with the use of the system over time and included savings not attributable to the system. Specifically, USMS officials told us that eDesignate reduced the post-sentencing processing time for prisoners in USMS’s custody, decreasing prisoners’ average detention time and, thereby, USMS’s housing costs. Figure 8 illustrates the processing of sentenced prisoners using eDesignate.

To capture cost savings achieved after the use of eDesignate, USMS derived a baseline detention time using the average detention time from fiscal years 2008 through 2010—73.4 days—to which post-implementation average detention time could be compared; calculated the difference in average detention times between the baseline and monthly average detention time from fiscal years 2011 through 2015; and multiplied this difference by the average daily costs of housing its prisoners.

OMB guidance states that benefit-cost analysis should be based on incremental benefits and costs. Specifically, all sunk costs and benefits already realized should be ignored. However, our analysis shows that USMS may have double counted the time reductions and cost savings achieved over time by continuing to use the same 73.4-day baseline to calculate the change in detention time and cost savings achieved each year after the implementation of eDesignate. Specifically, as shown in table 4, the estimated amount of cost savings is greater when USMS continues to measure against the 73.4-day baseline instead of revising the baseline each year to account for reduced detention time achieved in the preceding years. For instance, if USMS used the change in annual average detention time to calculate cost savings for fiscal years 2011 through 2015, it would have estimated approximately $52 million versus $222 million. Thus, using this baseline may overstate the savings achieved from reduced detention time.
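The baseline issue shown in table 4 can be illustrated with a short sketch. The detention times, prisoner counts, and daily cost below are hypothetical rather than USMS figures; the point is structural: measuring every year against the fixed 73.4-day baseline counts the same reduction repeatedly, while a baseline that rolls forward counts only each year's incremental change.

```python
# Sketch of the fixed-baseline versus rolling-baseline issue described above.
# Detention times, prisoner counts, and the daily cost are hypothetical; only
# the 73.4-day baseline comes from the report.

baseline = 73.4  # FY2008-2010 average detention time, in days
yearly_avg = [72.0, 70.5, 70.0, 69.5, 69.0]  # hypothetical FY2011-2015 averages
prisoners_per_year = 50_000  # hypothetical sentenced prisoners processed
daily_cost = 80              # hypothetical average daily housing cost

fixed_total, rolling_total, prior = 0.0, 0.0, baseline
for avg in yearly_avg:
    fixed_total += (baseline - avg) * prisoners_per_year * daily_cost
    rolling_total += (prior - avg) * prisoners_per_year * daily_cost
    prior = avg

print(f"Savings measured against the fixed baseline:  ${fixed_total:,.0f}")
print(f"Savings measured against a rolling baseline:  ${rolling_total:,.0f}")
```

With these hypothetical inputs, the fixed baseline yields roughly $64 million while the rolling baseline yields roughly $17.6 million, even though the underlying change in detention time is identical.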
Further, USMS did not take into account any other factors that might also have affected a change in average detention time and, ultimately, savings estimates related to the system. For example, officials said that a BOP contract closure at a facility affected USMS housing of its prisoners and resulted in the high average detention time for fiscal year 2011.

In addition to double counting savings in its cost estimating methodology, USMS included savings not attributable to the implementation of the eDesignate system. Specifically, USMS calculated detention costs avoided for the total number of USMS’s prisoners instead of prisoners housed in non-BOP facilities. As discussed above, USMS derived a cost savings estimate for eDesignate by multiplying the reduction in prisoners’ detention time by the daily costs of housing the prisoners. However, USMS does not incur costs for USMS prisoners housed in BOP facilities, so calculating costs avoided for all prisoners in its savings estimate resulted in an overestimation.

USMS officials stated that they do not think that the $222 million in savings is overestimated. They said that USMS did not need to adjust the baseline to reflect incremental yearly changes in detention time and cost savings because it was estimating the costs USMS would have incurred without the implementation of eDesignate, not the impact of the system on prisoner processing time. OMB guidance, however, states that benefit-cost analyses should measure the incremental benefits and costs by omitting costs or benefits already realized. A baseline that adjusts to capture the actual change in average detention time would better capture incremental benefits and could help USMS identify events that affected detention time and more accurately estimate eDesignate’s effects and cost savings.

Chesapeake Detention Facility: USMS’s cost savings estimate of $53.6 million for the Chesapeake Detention Facility includes $13.6 million in transportation and medical costs avoided and $40 million in housing costs avoided as a result of USMS having guaranteed use of the Chesapeake Detention Facility. Though USMS conducted sensitivity analyses for its housing cost savings estimate, our analysis found that the estimate has limited reliability because it is not accurate.

First, USMS may have overestimated the cost savings associated with transportation and medical costs because it did not account for the fixed costs for medical and transportation already included in its payments for the Chesapeake Detention Facility. In particular, USMS used the local average costs for transporting USMS prisoners and the average daily medical costs per prisoner to estimate that it would have had to pay $13.6 million in transportation and medical costs if such costs were not included in the agreement with the facility. It identified the entire estimated transportation and medical costs avoided as the savings. However, this estimate may overstate the cost savings because USMS’s methodology did not account for an estimate of how much of the fixed costs it currently pays for the Chesapeake Detention Facility are attributable to transportation and medical costs. As previously noted, USMS pays a fixed cost for housing its prisoners at the Chesapeake Detention Facility, which includes medical and transportation services for USMS’s prisoners housed at the facility. If such costs were not included in the fixed costs USMS paid for the facility, USMS may have been able to negotiate a lower cost.
However, USMS’s methodology does not account for how the negotiated fixed costs for the facility would have changed if medical and transportation services were not included. Accounting for how the fixed costs would have changed would provide a more accurate estimate of the actual medical and transportation costs it did not have to pay as a result of the agreement. For example, if the fixed costs (including medical and transportation) that USMS pays for the Chesapeake Detention Facility are $20 million, and USMS estimates that it could have negotiated a fixed rate without medical and transportation of $18 million, then the estimate of fixed costs USMS currently pays that are attributable to medical and transportation costs is $2 million. If this were the case, then, after accounting for the $2 million currently attributable to medical and transportation costs, the cost savings would have equaled $11.6 million ($13.6 million less the $2 million) versus the entire $13.6 million estimate of costs avoided.

Second, USMS’s housing cost savings estimate for the Chesapeake Detention Facility is inaccurate because USMS inconsistently applied the inflation rate in its $40 million savings estimate. Specifically, USMS calculated savings using the difference between the cost of operating Chesapeake and the costs of not having the guaranteed use of the facility. To calculate the growth in costs over time for each scenario, USMS assumed a 3 percent inflation rate but applied the rate inconsistently. Specifically, it applied a 3 percent inflation rate once every 3 years for the change in costs to operate Chesapeake, but applied a 3 percent inflation rate every year for the change in housing costs if USMS did not have the guaranteed use of the facility. As a result of this inconsistency, USMS generally projected that the costs for operating the Chesapeake facility would be lower when compared to the costs of housing its prisoners if it did not have the guaranteed use of the facility, and this overestimated the cost savings achieved. Further, USMS assumed a 3 percent inflation rate every 3 years instead of using the general inflation rate, as is recommended by OMB guidance.

We found that USMS inaccurately estimated the medical and transportation costs avoided as a result of its guaranteed use of the Chesapeake facility because USMS did not prioritize the development of the estimate. Specifically, officials said USMS developed its housing estimate to show that acquiring the guaranteed use of Chesapeake was an economically sound housing decision, and any additional savings were secondary. As such, USMS did not focus on the medical and transportation costs that it avoided and developed the estimate in response to our inquiry. Additionally, USMS officials acknowledged that they mistakenly applied an inconsistent inflation rate, potentially resulting in an overestimation. They noted, however, that USMS assumed a 3 percent inflation rate instead of the general inflation rate in its estimate because that is the average rate officials have observed over time.

Improved Management of Medical Claims: We found that the $2.4 million in cost savings related to the improved management of medical claims and costs has limited reliability because it is not accurate or comprehensive. USMS’s reported savings consist of three categories: (1) $1.4 million in savings from effectively managing costs for prisoners receiving medical care; (2) $740,962 from denied claims; and (3) $279,360 from medical transport costs avoided.
We found that USMS’s cost savings related to effective management of prisoners’ medical care and denied claims may be inaccurate because USMS used the upper bound of cost ranges and average costs, respectively, to estimate the savings for each category. For savings related to effective management of prisoner medical care, in at least one quarter of its estimated savings, USMS reported a range of cost savings rather than a single estimate. For example, USMS estimated a range of $20,000 to $50,000 for costs avoided for a surgery. Because USMS used the upper bound of each range to estimate total costs avoided for the effective management of prisoners’ care, USMS may have overestimated its savings.

Similarly, USMS’s savings for denied claims may be inaccurate because it used average costs per approved claim instead of the actual costs of denied claims to estimate costs avoided. Specifically, USMS multiplied the number of denied claims by the average costs per medical claim it had approved to determine total costs avoided. However, we found that average costs per approved claim may not be a good proxy for costs per denied claim. According to officials, medical costs can vary widely according to each individual case. Such variations in costs can affect the average costs per claim and thus USMS’s calculated cost savings. For instance, USMS calculated average costs per claim of $220 and $503 in two quarters of fiscal year 2015. Applying these two different averages to 300 denied claims yields a large difference in estimated savings: approximately $66,000 and $150,900, respectively, a difference of nearly $85,000, with the high estimate more than double the low. As a result, using average costs per approved medical claim may not be representative of the actual costs for each denied claim and may under- or overstate actual savings.

Additionally, a review of the documents USMS provided us shows that USMS calculated savings for actions that overlapped two of the savings categories, thus potentially double counting some savings. For example, USMS claimed approximately $90,000 in costs avoided for effectively managing a prisoner medical case. However, this cost avoidance resulted from a denied claim. As a result, this single action would also be counted as savings in the denied claims category.

USMS officials noted that it would be work intensive for USMS to calculate costs avoided for denied claims using actual costs, given the volume of medical claims they receive, and that the average cost per claim can be quickly calculated and multiplied by the number of denied prisoner medical claims to facilitate cost savings reporting. We recognize that calculating actual costs may be challenging; however, USMS already uses actual costs to estimate the cost savings related to the effective management of prisoners’ medical care. Thus, USMS could use the same method to estimate savings for denied claims. Further, data reliability guidance states that estimates are accurate when recorded data reflect the actual underlying information. A more accurate and comprehensive savings estimate—calculating both the lower and upper bound of cost estimates, using actual costs for denied claims, and ensuring that savings are not double counted—could help USMS better determine the full impact of its action on its rising medical care costs.
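The sensitivity of the denied-claims estimate to the choice of average can be reproduced directly from the figures in the example above.

```python
# Reproducing the denied-claims sensitivity described above: applying two
# different quarterly average costs per approved claim to the same 300 denied
# claims yields very different savings figures, which is why an average is a
# poor proxy for the actual cost of each denied claim.

denied_claims = 300
avg_cost_low_quarter = 220   # average cost per claim in one FY2015 quarter
avg_cost_high_quarter = 503  # average cost per claim in another FY2015 quarter

low_estimate = denied_claims * avg_cost_low_quarter    # $66,000
high_estimate = denied_claims * avg_cost_high_quarter  # $150,900

print(f"Savings using the low quarterly average:  ${low_estimate:,}")
print(f"Savings using the high quarterly average: ${high_estimate:,}")
print(f"The high estimate is {high_estimate / low_estimate:.1f} times the low")
```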
Alternatives to Pre-Trial Detention: We also found that USMS’s cost savings estimate of approximately $375 million from the alternatives to pre-trial detention program for fiscal years 2010 through 2015 had limited reliability because USMS lacked adequate documentation to support the estimates, did not validate the estimates, and reported inconsistent savings estimates. As described earlier, the Administrative Office of the U.S. Courts (AOUSC) administers the alternatives to pre-trial detention program, which helps divert defendants from detention in USMS’s custody. According to the agreement between AOUSC and USMS, AOUSC is to provide USMS with a report that includes the number of prisoners who otherwise would have been detained, describes the types of services provided, and includes the total expenditure from USMS’s allocated funds. USMS officials reported that AOUSC had provided such reports, which also estimated the housing costs USMS avoided as a result of the program. However, USMS officials stated that they have not received reports with this information from AOUSC for fiscal years 2012 onward to support or corroborate USMS’s reported estimate of $67 million per year in savings for those years.

Our review of the fiscal years 2010 and 2011 reports we received from USMS, which included estimates of housing costs avoided, found that AOUSC aggregated some of the data to determine USMS detention costs avoided as a result of the program but did not specify the methodology for aggregating the data or the assumptions used to derive different factors in the estimate, as recommended by OMB guidance. As such, we cannot determine whether the method for estimating USMS cost savings is reasonable.

Additionally, USMS officials said that USMS did not verify AOUSC’s calculation of fiscal years 2010 and 2011 savings. USMS, however, reported savings for those and subsequent fiscal years in its congressional budget justifications. USMS officials told us that USMS extrapolated AOUSC’s estimates from fiscal year 2011 to report on more recent savings. Officials said that, generally, USMS gets a savings of $10 for every dollar AOUSC expends from USMS-allocated funds for the program. However, we found that since fiscal year 2011, AOUSC has expended less than $3 million of USMS-allocated funds, but USMS continued to report a $67 million per year cost savings, which was estimated based on AOUSC fiscal year 2011 expenditures of approximately $3 million. Further, USMS reported different savings figures for fiscal year 2011 ($44 million in one instance versus the $67 million it reported in its fiscal years 2013 and 2014 congressional budget justifications), indicating that the estimates are inaccurate. It is likely that USMS’s cost savings for the program have decreased from $67 million, given the decrease in use of allocated funds by AOUSC.

AOUSC acknowledged that, since a change in its staff in 2012, the reports it provided to USMS did not include information such as the number of prisoners who otherwise would have been detained, which would have been required to estimate USMS’s costs avoided, but instead provided detailed program expenditures to USMS for fiscal years 2012 onward in order to seek reimbursement. Similarly, USMS staff acknowledged that they had not sought reports from AOUSC that would have allowed them to calculate costs avoided, and had not verified any of AOUSC’s prior calculations.
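A rough consistency check, shown below, applies the officials' $10-per-$1 rule of thumb to the approximate expenditure figures in this report; the gap it exposes is one reason the carried-forward $67 million figure cannot be corroborated.

```python
# Sketch of a consistency check on the alternatives-to-detention savings.
# The $10-per-$1 ratio is the rule of thumb USMS officials described, and the
# expenditure and savings figures are approximations from this report.

savings_per_dollar = 10          # officials' rule of thumb
annual_expenditures = 3_000_000  # AOUSC expended less than this, per the report
reported_savings = 67_000_000    # per-year savings USMS continued to report

implied_savings = savings_per_dollar * annual_expenditures
print(f"Savings implied by the rule of thumb: ${implied_savings:,}")
print(f"Savings USMS reported per year:       ${reported_savings:,}")
# The reported figure is more than double the implied figure, and AOUSC's
# reports since fiscal year 2012 do not contain the data needed to verify it.
```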
Both AOUSC and USMS officials stated that they intend to communicate with each other to obtain the information detailed in the agreement, and USMS officials indicated that they plan to validate the cost savings in the future. However, USMS officials did not provide documentation or a timeframe in which they will do so. Further, OMB guidance states that it is potentially valuable for agencies to verify and determine whether anticipated benefits and costs for a program have been realized. This verification can be used to determine necessary corrections in the program and to improve future estimates of benefits and costs. Also, by ensuring that it has the complete and validated information necessary to estimate costs avoided, documenting its methodology, and ensuring that its estimates are consistent over time, USMS would be better able to report reliable costs avoided for the alternatives to detention program.

As described above, five of USMS’s cost savings estimates have limited reliability because the estimates were not sufficiently comprehensive, accurate, consistent, or well-documented. By developing reliable methods for estimating and validating cost savings—such as ensuring estimates are comprehensive, accurate, consistent, and adequately documented—USMS would be better positioned to assess the effectiveness of its cost savings actions and inform decision makers—including Congress—about these efforts.

USMS has several systems it uses to help it identify cost savings opportunities, including the following:

Strategic Plan. USMS’s 2012-2016 Strategic Plan helps guide the agency in fulfilling its mission and achieving its strategic goals. One such strategic objective is to provide for the safe, secure, humane, and cost-effective containment of its prisoners, and one of the performance goals it uses to achieve this objective is to hold detention and transportation costs at or below inflation. According to the strategic plan, one of the ways it seeks to meet this goal is by enabling effective and equitable allocation of district resources for transportation expenditures. For example, according to USMS POD officials, as a result of guidance in the strategic plan, they have implemented a process that allows them to reallocate resources at the district level for guard and transportation costs when unexpected costs are incurred by the districts. USMS initially allocates money each fiscal year across the 94 USMS districts, but in addition it sets aside separate funding to cover unexpected costs such as transporting ill prisoners outside of facilities where they are housed for further medical care. While USMS can anticipate that these events will occur, it cannot foresee which districts will incur these costs. Districts’ requests for additional funding beyond their fiscal year allocations are submitted via a supplemental funding request that is reviewed by POD, which then grants the request and provides the additional funding to the district. In addition, POD is currently developing a policy that will allow it to determine a methodology to more effectively and equitably distribute transportation resources across the districts. This initiative is expected to be rolled out in October 2017, according to USMS officials.

USMS guidance to districts. USMS’s Policy Directive 9.2 establishes how USMS districts will house prisoners in different types of facilities. Specifically, it states that districts must first use a BOP federal facility where there is space available, as USMS does not have to pay for these spaces.
In 2007, USMS signed a memorandum of understanding in which BOP allocated a certain amount of bed space to USMS prisoners at more than a dozen of its federal facilities. In fiscal year 2015, BOP housed approximately 10,000, or 19 percent, of USMS's prisoners in BOP facilities. Next, USMS districts must, according to the directive, use space available in state and local facilities for which USMS has established IGAs and a per diem amount to pay for each prisoner. Third, the guidance directs districts to use private facilities. In addition to this guidance, however, POD officials noted that they guide districts to consider private facilities with available space where USMS has a "guaranteed minimum" number of spaces it is paying for, before the districts consider state and local facilities (the IGAs). This is because, if USMS exceeds the guaranteed minimum in the contract, the contractor charges a dramatically reduced per diem rate for each detainee above the guaranteed minimum contract amount. The officials noted that fewer than 30 of USMS's districts use private facilities, and that private facilities account for the smallest percentage of facilities that USMS uses to house its prisoners. Our analysis confirmed this assessment, finding that only 21 districts used private facilities to house an ADP of at least 0.5 in each fiscal year from 2010 through 2015. However, our analysis of USMS detention data also found that some districts appeared to select private detention space over less costly federal spaces, in seeming contradiction of USMS guidance. For example, several districts that have access to one federal facility place a large number of prisoners in private facilities at a higher cost to their districts. POD officials stated that, given the number of factors U.S. Marshals must consider in placing prisoners in available bed spaces, it is not always feasible to use the available federal detention spaces. POD officials also told us that they provide additional guidance to U.S. Marshals in those districts that have access to or are the home district for private facilities. For example, according to the officials, the U.S. Marshal in each district must consider issues such as the security risk the prisoner poses, the prisoner's medical condition, the need to separate defendants on a particular case, and the need to keep prisoners close to the courthouse where they are making their appearances. In addition, even if USMS presents a prisoner to a BOP federal facility, BOP has the right to refuse to accept the prisoner; according to BOP, this is done only in cases where the facility cannot accommodate a particular prisoner due to medical or security issues. Overall, BOP officials told us they try to accommodate USMS, even in facilities where USMS has exceeded its allocation. Thus, while some districts make placement decisions that do not comport with the policy directive as written, these occurrences are infrequent, are limited to the minority of districts that have both private and federal prison spaces available to them, and reflect the additional guidance POD provides on where to place specific prisoners. 
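To illustrate the pricing structure POD officials described, the following sketch shows how a guaranteed-minimum contract behaves. The rates and bed counts are hypothetical and are not drawn from any actual USMS contract.

# A minimal sketch of the guaranteed-minimum pricing structure POD
# officials described. All rates and bed counts are hypothetical,
# chosen only for illustration; actual contract terms vary by facility.

def daily_private_facility_cost(prisoners, guaranteed_beds=500,
                                base_per_diem=80.0, marginal_per_diem=40.0):
    """Return the facility's hypothetical daily cost to USMS.

    USMS pays for the guaranteed minimum whether or not the beds are
    filled; prisoners above the minimum are billed at a reduced rate.
    """
    guaranteed_cost = guaranteed_beds * base_per_diem
    overage = max(0, prisoners - guaranteed_beds)
    return guaranteed_cost + overage * marginal_per_diem

print(daily_private_facility_cost(400))  # 40000.0 -- 100 unused beds still paid for
print(daily_private_facility_cost(500))  # 40000.0 -- minimum exactly met
print(daily_private_facility_cost(550))  # 42000.0 -- extra prisoners billed at $40, not $80

Under these assumed terms, a district pays the same whether it fills 400 or 500 beds, and each prisoner above the minimum costs half the base rate, which is consistent with POD's guidance to fill guaranteed-minimum private beds before turning to IGA facilities. 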
Scorecards. POD tracks district utilization of federal and private bed spaces through quarterly scorecards. According to USMS officials, they encourage districts to utilize federal and private bed space, and the quarterly scorecard system is their way of checking on districts' performance in cost-efficient bed space allocation. These scorecards reflect which private and federal facilities are being underutilized, and at what rate. The scorecard lists each federal and private facility, the USMS allocation or number of bed spaces for each facility, and the actual ADP in that facility (USMS's usage). Scorecards are color coded green, yellow, and red based on whether the district is meeting the USMS-allocated amount of ADP in the facility. If a facility on the scorecard is under its allocation, or "in the red," POD assesses which facilities that district is using to house prisoners. While POD officials stated they cannot dictate to a U.S. Marshal which facility to use for a specific prisoner, they noted that if they find, for example, that a district is using an IGA or a higher-priced facility rather than a facility with guaranteed minimum bed spaces, then POD officials call the district to provide coaching on utilizing allocated bed space. In addition to monitoring utilization levels at detention facilities, USMS also calculates a cost avoidance amount based on the amount of space it should be using at these facilities and the amount of space currently occupied by its prisoners. According to USMS officials, the agency performs calculations to determine its potential cost avoidance for private and federal facilities as part of its ongoing monitoring and prisoner reassignment efforts. USMS officials noted that these monitoring and prisoner reassignment efforts are currently ad hoc, but that the agency is working to formalize the scorecard system and its facility utilization review process and expects to begin monthly reviews in June 2016. Internal control. We reviewed the elements of USMS's internal control system that are designed to specifically provide USMS with opportunities to identify cost efficiencies and generally found that its internal control processes align with Standards for Internal Control in the Federal Government and the Office of Management and Budget's (OMB) Circular No. A-123, Management's Responsibility for Internal Control, which defines management's responsibility for internal control in federal agencies. Internal control is an integral component of an organization's management that provides reasonable assurance that objectives are achieved, including the efficiency of operations. We focused our review on USMS's internal control objective related to achieving operational efficiencies. Table 5 provides examples of USMS's specific internal control processes, organized by standard. We did not independently test USMS's internal controls to determine whether they mitigate all possible risks and are operating as intended. We found that USMS has designed an internal control system that could help it identify opportunities to achieve operational efficiencies, including on-site compliance reviews in which each district is assessed every 6 to 7 years. In addition, each district is to test its internal control over the efficiency of its operations through a standardized, annual self-assessment process. 
However, agency officials reported that USMS does not have a way to aggregate or analyze the results of these self-assessments, which are the only reviews available for each district each year. According to USMS officials, their current process relies on tools, such as SharePoint, that are not able to aggregate the self-assessment data or run any type of data analytics. They also stated that they completed a business process analysis in fiscal year 2015 that may help them compile the findings of the reviews, but they are still unable to aggregate the results. The officials said that they recognize the need for an integrated system that would allow them to compile the self-assessments, corrective action plans, and compliance reviews. Officials also stated that they currently have four different systems into which they must manually input information. They stated that integrating these into one overall system would increase productivity, accountability, and USMS's overall compliance rate. In addition, according to these officials, having a data analysis capability would allow USMS to detect deficiency trends and patterns, which could increase and enhance its reporting capabilities. For example, USMS could report on a quarterly basis, which would enable it to more closely monitor district compliance rates. Because USMS cannot aggregate or analyze the annual SAG self-assessment results, which it relies on for those districts not being assessed during USMS's district review cycle, it cannot identify whether the same control deficiencies are occurring across districts or in the same districts over time, hindering its ability to promptly resolve these issues or to identify agency-wide deficiencies and develop corrective actions in key risk areas. For example, one control activity that is to be tested regularly is whether the district reviews purchase cardholder statements to ensure that only authorized goods and services are purchased and that no purchase exceeds a set threshold. With no ability to aggregate self-assessment results to identify whether a deficiency in an area such as this is occurring across many districts in a given year, or in the same district over a number of years, USMS may not be able to promptly resolve inaccurate purchases, potentially resulting in payments that are higher or lower than they should be. 
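To make the gap concrete, the sketch below shows one simple form such cross-district aggregation could take. The record layout, district identifiers, and control activity names are hypothetical and are not drawn from USMS's actual self-assessment forms.

# A minimal sketch of the kind of cross-district aggregation USMS
# officials say they cannot currently perform. All records below are
# hypothetical examples, not actual USMS self-assessment data.
from collections import Counter

self_assessments = [
    # (fiscal_year, district, control_activity, deficiency_found)
    (2014, "D-01", "purchase_card_review", True),
    (2014, "D-02", "purchase_card_review", False),
    (2015, "D-01", "purchase_card_review", True),
    (2015, "D-03", "purchase_card_review", True),
    (2015, "D-03", "property_inventory", False),
]

# Count deficiencies per control activity across all districts and years.
by_activity = Counter(activity for _, _, activity, deficient
                      in self_assessments if deficient)

# Flag districts reporting the same deficiency in more than one year.
per_district = Counter((district, activity) for _, district, activity, deficient
                       in self_assessments if deficient)
recurring = [key for key, count in per_district.items() if count > 1]

print(by_activity)  # Counter({'purchase_card_review': 3})
print(recurring)    # [('D-01', 'purchase_card_review')]

Even this simple tabulation would reveal whether a deficiency such as unreviewed purchase card statements is widespread in a given year or recurring in a single district, which is precisely the trend analysis officials said they cannot do today. 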
A 2012 DOJ Office of Inspector General (OIG) audit of USMS raised similar concerns about USMS's on-site review timeline, finding that USMS had not ensured that district and division procurement officials were complying with federal, DOJ, and USMS policies, and that these noncompliance problems resulted from an inability to effectively manage and oversee procurement activities at the district and division level. As such, the OIG recommended that USMS strengthen its inspection and review of certain activities by shortening the interval between on-site reviews of operations in the district and division offices. USMS officials told us that they are continuing to work on implementing the recommendation to ensure that they are performing their on-site reviews closer to a 3 to 4 year cycle, which is standard among other agencies. USMS has already improved its on-site compliance review cycle from past years. Officials stated that USMS currently reviews each district every 6 to 7 years, as compared to every 12 years in 2012, and that it increased the number of on-site reviews from 11 to 14 per year in 2012 to 18 per year now. According to USMS officials, they are on track to perform 16 on-site reviews in fiscal year 2016 and are continuing to increase the number of on-site reviews in response to the 2012 OIG recommendation. However, in the years between the on-site reviews, USMS relies on information from the annual self-assessments for each district to identify deficiencies and develop needed corrective actions. According to Standards for Internal Control in the Federal Government, internal control monitoring assesses the quality of performance over time and promptly resolves the findings of audits and other reviews. Corrective actions are a necessary complement to control activities in order to achieve objectives. By developing a mechanism that would allow it to aggregate and analyze results from the annual self-assessments, USMS would be better positioned to more consistently and comprehensively identify deficiencies and monitor corrective actions across districts and over time, which could result in additional opportunities to achieve cost savings and efficiencies. USMS provided for the care of over 50,000 federal prisoners daily at a cost of about $1.4 billion in fiscal year 2015. In managing these funds, USMS has taken steps to leverage and identify opportunities to achieve cost savings and efficiencies. Such actions include the implementation of detention management systems, the support of AOUSC's alternatives to pre-trial detention program, and the implementation of a scorecard system to track district use of private and federal facilities in order to identify opportunities for cost efficiencies. However, USMS does not fully know how much its actions have saved because it has not developed reliable and transparent methods for estimating cost savings. In addition, it has not established a consistent and reliable mechanism for reviewing results of various operational assessments at the district level, which hinders its ability to consistently and comprehensively identify deficiencies and monitor corrective actions across districts and over time. Establishing such mechanisms and developing more reliable methods to estimate cost savings could help USMS resolve its noted deficiencies more promptly, report the savings it has achieved to the Congress more accurately, and ultimately operate more efficiently and effectively. To ensure that cost savings estimates are reliable, we recommend that the Director of the USMS direct its Prisoner Operations Division to develop reliable methods for estimating cost savings and validating reported savings achieved. To enable USMS to more consistently identify deficiencies and monitor corrective actions, we recommend that the Director of the USMS establish a mechanism to aggregate and analyze the results of annual district self-assessments. We provided a draft of this report to DOJ and the Administrative Office of the U.S. Courts (AOUSC) for review and comment. Liaisons from DOJ and USMS responded in an email that DOJ had no formal comments on the report and concurred with the recommendations. The AOUSC liaison also responded in an email that AOUSC had no written comments on the report. The USMS liaison provided technical comments, which we incorporated as appropriate. We are sending copies of this report to DOJ, AOUSC, appropriate congressional committees and members, and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. 
If you or your staff have any questions, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. We addressed the following questions as part of this review: (1) What are the primary costs associated with United States Marshals Service (USMS) prisoner operations, and what have been the trends in spending from fiscal years 2010 through 2015? (2) What recent actions has USMS taken to reduce its prisoner operations costs, and how much has been saved? (3) To what extent does USMS have systems in place to identify additional opportunities to save costs? To identify costs and trends, we reviewed USMS's congressional budget justifications covering fiscal years 2010 through 2015 to identify USMS-reported cost drivers. We selected this time period because we believe that 6 years is sufficient time to identify trends in prisoner operations costs, and GAO last reported on USMS prisoner costs in fiscal year 2010. We focused our review on the USMS Federal Prisoner Detention (FPD) appropriation, which pays for about 85 percent of total prisoner-related costs. We obtained underlying data from USMS and aggregated these data at the district level. In particular, we obtained operational data for USMS's prisoner activities from fiscal years 2010 through 2015, specifically obtaining detention population counts per year for all prisoner facilities by USMS district and daily detention population counts for private and fixed-rate facilities. We also obtained financial data that pertained to housing, medical, and transportation costs, which we then aggregated at the district level. This included the housing costs for each facility, including per diem agreements for applicable state and locally managed facilities and contract rates for facilities with guaranteed minimum terms. While USMS districts do not manage private prison facility contracts, we prorated the cost of private prison usage by district by determining the proportion of the total annual average daily detention population in private facilities associated with each district and multiplying the total private facility costs by that proportion. To determine medical costs, we obtained district-level medical services information, medical guard services costs per district, and medical-related transportation costs per district, which we summed to obtain district-level costs. To determine transportation costs, we obtained and summed costs per district for in-district transportation guard support and costs for other contract-rate guards. We also obtained Justice Prisoner and Alien Transportation System (JPATS) air and ground transportation cost information from USMS for JPATS's USMS prisoner operations, and attributed ground transportation costs to the respective district responsible for the prisoners moved. However, we were unable to determine the costs of JPATS air support by district, as the agency does not collect information or manage the JPATS air program in a way that would attribute such costs by district. 
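The proration of private facility costs described above reduces to a simple calculation, sketched below with hypothetical figures rather than actual USMS cost or population data.

# A minimal sketch of the proration described above: each district is
# attributed a share of total private facility costs in proportion to
# its share of the total average daily population (ADP) held in private
# facilities. All figures are hypothetical.

total_private_cost = 10_000_000.0  # total annual private facility costs
private_adp_by_district = {"D-01": 300.0, "D-02": 150.0, "D-03": 50.0}

total_private_adp = sum(private_adp_by_district.values())
prorated_cost = {district: total_private_cost * adp / total_private_adp
                 for district, adp in private_adp_by_district.items()}

print(prorated_cost)
# {'D-01': 6000000.0, 'D-02': 3000000.0, 'D-03': 1000000.0}

Here, a district holding 60 percent of the private-facility ADP is attributed 60 percent of the total private facility costs. 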
We assessed the reliability of these data and found some inconsistencies. This reliability assessment included conducting checks for completeness and logical consistency, obtaining documentation on systems' end-user capabilities and data controls, interviewing data users and managers responsible for maintaining the data, and comparing the data to data USMS previously reported. The inconsistencies we identified included differences between the costs reported by state and local facilities and the costs we calculated based on the reported per diem rates and the average daily population attributed to those facilities. Further, we identified missing or inconsistent facility designations that led to differing costs by facility. However, we were able to address these inconsistencies and determined that they did not greatly affect the cost data for district prisoner operations. We also found that detention population data were missing for fewer than 5 days for each fixed-rate facility (including private facilities and two state and local fixed-rate facilities). We found that the missing data did not severely affect our calculation of costs, and we were able to account for the missing data in our cost calculations. Because of these inconsistencies, USMS deobligations in prisoner operations-related funding in later years, and differences due to rounding, USMS-reported costs and annual average daily population differ slightly from the calculated costs and populations in this report. We found these differences to be minimal, affecting total costs by less than 2 percent in fiscal year 2012 and less than 1 percent in all other years except fiscal year 2015, when USMS had made obligations for costs to be incurred in fiscal year 2016. We removed these obligations from the data because they were not within the scope of the review; the result was a difference in total costs of less than 1 percent in fiscal year 2015. Therefore, we found the data to be reliable for the purposes of identifying and describing the primary cost drivers and the districts' relative prisoner operation costs from the FPD. In addition, we interviewed USMS Prisoner Operations officials to obtain USMS's views on identified cost drivers and trends. We corroborated USMS headquarters officials' views by conducting interviews with USMS officials in selected districts. Specifically, we conducted interviews with officials in 3 USMS districts—the Southern District of California, the Northern District of Georgia, and the District of Maryland—to obtain field office views on the costs and trends occurring over the past 6 years. We chose these districts because of their geographic dispersion, the size of their prisoner populations, and unique actions taken or ancillary missions conducted at each district. Specifically, the Southern District of California has, on average, one of the largest prisoner populations of any district and works with numerous types of facilities to house prisoners. The Northern District of Georgia has one large private facility and also serves as a transportation center in the southeast for neighboring districts to transfer prisoners to different facilities throughout the country. The District of Maryland is the only district to have a state-owned facility currently participating in the capital improvement program, a program in which USMS provides funding for improvements to state and local facility infrastructure. While the information we obtained from our site visits is not generalizable to all USMS districts, it provides insights into costs and trends in prisoner operations. 
To determine the recent actions USMS has taken to reduce its prisoner operations costs, we reviewed USMS's congressional budget justifications and interviewed officials to compile a list of prisoner-related actions, and we limited our analysis to those identified initiatives or actions that had monetized cost savings associated with them for fiscal years 2010 through 2015. We chose this time period to align our review of USMS's cost savings efforts with our review of USMS's prisoner operations cost trends for those 6 years. To determine the extent to which USMS's estimated savings are reliable, we analyzed USMS documents and data, where available, such as documentation of the methodology and resulting dollar figures from each initiative's savings estimate, and housing costs data. We compared each of USMS's cost savings estimates against guidance for developing and documenting reliable cost savings estimates, including the Office of Management and Budget's Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, Standards for Internal Control in the Federal Government, and GAO's guidance for Assessing the Reliability of Computer-Processed Data, to determine the extent to which the estimates were sufficiently comprehensive, accurate, consistent, and documented. Additionally, for each well-documented savings estimate, we assessed whether major assumptions were reasonable by conducting or evaluating sensitivity analyses. We also reviewed estimates to ensure that reasonable assumptions were consistently and accurately applied. Additionally, we used USMS's housing data on facilities' per diem costs per prisoner and average daily population to determine the percentage of USMS prisoners housed in federal Bureau of Prisons (BOP) facilities—for which USMS does not pay—for fiscal years 2010 through 2015, and to monetize any potential cost savings resulting from USMS housing some of its prisoners in BOP facilities. To monetize potential cost savings, we developed two estimates: (1) an estimate of USMS's potential costs avoided by using BOP facilities for fiscal years 2010 through 2015; and (2) an estimate of the potential cost savings to the Department of Justice (DOJ)—of which both USMS and BOP are component agencies—in fiscal year 2015 due to USMS housing its prisoners in BOP facilities versus in potentially more costly non-federal facilities. To estimate the costs avoided by USMS, we used BOP-identified daily per capita costs for housing prisoners to calculate USMS's potential costs avoided by housing its prisoners in BOP facilities from fiscal years 2010 through 2015. To develop this cost avoidance estimate, we assumed that USMS would pay BOP to house USMS prisoners at BOP's daily per capita cost. These daily per capita costs are determined and published by BOP for each type of federal facility on an annual basis. We classified, confirmed, and applied the respective BOP daily per capita rates to each BOP facility in which USMS prisoners were housed. Then, we multiplied the total number of prisoners USMS housed in these BOP facilities by the appropriate daily per capita cost, using USMS's data on its average daily population for these facilities. Finally, we summed total costs for each facility for fiscal years 2010 through 2015 to determine the total costs avoided for each of these fiscal years. 
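In outline, the costs-avoided estimate described above multiplies each facility's average daily population by BOP's published daily per capita rate and the number of days in the fiscal year, then sums across facilities. The sketch below illustrates the arithmetic; the facility names, rates, and populations are hypothetical, not BOP's actual published figures.

# A minimal sketch of the costs-avoided calculation described above.
# Facility names, per capita rates, and populations are hypothetical.

DAYS_IN_FY = 365

# facility -> (USMS average daily population, BOP daily per capita rate)
bop_usage = {
    "Facility A": (800.0, 90.0),
    "Facility B": (450.0, 85.0),
    "Facility C": (300.0, 95.0),
}

costs_avoided = sum(adp * rate * DAYS_IN_FY for adp, rate in bop_usage.values())
print(f"Estimated costs avoided: ${costs_avoided:,.0f}")
# (800*90 + 450*85 + 300*95) per day = $138,750; over 365 days = $50,643,750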
To estimate the potential cost savings to DOJ, we used USMS's fiscal year 2015 prisoner population and costs data and BOP's per capita costs to compare the costs of housing USMS's prisoners in BOP facilities versus the costs to house those prisoners in private or state and local facilities. We made the following assumptions: (1) if USMS were to pay for housing its prisoners at BOP facilities, USMS's rate would be the daily per capita cost per prisoner, as published by BOP; (2) the existing private or state and local facilities would meet the demand for housing the USMS prisoners currently housed in BOP facilities if USMS did not have the use of those facilities; and (3) the current private or state and local facilities would meet USMS's housing demand at the same costs per day as they did in fiscal year 2015. To develop the estimate, we first identified the USMS districts that primarily used BOP facilities in fiscal year 2015. These 22 districts housed at least 2.5 percent of their total average daily population in BOP facilities. We then determined the costs these districts would have paid to BOP to house their prisoners, using BOP facilities' total daily per capita rates and the districts' average daily population for each BOP facility. We then determined the difference in costs (i.e., potential cost savings) between these districts housing their prisoners in BOP facilities and these districts housing those same prisoners in private or state and local facilities. Because USMS has the potential to use either private facilities or state and local facilities to house its prisoners, we developed two estimates to compare with the costs of housing prisoners in BOP facilities—one assuming prisoners were housed in private facilities, and one assuming prisoners were housed in state and local facilities. To determine the costs to house the prisoners in private facilities, we identified the private facilities that each district used in fiscal year 2015, determined the private facilities' effective per diem costs, and derived a weighted private facility per diem cost for each district. The weighted per diem cost took into account USMS's average daily population for each facility compared to USMS's total average daily population for private facilities in each district. We applied the weighted per diem cost for each district to the average daily population the district placed in BOP facilities. We then summed the estimated total costs for using BOP facilities and the estimated total costs of housing these prisoners in private facilities for these districts, and compared the two to determine cost savings, if any. We repeated the above methodology for state and local facilities. Further, we interviewed agency officials to corroborate the initiatives we had identified, as well as to identify any other unreported cost savings actions. We also interviewed the officials who estimated the savings to explain the methodologies, clarify any discrepancies, and provide any additional information in support of the estimates. 
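For a single district, the weighted per diem comparison described above can be sketched as follows; all facility names, rates, and populations are hypothetical.

# A minimal sketch of the weighted per diem comparison described above,
# for one hypothetical district. All figures are illustrative only.

DAYS_IN_FY = 365

# private facility -> (district ADP housed there, effective per diem)
private_usage = {"Private CDF 1": (200.0, 75.0), "Private CDF 2": (100.0, 90.0)}

total_private_adp = sum(adp for adp, _ in private_usage.values())
weighted_per_diem = sum(adp * rate for adp, rate
                        in private_usage.values()) / total_private_adp
# (200*75 + 100*90) / 300 = $80.00 per prisoner per day

bop_adp = 400.0        # district ADP housed in BOP facilities
bop_per_capita = 70.0  # hypothetical BOP daily per capita rate

bop_cost = bop_adp * bop_per_capita * DAYS_IN_FY
private_equivalent = bop_adp * weighted_per_diem * DAYS_IN_FY
potential_savings = private_equivalent - bop_cost

print(f"Weighted private per diem: ${weighted_per_diem:.2f}")
print(f"Potential savings from BOP housing: ${potential_savings:,.0f}")  # $1,460,000

In this hypothetical, housing the district's BOP-held prisoners in private facilities instead would cost about $1.46 million more per year; summing such differences across the 22 districts yields the DOJ-wide estimate. 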
To determine the extent to which USMS has systems designed to identify additional opportunities to save costs, we reviewed the processes and tools USMS used from fiscal years 2010 through 2015 to identify, implement, and promote cost-efficiency and savings initiatives throughout its institutions, such as USMS's use of scorecards to determine district utilization of private and federal facilities, and the agency's strategic plan. We also spoke with USMS officials to discuss how its districts implement USMS policy directive guidance, in what instances the districts may deviate from the stated guidance, and USMS's oversight of district adherence to and deviation from internal policy guidance. We chose this time period to align with our review of USMS's prisoner operations cost trends for those 6 years. With respect to identifying additional opportunities to realize cost efficiencies or reduce costs, using our financial analysis as context, we analyzed elements of USMS's internal control system related to the control objective of achieving operational efficiencies and interviewed relevant officials to assess whether USMS has designed a management structure and processes to routinely assess its administrative and operational activities for possible corrective action. We did not independently test USMS's internal controls to determine whether they mitigate all possible risks and are operating as intended. Specifically, we reviewed USMS's mechanisms and processes leading to its internal review of operational and administrative functions, including its process for taking corrective action related to high-cost areas, such as procurement and human resources, and compared these characteristics with those called for in Standards for Internal Control in the Federal Government and with the implementing guidance in OMB Circular No. A-123, Management's Responsibility for Internal Control, which defines management's responsibility for internal control in federal agencies. We interviewed relevant officials to discuss actions USMS internal control officials have taken and are taking to establish processes for identifying and implementing corrective actions in high-cost areas, as well as agency oversight of these actions. We conducted this performance audit from March 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 provides the average daily detention population (ADP) and the respective housing, medical, and transportation costs for each district in fiscal year 2015. Additionally, the table provides housing costs associated with prisoners under the custody of the United States Marshals Service (USMS) Justice Prisoner and Alien Transportation System (JPATS). Housing costs include both costs paid for by the districts and costs for private facility usage, for which USMS headquarters administers payment. Medical costs include national managed care contract payments for health services attributed to the districts, additional district payments for health services, and payments for guard and transportation support, by district. A portion of medical payments are not attributable to the districts and appear as a USMS headquarters cost. Transportation costs include in-district support for services to move prisoners from state, local, and private facilities, as well as from JPATS. Further, the transportation costs associated with JPATS are for nationwide air carrier costs that could not be attributed to the districts. 
From fiscal years 2010 through 2015, the United States Marshals Service (USMS) expended at least $1.40 billion annually on prisoner housing, medical care, and transportation. The information that follows describes trends in these cost areas from fiscal years 2010 through 2015. To house federal prisoners, USMS paid for bed space through intergovernmental agreements (IGA) with state and locally owned prisoner facilities or through direct contracts with private facilities. Trends in facility usage and costs show that while the majority of USMS prisoners are housed in IGA facilities each year, the agency has used fewer IGA facilities as the overall average daily population (ADP) has fallen, and the percentage of ADP held in IGA facilities has dropped slightly over the 6-year timeframe. As a result, the nominal cost for using IGA facilities has decreased by over $80 million from fiscal year 2010 to 2015. See table 7. USMS's costs of using private facilities from fiscal years 2010 through 2015 have generally risen and fallen with ADP. USMS officials stated that once the agency has developed contract terms with guaranteed minimum bed spaces and costs, the agency has an incentive to ensure prisoner populations in private facilities do not fall below the guaranteed daily bed space. Otherwise, USMS would be paying for bed space it has not filled. As a result, while ADP has dropped in private facilities, the percentage of annual ADP in private facilities has slightly increased. Further, nominal costs have increased by about $50 million from fiscal year 2010 to 2015. See table 8. USMS medical costs generally comprise the second largest driver of USMS prisoner costs, although medical costs were less than 10 percent of total prisoner costs for each fiscal year from 2010 through 2015. According to USMS officials, USMS has historically managed medical care through the districts, though it began a nationally managed program to better control costs in fiscal year 2013. USMS medical costs generally fall into three categories: (1) health care services, such as payments to health care providers and for supplies and equipment; (2) USMS medical program costs, including system-wide costs such as USMS-employed practitioner review of medical records and nationally managed contracts; and (3) transportation and guard services for medical care requiring outside services. Beginning in fiscal year 2013, USMS's Prisoner Operations Division (POD) initiated a nationally managed care contract to pay for districts' health services. By fiscal year 2014, USMS was paying for a substantial portion of the districts' health care services through the nationwide contract. As a result, costs shifted from districts paying for individual health services to POD paying for most medical costs through a nationally managed contract as a medical program expenditure. However, medical guard and transportation costs are still paid by the districts and not through a nationally managed program. As illustrated in figure 9, while total medical costs have grown from about $88 million to about $115 million, costs for individual district-managed health services have decreased, and transportation and guard service costs have remained relatively constant, increasing slightly from almost $20 million to about $22 million over the 6-year time period. 
Transportation costs include all support costs related to moving prisoners between prison facilities or for court appearances and other court-ordered movements. Such costs include the cost of moving prisoners as well as the labor costs associated with guarding and securing prisoners during movement if a guard is not provided by district officials. Transportation support costs generally comprise about 5 percent or less of total USMS prisoner costs. Transportation support costs fall into two broad categories: (1) in-district support for movements occurring within or otherwise managed by district U.S. Marshals and (2) support provided by the USMS Justice Prisoner and Alien Transportation System (JPATS) for prisoner movements of more than 50 miles outside the originating district. JPATS is a separate division of USMS that conducts major prisoner movements for both USMS prisoners and BOP inmates. JPATS can move prisoners through both ground and air services and owns and leases a number of aircraft for its prisoner movements. District U.S. Marshals are responsible for managing and paying for in-district transportation support, which generally consists of prison officers from the facilities in which prisoners are housed. For JPATS support, POD has a reimbursable agreement in place to reimburse the division for its transportation and labor costs. As illustrated in figure 10, the majority of transportation costs each year are associated with JPATS air travel. Specifically, air travel costs comprised between 52 and 61 percent of total transportation support costs, with in-district support comprising the second largest category, between 30 and 38 percent of annual transportation costs. As discussed above, USMS uses the assistance of state and local officers and contracted private guards to supplement deputy U.S. Marshals to, for example, facilitate prisoner movements within a district and provide guard services for medical procedures. U.S. Marshals contract with IGA facilities or private facilities to move prisoners using facility-provided guards. In addition, districts may employ sworn officers on an individual basis to conduct these activities; these officers are referred to as district security officers. Further, JPATS employs state and local officers and contract guards to augment its force when conducting ground movements. Guard costs are captured as part of the medical guard and transportation costs in figure 9 and the in-district and JPATS support costs in figure 10. However, because guard costs above are reported within the medical and transportation cost categories, table 9 provides a breakout of guard costs by district for each of the three major types of guard forces, as well as total guard costs, for fiscal year 2015. In addition to the contact named above, Jill Verret (Assistant Director), Pedro Almoguera, Willie Commons, III, Tonnye' Conner-White, Dominick Dale, Kathleen Donovan, Jamarla Edwards, Eric Hauswirth, Scott Hiromoto, Jeremy Manion, Amanda Miller, John Mingus, Caroline Neidhold, Wade Tanner, and Michael Tropauer made key contributions to this report.
The Department of Justice's (DOJ) USMS is responsible for managing more than 50,000 federal prisoners during criminal proceedings until their acquittal or their conviction and transfer to the Federal Bureau of Prisons to serve their sentence. USMS provides housing, clothing, food, transportation, and medical care. USMS does not own or manage its own facilities and instead relies on a combination of federal, state, local, and privately managed facilities to house and care for these prisoners. Senate Report 113-78, accompanying the Continuing Appropriations Act of 2014, included a provision for GAO to assess the costs of housing federal inmates and detainees. This report (1) identifies the primary costs associated with USMS prisoner operations and the trends in spending from fiscal years 2010 through 2015; (2) assesses recent actions USMS has taken to reduce its prisoner operations costs and how much has been saved; and (3) evaluates the systems USMS has in place to identify additional opportunities to save costs. GAO analyzed USMS's financial and operational data related to its prisoner operations costs from fiscal years 2010 through 2015, analyzed USMS documentation, and interviewed USMS officials. From fiscal years 2010 through 2015, the U.S. Marshals Service's (USMS) largest prisoner costs were housing payments to state, local, and private prisons. For example, in fiscal year 2015, USMS spent 86 percent of its $1.4 billion in prisoner operation costs on housing. While total prisoner costs and prisoner populations have decreased since fiscal year 2012, per prisoner costs have increased. USMS officials attributed the increase in part to lower than expected prisoner populations, resulting in USMS not filling guaranteed bed space at certain facilities. Also, prisoner costs generally were higher in districts with larger populations and limited use of federal facilities, for which USMS does not pay. Both population and costs were highest in 5 districts along the southwest border (see figure). USMS has implemented actions that it reports have continued to save prisoner-related costs from fiscal years 2010 through 2015, such as the alternatives to pre-trial detention program to reduce prisoners in USMS's custody. However, for actions with identified savings over this time period, GAO found that about $654 million of USMS's estimated $858 million in total savings is not reliable. For example, USMS identified $375 million in savings from the alternatives to pre-trial detention program for fiscal years 2010 through 2015, but did not verify the data or methodology used to develop the estimate or provide documentation supporting its reported savings for fiscal years 2012 onward. By developing reliable methods for estimating costs and validating savings, USMS would be better positioned to assess the effectiveness of its cost savings efforts. USMS has designed systems to identify opportunities for cost efficiencies, including savings. For example, the agency requires districts to conduct annual self-assessments of their procedures to identify any deficiencies that, if corrected, could lead to cost savings. However, USMS cannot aggregate and analyze the results of the assessments across districts. Developing a mechanism to do so would better position USMS to identify deficiencies and develop corrective actions that could result in additional cost savings opportunities. 
GAO recommends that USMS develop reliable methods for estimating cost savings and validating reported savings achieved, and establish a mechanism to aggregate and analyze the results of annual district self-assessments. USMS concurred with the recommendations.
FFMIA is part of a series of management reform legislation passed by Congress over the past two decades. This series of legislation started with the Federal Managers' Financial Integrity Act of 1982 (FMFIA), which Congress passed to strengthen internal controls and accounting systems throughout the federal government, among other purposes. Issued pursuant to FMFIA, the Comptroller General's Standards for Internal Control in the Federal Government provides standards directed at helping agency managers implement effective internal control, an integral part of improving financial management systems. Internal control is a major part of managing an organization and comprises the plans, methods, and procedures used to meet missions, goals, and objectives. In summary, internal control, which under OMB's guidance for FMFIA is synonymous with management control, helps government program managers achieve desired results through effective stewardship of public resources. Effective internal control also helps in managing change to cope with shifting environments and evolving demands and priorities. As programs change and agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal control to ensure that the control activities being used are effective and updated when necessary. While agencies had achieved some success in identifying and correcting material internal control and accounting system weaknesses, their efforts to implement FMFIA had not produced the results intended by Congress. Therefore, in the 1990s, Congress passed additional management reform legislation to improve the general and financial management of the federal government. The combination of reforms ushered in by the (1) CFO Act of 1990, (2) Government Performance and Results Act of 1993, (3) Government Management Reform Act of 1994, (4) FFMIA, (5) Clinger-Cohen Act of 1996, (6) Accountability of Tax Dollars Act of 2002, and (7) Department of Homeland Security Financial Accountability Act of 2004, if successfully implemented, provides a solid foundation for improving the accountability of government programs and operations as well as for routinely producing valuable cost and operating performance information. These financial management reform acts emphasize the importance of improving financial management across the federal government. In particular, building on the foundation laid by the CFO Act, FFMIA emphasizes the need for agencies to have systems that are able to generate reliable, useful, and timely information for decision-making purposes and to ensure accountability on an ongoing basis. FFMIA requires the departments and agencies covered by the CFO Act to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the SGL at the transaction level. FFMIA also requires auditors to state in their CFO Act financial statement audit reports whether the agencies' financial management systems substantially comply with FFMIA's systems requirements. Appendixes I through IV include details on the various requirements and standards that support federal financial management. OMB establishes governmentwide financial management policies and requirements and has issued two sources of guidance related to FFMIA reporting. First, OMB Bulletin No. 
01-02, Audit Requirements for Federal Financial Statements, dated October 16, 2000, prescribes specific language auditors should use when reporting on an agency system's substantial compliance with FFMIA. Specifically, this guidance calls for auditors to provide negative assurance when reporting on an agency system's FFMIA compliance. Second, in OMB Memorandum, Revised Implementation Guidance for the Federal Financial Management Improvement Act (Jan. 4, 2001), OMB provides guidance for agencies and auditors to use in assessing substantial compliance. The guidance describes the factors that should be considered in determining whether an agency's systems substantially comply with FFMIA's requirements. Further, the guidance provides examples of the types of indicators that should be used as a basis for assessing whether an agency's systems are in substantial compliance with each of the three FFMIA requirements. Finally, the guidance discusses the corrective action plans, to be developed by agency heads, for bringing their systems into compliance with FFMIA. We have worked in partnership with representatives from the President's Council on Integrity and Efficiency (PCIE) to develop and maintain the joint GAO/PCIE Financial Audit Manual (FAM). The FAM provides specific procedures auditors should perform when assessing FFMIA compliance. As detailed in appendix V, we have also issued a series of checklists to help assess whether agencies' systems meet systems requirements. The FAM guidance on FFMIA assessments recognizes that while financial statement audits offer some assurance regarding FFMIA compliance, auditors should design and implement additional testing to satisfy FFMIA criteria. For example, in performing financial statement audits, auditors generally focus on the ability of the financial management systems to process and summarize financial information that flows into annual agency financial statements. In contrast, FFMIA requires auditors to assess whether an agency's financial management systems comply with system requirements, accounting standards, and the SGL. To do this, auditors need to consider whether agency systems provide reliable, useful, and timely information for managing day-to-day operations so that agency managers have the necessary information to measure performance on an ongoing basis rather than just at year-end. Further, OMB's current audit guidance calls for financial statement auditors to review performance information for consistency with the financial statements, but does not require auditors to determine whether such information is available to managers for day-to-day decision making, as called for by the FAM guidance for testing compliance with FFMIA. We reviewed the fiscal year 2004 financial statement audit reports for the 23 CFO Act agencies to identify the auditors' assessments of agency financial systems' compliance and the problems that affect FFMIA compliance. We also reviewed the fiscal year 2004 financial statement audit report for DHS to identify any FFMIA-related issues. Prior experience with the auditors and our review of their reports provided the basis for determining the sufficiency and relevance of the evidence provided in these documents. Based on the audit reports, we identified problems reported by the auditors that affect agency systems' compliance with FFMIA. The problems identified in these reports are consistent with long-standing financial management weaknesses we have reported based on our work at a number of agencies. 
However, we caution that the occurrence of problems in a particular category may be even greater than auditors' reports of FFMIA noncompliance would suggest, because auditors may not have included all problems in their reports. Finally, we held discussions with OMB officials to obtain information about their current efforts to improve federal financial management and address our prior recommendations related to FFMIA. We conducted our work in the Washington, D.C., area from February 2005 through May 2005 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OMB or his designee. We received written comments from the OMB Controller. OMB's comments are discussed in the Agency Comments and Our Evaluation section and reprinted in appendix VI. While agencies have made demonstrable progress in producing auditable financial statements and some progress in addressing their financial management systems weaknesses, the majority of agencies' systems are still not substantially compliant with FFMIA's requirements. Figure 2 summarizes auditors' assessments of FFMIA compliance for fiscal years 2000 through 2004 and suggests that the instances of noncompliance with FFMIA's three requirements have remained fairly constant. For fiscal year 2004, offices of inspectors general (OIG) and their contract auditors reported that the systems of 16 of the 23 CFO Act agencies did not substantially comply with at least one of FFMIA's three requirements—federal financial management systems requirements, applicable federal accounting standards, or the SGL at the transaction level. In fiscal year 2004, auditors for six agencies—the Department of Commerce (Commerce), the Department of Energy (Energy), the Environmental Protection Agency (EPA), the General Services Administration (GSA), the National Science Foundation (NSF), and the Social Security Administration (SSA)—provided negative assurance that the agencies' financial systems were in compliance with FFMIA. In addition, for the first time, auditors for one agency, DOL, provided positive assurance that its systems were in compliance with FFMIA. By comparison, in fiscal year 2003, the auditors for Commerce, Energy, EPA, the Nuclear Regulatory Commission (NRC), NSF, and SSA reported that the results of their tests disclosed no instance in which those agencies' financial management systems did not meet FFMIA requirements. At NRC, the auditors determined that the financial management systems did not comply with the requirements of FFMIA in fiscal year 2004, although they had determined that the systems were in compliance in fiscal year 2003. The change was due to audit tests performed on NRC's fee billing system in fiscal year 2004. As a result of their tests, the auditors concluded that the billing system lacked sound internal controls and did not comply with existing requirements for revenue systems. At GSA, the auditors provided negative assurance that the financial management systems were FFMIA-compliant in fiscal year 2004, although they had determined that the systems were not in compliance for fiscal year 2003 due to significant reconciliation problems. For fiscal year 2004, the GSA auditors closed the material weakness regarding the significant reconciliation problems that had affected GSA the prior year. DHS will be subject to FFMIA for the first time in fiscal year 2005. 
The Department of Homeland Security Financial Accountability Act added DHS to the list of CFO Act agencies effective for fiscal year 2005. Because DHS was not subject to FFMIA in fiscal year 2004, we have not included DHS in our summaries of compliance with FFMIA and problems reported by the auditors for fiscal year 2004. However, we have noted that the DHS auditors identified and reported deficiencies that relate to all three FFMIA requirements. We plan to include DHS in our analysis of the fiscal year 2005 FFMIA results. While substantially more CFO Act agencies have obtained clean or unqualified audit opinions on their financial statements, as shown in figure 2, the underlying agency financial systems remain a serious problem. The number of unqualified opinions has increased over the past 8 years (from 11 in fiscal year 1997 to 18 for fiscal year 2004), and most agencies were able to issue their audited financial statements within the accelerated reporting time frame—22 of the 23 CFO Act agencies issued their audited financial statements by the November 15, 2004, deadline set by OMB, just 46 days after the close of the fiscal year. While the increase in unqualified and timely opinions is noteworthy, we are concerned about the growing number of CFO Act agencies that have restated certain of their fiscal year 2003 financial statements to correct errors; given the seriousness of this problem, we included a matter of emphasis paragraph in our report on the audit of the fiscal year 2004 consolidated financial statements. As we have previously testified, at least 11 of the 23 CFO Act agencies restated their fiscal year 2003 financial statements, whereas 5 CFO Act agencies had restated their fiscal year 2002 financial statements. The restatements to CFO Act agencies' fiscal year 2003 financial statements ranged from correcting two line items on one agency's balance sheet to numerous line items on several of another agency's financial statements. The amounts of the agencies' restatements ranged from several million dollars to over $91 billion. Nine of those 11 agencies received unqualified opinions on their financial statements originally issued in fiscal year 2003. Seven of the 9 auditors issued unqualified opinions on the restated financial statements, which in substance replace the auditors' opinions on their respective agencies' original fiscal year 2003 financial statements. For 2 of these 9 agencies, the auditors not only withdrew their unqualified opinions on the fiscal year 2003 financial statements but also issued other than unqualified opinions on their respective agencies' restated fiscal year 2003 financial statements because they could not determine whether there were any additional misstatements or what effect these could have on the restated fiscal year 2003 financial statements. For two of the agencies with restated financial statements, auditors provided negative assurance that the agencies' systems were in compliance with FFMIA for fiscal year 2003. The restatements at these agencies reflected inaccurate recording of transactions, and in one case, the cause for the restatement could be traced back to the implementation of new software, among other factors. 
The necessity for these agencies to restate certain financial accounts in the subsequent fiscal year raises questions about whether the agencies' systems substantially met FFMIA requirements and whether financial managers had access to reliable, useful, and timely information with which to make fully informed operational decisions in fiscal year 2003. The need for restatements and end-of-year adjustments to correct errors undermines public trust and confidence in both the entity and all responsible parties and indicates a continuing lack of improvement in the underlying agency financial systems. Undue emphasis on receiving unqualified, or clean, audit opinions has created an expectation gap: as more agencies receive clean opinions, the public increasingly expects that the government has sound financial management and can produce reliable, useful, and timely information on demand throughout the year, whereas the annual FFMIA assessments offer a different perspective. In fiscal year 2004, auditors for seven agencies reported their systems to be in substantial compliance with the requirements of FFMIA. Auditors for six of these agencies (Commerce, Energy, EPA, GSA, NSF, and SSA) provided negative assurance that the agencies' systems were in compliance with FFMIA. Auditors provide negative assurance when they state that nothing came to their attention during the course of their planned procedures to indicate that these agencies' financial management systems did not meet FFMIA requirements. If readers are not familiar with the concept of negative assurance, which we believe is generally the case, they may incorrectly assume that these systems have been fully tested by the auditors and that the agencies have achieved compliance. OMB's current audit guidance calls only for auditors to provide negative assurance when reporting whether an agency's systems are in substantial compliance with FFMIA. To provide positive assurance of FFMIA compliance, auditors need to perform more comprehensive audit procedures than those necessary to render an opinion for a financial statement audit. In performing financial statement audits, auditors generally focus on the capability of the financial management systems to process and summarize financial information that flows into financial statements. In contrast, FFMIA is much broader and requires auditors to assess whether an agency's financial management systems substantially comply with systems requirements. To do this, auditors need to consider whether agency systems provide complete, accurate, and timely information for day-to-day decision making and management. In fiscal year 2004, auditors for DOL provided an opinion, or positive assurance, on DOL's financial management systems' compliance with FFMIA. At DOL, the Inspector General (IG) contracted with an independent public accounting firm to perform the FFMIA examination in accordance with American Institute of Certified Public Accountants attestation standards, which by reference are incorporated in Government Auditing Standards. To do so, the auditors used a combination of financial statement and FFMIA-specific audit procedures. Specifically, they performed extensive transaction testing and reconciliations combined with FFMIA-related audit procedures based on the GAO/PCIE FAM requirements. 
For example, they developed a good understanding of the financial systems’ capabilities, documented their assessments of DOL’s financial systems’ compliance with systems requirements, and considered the nature and extent of managerial cost information available for effective day-to-day management. According to the auditors, two developments at DOL in fiscal year 2004 were key to their ability to conclude that DOL systems substantially complied with the three requirements of FFMIA for fiscal year 2004. First, during fiscal year 2004, DOL management assigned staff the responsibility of reconciling the Fund Balance with Treasury (FBWT) accounts on a daily basis. Due to this increased focus on FBWT, at the end of fiscal year 2004, the auditors found no material differences between DOL’s and Treasury’s records. Second, DOL implemented a cost management system during fiscal year 2004 to provide managers with current-year cost data that had not been available in prior years. The auditors also determined that none of the internal control deficiencies reported as part of the financial statement audit indicated substantial noncompliance with FFMIA requirements. For the fiscal year 2005 financial statement audit, the auditors plan to increase their focus on the types of reports the cost system currently produces and how managers are using that information for day-to-day operations. In addition, the fiscal year 2005 DOL audit plan requires the auditors to perform the FFMIA-related FAM audit procedures and complete the associated checklists. The efforts by the DOL auditors to perform the level of review necessary to provide positive assurance of FFMIA compliance in fiscal year 2004 are most noteworthy. We have discussed the importance of providing positive assurance on FFMIA as required by the act for a number of years. We look forward to other agencies adopting similar auditing and reporting practices. The audit work needed to provide positive assurance on an agency’s financial management systems can identify weaknesses and lead to improvements that enhance the performance, productivity, and efficiency of federal financial management systems. It also provides a clear “bottom line,” whereas negative assurance does not. Therefore, as we have discussed in prior reports, we reaffirm our prior recommendation that OMB require agency auditors to provide a statement of positive assurance when reporting an agency’s systems to be in substantial compliance with FFMIA. OMB continues to support the requirement for negative assurance of FFMIA compliance. While OMB agrees that testing should occur, and its guidance on FFMIA calls for it, OMB officials stated that different, more coordinated approaches toward assessing an agency’s internal controls and FFMIA compliance might provide sufficient assurance on an agency’s systems. For example, in preparing the President’s Management Agenda (PMA) scorecard assessments, OMB officials meet with agencies to discuss a number of financial management issues and have systems demonstrations. Agencies are asked to identify key business questions and related cost drivers. Then, the agencies must develop systems that can produce the information needed on those cost drivers to help management at all levels focus on results. OMB officials stated that they believed the PMA scorecard framework offers an alternate route toward substantial compliance that is similar to that offered by positive assurance. In its written comments on a draft of this report (see app.
VI), OMB stated that the processes used in evaluating agencies against the PMA standards can provide a corroborative mechanism in evaluating compliance with FFMIA. Our concern is that the information provided by this approach does not come under audit scrutiny, which is what the law requires, and may not be reliable. In December 2004, OMB revised Circular No. A-123, Management’s Responsibility for Internal Control, to strengthen the requirements for conducting management’s assessment of internal control over financial reporting. The revision incorporates the internal control requirements for publicly traded companies that are contained in the Sarbanes-Oxley Act of 2002. The circular emphasized management’s responsibility for establishing, maintaining, and reporting on internal control to achieve the objectives of effective and efficient operations, reliable financial reporting, and compliance with laws and regulations. In commenting on a draft of this report, OMB emphasized that through its revision to Circular No. A-123, agencies are required to implement more rigorous processes for conducting management’s assessment of the effectiveness of internal controls over financial reporting. Given that PMA and Circular No. A-123 reviews help to ensure agencies’ access to and use of timely and accurate financial data, OMB believes that requiring a statement of positive assurance would prove only marginally useful. From our perspective, auditor reporting on internal control is a critical component of monitoring the effectiveness of an organization’s accountability, especially for large, complex, or challenged entities. Auditors can better serve their clients and other financial statement users and better protect the public interest by having a greater role in providing assurances of the effectiveness of internal control in deterring fraudulent financial reporting and protecting assets. Financial management systems are a critical element of an entity’s internal control over financial reporting. Although enhanced internal control reporting would not necessarily address the full capability of the financial management systems in place, such reporting would include reportable internal control weaknesses caused by financial systems problems. However, the full value of independent auditors’ assessments of FFMIA compliance will not be realized until auditors perform a sufficient level of audit work to be able to provide positive assurance that agencies are in compliance with FFMIA, as called for in the act. When reporting an agency’s financial management systems to be in substantial compliance, positive assurance from independent auditors will provide users with confidence that the agency systems provide the reliable, useful, and timely information envisioned by the act. We also reaffirm our previous recommendation that OMB explore clarifying the definition of “substantial compliance” to help ensure consistent application of the term. As we noted in our prior reports, auditors we interviewed had concerns about providing positive assurance in reporting on agency systems’ FFMIA compliance because of a need for clarification regarding the meaning of substantial compliance. In its comments, OMB stated that its growing experience helping agencies implement the PMA enables it to refine the existing FFMIA indicators associated with substantial compliance. Accordingly, OMB said it would consider our recommendation in any future policy and guidance updates.
Based on our review of the fiscal year 2004 audit reports for the 16 agencies reported to have systems not in substantial compliance with one or more of FFMIA’s three requirements, we identified six primary reasons cited by the auditors for agency systems not being compliant. The weaknesses reported by the auditors ranged from serious, pervasive systems problems to less serious problems that may affect only one aspect of an agency’s accounting operation: (1) nonintegrated financial management systems, (2) inadequate reconciliation procedures, (3) lack of accurate and timely recording of financial information, (4) noncompliance with the SGL, (5) lack of adherence to federal accounting standards, and (6) weak security controls over information systems. Figure 3 shows the relative frequency of these problems at the 16 agencies reported to have noncompliant systems. The same six types of problems have been cited by auditors in their fiscal years 2000 through 2003 audit reports, although the auditors may not have reported these problems as specific reasons for the agencies’ systems’ lack of substantial compliance with FFMIA. In addition, we caution that the occurrence of problems in any particular category may be even greater than auditors’ reports of FFMIA noncompliance would suggest because auditors may not have identified all problems in their reviews. The CFO Act calls for agencies to develop and maintain integrated accounting and financial management systems that comply with federal systems requirements and provide for (1) complete, reliable, consistent, and timely information that is responsive to the financial information needs of the agency and facilitates the systematic measurement of performance; (2) the development and reporting of cost management information; and (3) the integration of accounting, budgeting, and program information. OMB Circular No. A-127, Financial Management Systems, requires agencies to establish and maintain a single integrated financial management system that conforms to functional requirements published by JFMIP’s Program Management Office (PMO). More details on the financial management systems requirements can be found in appendixes I and II. The lack of integrated financial management systems typically results in agencies expending major effort and resources, including in some cases hiring external consultants, to develop information that their systems should be able to provide on a daily or recurring basis. Agencies with nonintegrated financial systems are also likely to devote more time and resources to collecting information than those with integrated systems, and opportunities for errors increase when agencies’ systems are not integrated. Auditors frequently mentioned the lack of integrated financial management systems in their fiscal year 2004 audit reports. As shown in figure 3, auditors for 12 of the 16 agencies with noncompliant systems reported this to be a problem, compared with 11 of the 17 agencies reported with noncompliant systems in fiscal year 2003. For example, auditors for the Department of Justice reported that the financial management systems of the department’s component agencies are not integrated or configured to support financial management and reporting. For instance, the U.S. Marshals Service’s core financial system lacks integrated subsidiary ledgers for certain key account balances. Consequently, staff at this organization must perform time-consuming manual procedures to document adjustments and crosswalks between the general ledger and the financial statements.
However, the limited amount of time available at the end of each financial reporting period increases the risk that errors in the financial statements will not be detected and corrected prior to final issuance. The auditors also noted that the nonintegrated systems do not support management’s need for timely and accurate information for day-to-day decision making. At the National Aeronautics and Space Administration (NASA), auditors reported numerous weaknesses in the core financial system, the integrated financial management system first implemented by NASA in fiscal year 2003. We have previously reported on problems NASA faced when implementing this system. Specifically, the auditors found that the core financial system lacked integration with certain key subsidiary systems, such as the property system, and did not facilitate the preparation of financial statements. Although the auditors recognized that management identified and resolved significant system problems in fiscal year 2004, the auditors identified serious continuing weaknesses in their review of property, plant, and equipment (PP&E)—specifically contractor-held PP&E. For example, due to a lack of integration with the property system, entries for contractor-held property, totaling $8.5 billion, had to be manually entered into the core financial system. The auditors concluded that the problems will not be fully addressed until NASA implements a single integrated system for reporting property and develops a methodology to identify costs that should be capitalized at the time that transactions are processed. The auditors further noted that certain transactions continue to be posted incorrectly due to improper configurations within the system. Consequently, they concluded that NASA lacks an integrated financial management system that provides effective and efficient interrelationships among software, hardware, personnel, procedures, controls, and data. A reconciliation process, whether manual or automated, is a necessary and valuable part of a sound financial management system. The less integrated the financial management system, the greater the need for adequate reconciliations because data are being accumulated from a number of different sources. Reconciliations are needed to ensure that data have been recorded properly between the various systems and manual records. The Comptroller General’s Standards for Internal Control in the Federal Government highlights reconciliation as a key control activity. As shown in figure 3, auditors for 11 of the 16 agencies with noncompliant systems reported that the agencies had reconciliation problems, including difficulty in reconciling their FBWT accounts with Treasury’s records, compared with 11 of the 17 agencies reported with noncompliant systems in fiscal year 2003. Treasury policy requires agencies to reconcile their accounting records with Treasury records on a monthly basis (comparable to individuals reconciling their personal checkbooks to their monthly bank statements). For example, in fiscal year 2003, auditors for the Department of Transportation (DOT) reported that the department had not implemented effective processes to reconcile transactions with other federal agencies. During fiscal year 2004, DOT did improve its reconciliation procedures using a new reporting tool within its financial management system.
However, on September 30, 2004, DOT still had not identified the agencies associated with $27 billion, or about half, of the $55 billion of transactions with other federal agencies that were processed and reported to Treasury in fiscal year 2004. The large amount associated with unknown trading partners demonstrates that DOT still lacks an effective process for reconciling transactions. Furthermore, DOT lacked an effective process for reconciling transactions among its own subsidiary agencies. In fiscal year 2004, DOT’s subsidiary agencies reported a total of $17 million in accounts receivable, or amounts due from other departmental agencies. These same organizations, however, reported $582 million in accounts payable, or amounts owed to other departmental agencies. Because these amounts should reflect only transactions within DOT, at the consolidated agency level the amount due should match the amount owed. Due to this discrepancy, DOT management had to perform extensive research and make numerous manual adjustments to balance its records and prepare reliable financial statements. Until DOT is able to automatically track transactions with other federal agencies and between its own subsidiary agencies, it will not be able to make significant progress in reconciling its transaction balances internally and with those of other agencies. As a result of these problems at DOT and other federal agencies, the federal government’s ability to determine the impact of these differences on the amounts reported in the consolidated financial statements is impaired, a condition we cite in our audit report on the U.S. government’s consolidated financial statements as one of three major impediments to providing an opinion on those financial statements. Resolving the intragovernmental transactions problem remains a difficult challenge and will require a commitment by federal agencies and strong leadership and oversight by OMB. As shown in figure 3, auditors for all 16 of the agencies with noncompliant systems reported the lack of accurate and timely recording of financial information as a problem for fiscal year 2004, compared with 15 of the 17 agencies reported with noncompliant systems in fiscal year 2003. Accurate and timely recording of financial information is essential for successful financial management. Timely recording of transactions facilitates accurate reporting in agencies’ financial reports and other management reports used to guide managerial decision making. In addition, having systems that record information in an accurate and timely manner is critical for key governmentwide initiatives, such as integrating budget and performance information. In contrast, untimely recording of transactions during the fiscal year can force agencies to undertake extensive manual financial statement preparation efforts at fiscal year-end that are susceptible to error and increase the risk of misstatements. For example, auditors for the U.S. Department of Agriculture (USDA) reported that the department had to make about 1,800 closing adjustments, totaling billions of dollars, to the financial statements at year-end. The auditors noted that most of the adjustments they reviewed were necessary; however, having to make numerous adjustments at year-end diminished the utility of the financial data in assisting managers in administering USDA programs and operations throughout the year.
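The trading-partner matching at the heart of the reconciliation problems described above can be shown in a short sketch. The sketch below, in Python, is hypothetical: the component names and amounts are invented, and a real reconciliation runs against agency and Treasury accounting records, but the core logic is the same comparison of what one side reports as receivable against what the other reports as payable.

    # Minimal sketch of an intra-departmental reconciliation check.
    # Component names and amounts (in millions) are hypothetical.

    def reconcile(receivables, payables):
        """Compare what components report as due from sister components
        (receivables) against what those components report as owed
        (payables); nonzero differences must be researched and adjusted
        before the department's statements are consolidated."""
        partners = set(receivables) | set(payables)
        return {p: receivables.get(p, 0) - payables.get(p, 0)
                for p in partners
                if receivables.get(p, 0) != payables.get(p, 0)}

    receivables = {"Component A": 10, "Component B": 7}
    payables = {"Component A": 250, "Component B": 332}

    for partner, diff in sorted(reconcile(receivables, payables).items()):
        print(f"{partner}: unreconciled difference of {diff} million")

At the governmentwide level the same comparison is made between trading-partner agencies, which is why unidentified partners, such as the $27 billion DOT could not attribute, leave the resulting differences impossible to resolve.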
In another case, the Department of Defense’s (DOD) auditors reported that the Defense Finance and Accounting Service in Indianapolis made $204.8 billion (excluding adjustments for intragovernmental transactions) in unsupported accounting entries to prepare the fiscal year 2004 Army General Fund financial statements. Because these adjustments were unsupported, it was difficult for auditors to assess the accuracy of the transactions and account balances; this problem was one of a number of financial statement material weaknesses that led DOD auditors to disclaim an opinion on DOD’s fiscal year 2004 financial statements. As shown in figure 3, auditors for 11 of the 16 agencies with noncompliant systems reported that the agencies’ systems did not comply with SGL requirements for fiscal year 2004, compared with 10 of the 17 agencies reported with noncompliant systems in fiscal year 2003. FFMIA specifically requires federal agencies to implement the SGL at the transaction level. Using the SGL promotes consistency in financial transaction processing and reporting by providing a uniform chart of accounts and pro forma transactions, and it provides a basis for comparison at the agency and governmentwide levels. The defined accounts and pro forma transactions standardize the accumulation of agency financial information as well as enhance financial control and support financial statement preparation and other external reporting. By not implementing the SGL, agencies may experience difficulties in providing consistent financial information across their components and functions. For example, auditors for the Department of Health and Human Services (HHS) found that approximately 1,550 nonstandard accounting entries with an absolute value of almost $30 billion were recorded in HHS’ Program Support Center’s CORE accounting system to compensate for noncompliance with the SGL. These nonstandard accounting entries were recorded to correct for misstatements, to record reclassifications, and to correct reported balances. The auditors noted that these amounts were significantly less than those in fiscal year 2003, when approximately 2,300 nonstandard accounting entries were recorded with an absolute value of about $41 billion. In another instance, auditors for the U.S. Agency for International Development (USAID) found that the agency’s overseas missions continue to use the Mission Accounting and Control System (MACS) as their primary financial system. MACS is a computer-based system that does not substantially comply with FFMIA’s SGL requirement since it lacks the SGL chart of accounts. Instead, the system uses transaction codes to record entries; therefore, USAID cannot ensure that transactions are posted properly and consistently from mission to mission. One of FFMIA’s requirements is that agencies’ financial management systems account for transactions in accordance with federal accounting standards; however, agencies continue to face significant challenges in implementing these standards. As shown in figure 3, auditors for 11 of the 16 agencies with noncompliant systems reported that these agencies had problems complying with one or more federal accounting standards for fiscal year 2004, compared with 11 of the 17 agencies reported with noncompliant systems in fiscal year 2003. Appendixes III and IV list the federal financial accounting standards and other guidance issued by the Federal Accounting Standards Advisory Board and its Accounting and Auditing Policy Committee, respectively.
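Before turning to the accounting standards results, the SGL requirement just discussed can be made concrete. The sketch below posts a transaction against a uniform chart of accounts and rejects entries that use unknown accounts or do not balance. It is a simplified illustration, not USSGL software: the account numbers follow the SGL numbering convention (1010 Fund Balance With Treasury, 2110 Accounts Payable, 6100 Operating Expenses), but the miniature chart and the sample entry are hypothetical.

    # Minimal sketch of transaction-level posting against a uniform
    # chart of accounts, in the spirit of the SGL. The chart and the
    # sample entry are hypothetical.

    CHART_OF_ACCOUNTS = {
        "1010": "Fund Balance With Treasury",
        "2110": "Accounts Payable",
        "6100": "Operating Expenses/Program Costs",
    }

    def post(entries):
        """Accept a transaction as (account, debit, credit) lines only if
        every account is in the standard chart and debits equal credits."""
        for account, debit, credit in entries:
            if account not in CHART_OF_ACCOUNTS:
                raise ValueError(f"account {account} is not in the standard chart")
        debits = sum(debit for _, debit, _ in entries)
        credits = sum(credit for _, _, credit in entries)
        if debits != credits:
            raise ValueError(f"unbalanced entry: debits {debits} != credits {credits}")
        return entries

    # Accrue a $500 expense: debit the expense account, credit the payable.
    post([("6100", 500, 0), ("2110", 0, 500)])

A system like USAID’s MACS, which records entries with its own transaction codes rather than a standard chart, has no equivalent of the first check, which is why postings can drift from mission to mission.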
Auditors expressly reported compliance problems with 11 specific accounting standards in fiscal year 2004. Of those standards, the 4 that were most troublesome for agencies were Statement of Federal Financial Accounting Standards (SFFAS) No. 1, Accounting for Selected Assets and Liabilities; SFFAS No. 4, Managerial Cost Accounting Concepts and Standards; SFFAS No. 6, Accounting for Property, Plant, and Equipment; and SFFAS No. 7, Accounting for Revenue and Other Financing Sources. In particular, SFFAS No. 4, which became effective in 1998, continues to be difficult for federal managers to fully implement. For example, as the auditor for the Department of the Treasury’s Internal Revenue Service (IRS), we reported that during fiscal year 2004 IRS continued to lack a cost accounting system capable of accurately and timely tracking and reporting the costs of its programs and projects. This condition also renders IRS unable to produce reliable cost-based performance information. IRS officials stated that they have the information necessary to determine the cost of various activities, such as conducting investigations; however, this information is widely distributed among a variety of information systems that are not integrated and therefore cannot share data. This makes the accumulation of cost information time-consuming and labor-intensive, and thus such information is not readily available for decision-making purposes. Accurate and timely cost management information is critical to transforming how government agencies manage the business of government and vital to developing meaningful links between budget, accounting, and performance. The requirement for managerial cost information has been in place since 1990 under the CFO Act and since October 1997 as a federal accounting standard. Similar system and process deficiencies also impede agency efforts to accurately and timely track and report the cost of their PP&E in accordance with SFFAS No. 6. For example, in its annual report on reliability, as required by section 1008 of the National Defense Authorization Act for Fiscal Year 2002, DOD acknowledged material deficiencies that impede its ability to reliably report the cost and depreciation of its general PP&E. Specifically, DOD disclosed a lack of (1) supporting documentation for general PP&E purchased many years ago, (2) integrated acquisition and financial systems, and (3) systems designed to capture the acquisition and modification costs and calculate depreciation. The consequences of not complying with this accounting standard are also similar: management lacks accurate and timely information to adequately safeguard, account for, and control these assets. Information security weaknesses are a major concern for federal agencies and the general public and one of the most frequently cited reasons for noncompliance with FFMIA. These control weaknesses place vast amounts of government assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. Accordingly, we have considered information security to be a governmentwide high-risk area since 1997. As shown in figure 3, auditors for 15 of the 16 agencies with noncompliant systems reported security weaknesses in information systems to be a problem, compared with all 17 of the agencies reported with noncompliant systems in fiscal year 2003.
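Returning briefly to the PP&E discussion above: the calculation that DOD’s systems reportedly cannot perform, capturing acquisition and modification costs and depreciating the result, is itself simple arithmetic; the difficulty lies in the missing documentation and nonintegrated feeder systems. A minimal sketch follows, with hypothetical asset data and using straight-line depreciation as one common systematic method.

    # Minimal sketch of cost capture plus depreciation for general PP&E.
    # The asset figures are hypothetical; straight-line is one common
    # systematic method, not the only one an agency might use.

    def annual_straight_line_depreciation(acquisition_cost, modification_costs,
                                          salvage_value, useful_life_years):
        """Depreciate the full capitalized cost (acquisition plus
        capitalized modifications) evenly over the asset's useful life."""
        capitalized_cost = acquisition_cost + sum(modification_costs)
        return (capitalized_cost - salvage_value) / useful_life_years

    # An asset acquired for $900,000, later modified for $100,000,
    # with no salvage value and a 20-year useful life.
    annual = annual_straight_line_depreciation(900_000, [100_000], 0, 20)
    print(f"annual depreciation: ${annual:,.0f}")  # annual depreciation: $50,000

Without records of the acquisition and modification amounts, none of these inputs can be supplied reliably, which is the deficiency DOD disclosed.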
Consistent with section 1008 of the National Defense Authorization Act for Fiscal Year 2002, which requires DOD to minimize the use of resources to develop, compile, report, and audit unreliable financial statements, DOD auditors relied upon management’s assertion regarding DOD’s lack of compliance with federal financial management systems requirements. Accordingly, the DOD auditors limited their audit work and did not report information security weaknesses in their disclaimer report on DOD’s fiscal year 2004 financial statements. However, DOD management reported that in addition to being unable to provide information that is reliable, timely, and accurate, the department’s information systems are potentially vulnerable to an information warfare attack; it reported this issue as a “significant deficiency” under the reporting requirements of the Federal Information Security Management Act of 2002. The DOD auditors advised us that they agree with DOD management’s acknowledgement of information security weaknesses. Therefore, we have included DOD in the summary of agencies with information security weaknesses. In addition, most of the agencies whose auditors provided negative assurance of substantial compliance with FFMIA still have computer security issues that need to be addressed by agency management. Unresolved information security weaknesses could adversely affect the ability of agencies to produce accurate data for decision making and financial reporting because such weaknesses could compromise the reliability and availability of data that are recorded in or transmitted by an agency’s financial management system. As a case in point, in fiscal year 2004, auditors for the Department of Veterans Affairs reported that program and financial data continue to be at risk due to weaknesses in (1) the implementation and enforcement of controls and oversight over access to information systems, (2) the segregation of key duties and responsibilities of employees, and (3) contingency planning. They concluded that these weaknesses placed sensitive information, including financial data and sensitive veteran medical and benefit information, at risk of inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, possibly without detection. As agencies move forward with initiatives to address FFMIA-related problems, it is important that they consider the numerous governmentwide initiatives under way to address long-standing financial management weaknesses. OMB continues to move forward on new initiatives to enhance financial management and provide results-oriented information in the federal government. Two notable developments in this area in fiscal year 2004 were the realignment of responsibilities formerly performed by JFMIP and its PMO and the development of financial management lines of business. Furthermore, the continuing leadership and support of Congress will be crucial to sustaining momentum in the reformation of financial management in the federal government. In a December 2004 memorandum, OMB announced a realignment of JFMIP’s responsibilities for financial management policy and oversight in the federal government.
JFMIP was originally formed under the authority of the Budget and Accounting Procedures Act of 1950 as a joint undertaking of the General Accounting Office, the Department of the Treasury, OMB, and the Office of Personnel Management (OPM), working cooperatively to improve financial management practices in the federal government. Leadership and program guidance were provided by the four principals of JFMIP—the Comptroller General of the United States, the Secretary of the Treasury, and the Directors of OMB and OPM. The PMO, managed by the Executive Director of JFMIP using funds provided by the CFO Council (CFOC), was established in 1999. The PMO was responsible for the testing and certification of commercial off-the-shelf (COTS) core financial systems for use by federal agencies and for coordinating the development and publication of functional requirements for financial management systems. On December 1, 2004, in an effort to eliminate duplicative roles and streamline financial management improvement efforts, the four principals agreed to realign JFMIP’s responsibilities for financial management policy and oversight. Specifically, under the announced realignment, the PMO will now report to the chair of a new CFOC committee—the Financial Systems Integration Committee (FSIC). Other JFMIP functions, such as issuing systems requirements, were assumed by OMB’s Office of Federal Financial Management (OFFM) and the CFOC. While JFMIP ceased to exist as a separate organization, the principals will continue to meet at their discretion consistent with the 1950 act. The newly established FSIC will be responsible for advising OFFM on systems requirements and overseeing the PMO, which will continue certifying core financial systems. The realignment recognizes that OMB and the agencies have responsibility for all facets of financial management systems; the work of the FSIC will be critical to the realignment’s success. In spring 2004, OMB launched task forces to conduct a governmentwide analysis of five lines of business supporting the PMA goal to expand electronic government. The goal of the Line of Business (LOB) initiative is to develop business-driven, common solutions for five specific lines of business that extend across the entire federal government. The five lines of business are financial management, human resources management, grants, federal health architecture, and case management. These lines of business share similar business requirements and processes. In the spring of 2005, OMB added the Information Technology Security LOB task force. OMB and designated agency LOB task forces plan to use enterprise architecture-based principles and best practices to identify common solutions for business processes, technology-based shared services, or both to be made available to government agencies. Driven from a business perspective rather than a technology focus, the solutions are expected to address distinct business improvements that enhance government’s performance and services for citizens.
The financial management LOB goals are to achieve or enhance process improvements and cost savings in the acquisition, development, implementation, and operation of financial management systems through shared services, joint procurements, consolidation, and other means; promote seamless data exchange between and among federal agencies; provide for the standardization of business processes and data elements; and strengthen internal controls through real-time integration of core financial and subsidiary systems. To achieve these goals, OMB and the associated agency task forces have focused on developing Centers of Excellence (COE). OMB officials stated that the purpose of developing COEs is to reduce the number of systems that each individual agency must support, promote standardization, and reduce the duplication of efforts. COEs can also create economies of scale by consolidating selected financial functions into a single agency or center. The economies of scale come from being able to use fewer staff to achieve the same results. For example, major software vendors often issue software patches daily that must be tested and installed. A single COE would be able to test and update multiple agencies’ systems rather than each agency performing the same update separately. Officials at OMB stated that the financial management LOB continues to evolve, with four agencies selected to become COEs through the fiscal year 2006 budgetary process. In addition, OMB led the development of a due diligence checklist to assess an agency’s capacity to be a COE. This checklist documents whether an agency can perform specific functions, has proper certification and accreditation, and uses a PMO-certified system. OMB plans to develop additional tools and guidance to facilitate the COE concept. For example, OMB is considering a service level agreement template to provide agencies with standard contract clauses and will require agencies to document and justify the competition and selection process. OMB officials stated that the policies and procedures established through the financial management LOB will help keep federal financial management systems current; improve business processes; and, with fewer systems in operation, facilitate vendor contracts. We have long supported and called for initiatives to standardize and streamline common financial systems, which, if done correctly, can not only reduce costs but also dramatically improve accountability. We have ongoing work to analyze the financial management LOB. As we have stated in our prior reports, DOD’s financial management and related business operations continue to cause substantial waste and inefficiency, have an adverse impact on mission performance, and result in a lack of adequate transparency and appropriate accountability across all major business areas. Of the 25 areas on GAO’s governmentwide high-risk list, 8 are DOD-specific program areas related to key business functions, and the department shares responsibility for 6 other high-risk areas that are governmentwide in scope. These problems preclude the department from producing reliable and timely data on its results of operations and accurately reporting on its trillions of dollars of assets and liabilities. Additionally, DOD’s stovepiped, duplicative, and nonintegrated systems environment contributes to these operational problems and costs the American taxpayers billions of dollars each year.
For fiscal year 2005, the department requested approximately $13 billion to operate, maintain, and modernize its reported 4,150 business systems. Overhauling the financial management and business operations of one of the largest and most complex organizations in the world represents a daunting challenge. In an effort to better manage DOD’s resources, the Secretary of Defense has appropriately placed a high priority on transforming key business processes to improve their efficiency and effectiveness in supporting the department’s military mission. The Business Management Modernization Program is the department’s business transformation initiative; it encompasses defense policies, processes, people, and systems that guide, perform, or support all aspects of business management—including development and implementation of the business enterprise architecture, or modernization blueprint. The Secretary of Defense has estimated that improving business operations of the department could save 5 percent of DOD’s annual budget, which equates to a savings of over $20 billion a year. Transformation of DOD’s business systems and operations is critical to the department’s ability to provide Congress and DOD management with accurate and timely information for use in decision making. Although the Secretary of Defense and several key agency officials have shown commitment to transformation, little tangible evidence of significant broad-based and sustainable improvements has been seen in DOD’s business operations. For DOD to successfully transform its business operations, it will need a comprehensive and integrated business transformation plan; people with the skills, responsibility, and authority to implement the plan; an effective process and related tools, such as a business enterprise architecture; and results-oriented performance measures that link institutional, unit, and individual personnel goals and expectations to promote accountability for results. The leadership and support demonstrated by Congress have been essential in the reformation of financial management in the federal government. As previously discussed, the legislative framework provided by the CFO Act and FFMIA, among others, established a solid foundation for stimulating the change needed to achieve sound financial systems management. For example, in November 2002, Congress enacted the Accountability of Tax Dollars Act to extend the financial statement audit requirements of the CFO Act to additional federal agencies. Then, in October 2004, Congress added DHS to the list of CFO Act agencies and required DHS to obtain an audit opinion on its internal controls. In addition, DHS will be subject to FFMIA for fiscal year 2005, and its auditors will be required to report any FFMIA-related systems deficiencies or weaknesses identified. Sustained congressional interest in these issues has been demonstrated by the number of hearings on federal financial management and reform held over the past several years. It is critical that the various appropriations, budget, authorizing, and oversight committees hold agency top management accountable for resolving these problems and that the committees continue to support improvement efforts. The continued attention by Congress to these issues is crucial in sustaining momentum for financial management reform in the federal government.
Toward this end, the Subcommittee on Government Management, Finance, and Accountability, House Committee on Government Reform, is currently examining the consolidation of existing federal financial management laws into legislation that would simplify, streamline, and enhance the laws governing agency financial management. These laws were primarily designed to increase financial accountability, enhance agency strategic focus, promote sound management through effective internal control, provide for effective information technology deployment, facilitate debt collection activities, and encourage better asset management. These are all interrelated management concepts, which the subcommittee intends to bring together under a single unified statute so that rules and regulations are clearly delineated for federal managers, thereby enhancing accountability across the federal government. Continuing problems with agencies’ financial systems make it difficult for agencies to produce reliable, useful, and timely financial information on an ongoing basis for day-to-day management. While the number of agencies receiving unqualified or “clean” opinions on their financial statements has increased since fiscal year 1997, the continued widespread noncompliance with FFMIA shows that agencies have a long way to go before having systems, processes, and controls able to routinely generate reliable, useful, and timely information. As shown by the FFMIA-related problems reported in agency audit reports, federal financial management systems are not currently able to provide federal managers with the financial data needed for effective day-to-day management of their programs or for efficient external reporting. We continue to be concerned that the full nature and scope of the problems have not yet been identified because most auditors have only provided negative assurance in their FFMIA reports. We believe the law requires auditors to provide positive assurance on FFMIA compliance. Therefore, we reaffirm our recommendation made in prior reports that OMB revise its current FFMIA guidance to require agency auditors to provide a statement of positive assurance when reporting an agency’s systems to be in substantial compliance. Such a determination will require auditors to perform a more thorough examination of their agencies’ systems. We also reaffirm our other prior recommendation for OMB to explore further clarification of the definition of “substantial compliance” in its FFMIA guidance to encourage consistent reporting among agency auditors. As we have stated in prior reports, the auditors we have interviewed expressed concerns about providing positive assurance when reporting on agency systems’ FFMIA compliance because of their belief that the meaning of substantial compliance needs to be clarified. The size and complexity of the federal government make modernizing and improving its financial management systems a formidable challenge, one that will require continued attention from the highest levels of government. We recognize that it will take time, investment, and sustained emphasis on correcting deficiencies to improve federal financial management systems to the level required by FFMIA. However, with concerted and coordinated effort, including attention from top agency management and the Congress, the federal government can make progress toward improving its financial management systems and thus achieve the goals of the CFO Act and provide accountability to the nation’s taxpayers.
In written comments (reprinted in app. VI) on a draft of this report, OMB generally agreed with our assessment that while federal agencies continue to make progress in addressing financial management systems weaknesses, many agencies still need to make improvements to produce the information needed to efficiently and effectively manage day-to-day operations. As in previous years, OMB did not see the necessity of our recommendation for agency auditors to provide a statement of positive assurance when reporting agency systems to be in substantial compliance with the requirements of FFMIA. While OMB commends DOL’s auditors for performing the additional level of audit work needed to provide positive assurance of compliance with FFMIA and encourages similar efforts at other agencies, OMB stated that requiring a statement of positive assurance for all agencies would prove only marginally useful. OMB stated that the framework of performance standards established under the PMA, as well as the ongoing efforts to update policy guidance, such as the internal control requirements in OMB Circular No. A-123, provides alternative mechanisms for evaluating FFMIA compliance. The PMA and Circular No. A-123 initiatives are two examples of current OMB efforts intended to complement FFMIA’s goal of creating the full range of information needed for day-to-day management. From OMB’s perspective, these efforts together with existing audit processes can provide an accurate assessment of substantial compliance, identify deficiencies, and suggest corrective actions. While we agree that the PMA and OMB Circular No. A-123 initiatives are helping to drive improvements, auditors assessing FFMIA compliance need to consider aspects of financial management systems that are not fully addressed through the current reporting structure. For example, in preparing the PMA scorecard assessments, OMB officials meet with agencies to discuss a number of financial management issues and have systems demonstrations. Our concern is that some of the information provided by this approach does not come under audit scrutiny and may not be reliable. Similarly, internal control assessments performed under Circular No. A-123 are based on management’s judgment and are subject to review by independent auditors only in limited circumstances. From our perspective, an opinion by an independent auditor on FFMIA compliance would confirm that an agency’s systems substantially met the requirements of FFMIA and provide additional confidence in the information generated as a result of the PMA and Circular No. A-123 initiatives. Finally, we continue to believe that a statement of positive assurance is a statutory requirement under the act. With regard to our prior recommendation, which we reaffirmed in this report, for revised guidance that clarifies the definition of substantial compliance, OMB said that the experience obtained from helping agencies implement the standards incorporated in the PMA will allow a further refinement of the FFMIA indicators associated with substantial compliance. Therefore, OMB agreed to consider clarifying the definition of “substantial compliance” in future policy and guidance updates. As we noted in our prior reports, auditors we interviewed expressed a need for clarification regarding the meaning of substantial compliance. OMB also provided additional oral comments, which we incorporated as appropriate.
We are sending copies of this report to the Chairman and Ranking Minority Member, Subcommittee on Federal Financial Management, Government Information, and International Security, Senate Committee on Homeland Security and Governmental Affairs, and to the Chairman and Ranking Minority Member, Subcommittee on Government Management, Finance, and Accountability, House Committee on Government Reform. We are also sending copies to the Director of the Office of Management and Budget, the Secretary of Homeland Security, the heads of the 23 CFO Act agencies in our review, and agency CFOs and IGs. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Sally E. Thompson, Director, Financial Management and Assurance, who may be reached at (202) 512-2600 or by e-mail at thompsons@gao.gov if you have any questions. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. The policies and standards prescribed for executive agencies to follow in developing, operating, evaluating, and reporting on financial management systems are defined in Office of Management and Budget (OMB) Circular No. A-127, Financial Management Systems. The components of an integrated financial management system include the core financial system, managerial cost accounting system, administrative systems, and certain programmatic systems. Administrative systems are those that are common to all federal agency operations, and programmatic systems are those needed to fulfill an agency’s mission. Circular No. A-127 refers to the series of publications entitled Federal Financial Management Systems Requirements, initially issued by the Joint Financial Management Improvement Program’s (JFMIP) Program Management Office (PMO), as the primary source of governmentwide requirements for financial management systems. Appendix II lists the federal financial management systems requirements published to date. Figure 4 is the current model that illustrates how these systems interrelate in an agency’s overall systems architecture. OMB Circular No. A-127 also requires agencies to purchase commercial off-the-shelf (COTS) software that has been tested and certified through the PMO software certification process when acquiring core financial systems. The PMO’s certification process, however, does not eliminate or significantly reduce the need for agencies to develop and conduct comprehensive testing efforts to ensure that the COTS software meets their requirements. Moreover, according to the PMO, core financial systems certification does not mean that agencies that install these packages will have financial management systems that are compliant with FFMIA. Many other factors can affect the capability of the systems to comply with FFMIA, including modifications made to the PMO-certified core financial management systems software and the validity and completeness of data from feeder systems. The Federal Accounting Standards Advisory Board (FASAB) promulgates federal accounting standards that agency Chief Financial Officers use in developing financial management systems and preparing financial statements.
FASAB develops the appropriate accounting standards after considering the financial and budgetary information needs of Congress, executive agencies, and other users of federal financial information and comments from the public. FASAB forwards the standards to the three sponsors—the Comptroller General, the Secretary of the Treasury, and the Director of OMB—for a 90-day review. If there are no objections during the review period, the standards are considered final, and FASAB publishes them on its Web site and in print. The American Institute of Certified Public Accountants has recognized the federal accounting standards promulgated by FASAB as being generally accepted accounting principles for the federal government. This recognition enhances the acceptability of the standards, which form the foundation for preparing consistent and meaningful financial statements both for individual agencies and the government as a whole. Currently, there are 29 Statements of Federal Financial Accounting Standards (SFFAS) and 4 Statements of Federal Financial Accounting Concepts (SFFAC). The concepts and standards are the basis for OMB’s guidance to agencies on the form and content of their financial statements and for the government’s consolidated financial statements. Appendix III lists the concepts, standards, interpretations, and technical bulletins, along with their respective effective dates. FASAB’s Accounting and Auditing Policy Committee (AAPC) assists in resolving issues related to the implementation of accounting standards. AAPC’s efforts result in guidance for preparers and auditors of federal financial statements in connection with implementation of accounting standards and the reporting and auditing requirements contained in OMB’s Bulletin No. 01-09, Form and Content of Agency Financial Statements (Sept. 25, 2001), and Bulletin No. 01-02, Audit Requirements for Federal Financial Statements (Oct. 16, 2000). To date, AAPC has issued six technical releases, which are listed in appendix IV along with their release dates. The SGL was established by an interagency task force under the direction of OMB and mandated for use by agencies in OMB and Treasury regulations in 1986. The SGL promotes consistency in financial transaction processing and reporting by providing a uniform chart of accounts and pro forma transactions used to standardize federal agencies’ financial information accumulation and processing throughout the year; enhance financial control; and support budget and external reporting, including financial statement preparation. The SGL is intended to improve data stewardship throughout the federal government, enabling consistent reporting at all levels within the agencies and providing comparable data and financial analysis governmentwide. Congress enacted legislation, 31 U.S.C. 3512(c), (d) (commonly referred to as the Federal Managers’ Financial Integrity Act of 1982 (FIA)), to strengthen internal controls and accounting systems throughout the federal government, among other purposes. Issued pursuant to FIA, the Comptroller General’s Standards for Internal Control in the Federal Government provides standards directed at helping agency managers implement effective internal control, an integral part of improving financial management systems. Internal control is a major part of managing an organization and comprises the plans, methods, and procedures used to meet missions, goals, and objectives.
In summary, internal control, which under OMB’s guidance for FIA is synonymous with management control, helps government program managers achieve desired results through effective stewardship of public resources. In December 2004, OMB revised Circular No. A-123, Management’s Responsibility for Internal Control, to strengthen the requirements for conducting management’s assessment of internal control over financial reporting. The circular emphasized management’s responsibility for establishing and maintaining internal control to achieve the objectives of effective and efficient operations, reliable financial reporting, and compliance with laws and regulations. Effective internal control also assists in managing the changes due to shifting environments and evolving demands and priorities. As programs change and agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal control to ensure that the control objectives are being achieved.

Statements of Federal Financial Accounting Concepts (SFFAC):
SFFAC No. 1, Objectives of Federal Financial Reporting
SFFAC No. 2, Entity and Display
SFFAC No. 3, Management’s Discussion and Analysis
SFFAC No. 4, Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government

Statements of Federal Financial Accounting Standards (SFFAS):
SFFAS No. 1, Accounting for Selected Assets and Liabilities
SFFAS No. 2, Accounting for Direct Loans and Loan Guarantees
SFFAS No. 3, Accounting for Inventory and Related Property
SFFAS No. 4, Managerial Cost Accounting Concepts and Standards
SFFAS No. 5, Accounting for Liabilities of the Federal Government
SFFAS No. 6, Accounting for Property, Plant, and Equipment
SFFAS No. 7, Accounting for Revenue and Other Financing Sources
SFFAS No. 8, Supplementary Stewardship Reporting
SFFAS No. 9, Deferral of the Effective Date of Managerial Cost Accounting Standards for the Federal Government in SFFAS No. 4
SFFAS No. 10, Accounting for Internal Use Software
SFFAS No. 11, Amendments to Accounting for Property, Plant, and Equipment—Definitional Changes
SFFAS No. 12, Recognition of Contingent Liabilities Arising from Litigation: An Amendment of SFFAS No. 5, Accounting for Liabilities of the Federal Government
SFFAS No. 13, Deferral of Paragraph 65-2—Material Revenue-Related Transactions Disclosures
SFFAS No. 14, Amendments to Deferred Maintenance Reporting
SFFAS No. 15, Management’s Discussion and Analysis
SFFAS No. 16, Amendments to Accounting for Property, Plant, and Equipment
SFFAS No. 17, Accounting for Social Insurance
SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees in SFFAS No. 2
SFFAS No. 19, Technical Amendments to Accounting Standards for Direct Loans and Loan Guarantees in SFFAS No. 2 (2003)
SFFAS No. 20, Elimination of Certain Disclosures Related to Tax Revenue Transactions by the Internal Revenue Service, Customs, and Others
SFFAS No. 21, Reporting Corrections of Errors and Changes in Accounting Principles
SFFAS No. 22, Change in Certain Requirements for Reconciling Obligations and Net Cost of Operations
SFFAS No. 23, Eliminating the Category National Defense Property, Plant, and Equipment
SFFAS No. 24, Selected Standards for the Consolidated Financial Report of the United States Government
SFFAS No. 25, Reclassification of Stewardship Responsibilities and Eliminating the Current Services Assessment (amended)
SFFAS No. 26, Presentation of Significant Assumptions for the Statement of Social Insurance: Amending SFFAS 25 (amended)
SFFAS No. 27, Identifying and Reporting Earmarked Funds
SFFAS No. 28, Deferral of the Effective Date of Reclassification of the Statement of Social Insurance: Amending SFFAS 25 and 26
SFFAS No. 29, Heritage Assets and Stewardship Land

Interpretations:
No. 2, Accounting for Treasury Judgment Fund Transactions
No. 3, Measurement Date for Pension and Retirement Health Care Liabilities
No. 4, Accounting for Pension Payments in Excess of Pension Expense
No. 5, Recognition by Recipient Entities of Receivable Nonexchange Revenue
No. 6, Accounting for Imputed Intra-departmental Costs

Technical bulletins:
TB 2000-1, Purpose and Scope of FASAB Technical Bulletins and Procedures for Issuance
TB 2002-1, Assigning to Component Entities Costs and Liabilities That Result From Legal Claims Against the Federal Government
TB 2002-2, Disclosures Required by Paragraph 79(g) of SFFAS 7
TB 2003-1, Certain Questions and Answers Related to the Homeland Security Act of 2002

AAPC technical releases (app. IV):
TR-1, Audit Legal Representation Letter Guidance
TR-2, Determining Probable and Reasonably Estimable for Environmental Liabilities in the Federal Government (March 15, 1998)
TR-3, Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act (July 31, 1999)
TR-4, Reporting on Non-Valued Seized and Forfeited Property
TR-5, Implementation Guidance on SFFAS No. 10: Accounting for Internal Use Software
TR-6, Preparing Estimates for Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act (Amendments to TR-3)

In addition to the contact named above, Kay L. Daly, Assistant Director; W. Stephen Lowrey; and Chanetta R. Reed made key contributions to this report.
The ability to produce the data needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers continues to be a challenge for most federal agencies. To help address this challenge, the Federal Financial Management Improvement Act of 1996 (FFMIA) requires the Chief Financial Officers (CFO) Act agencies to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) federal accounting standards, and (3) the U.S. Government Standard General Ledger (SGL). FFMIA also requires GAO to report annually on the implementation of the act. While most CFO Act agencies have obtained clean or unqualified audit opinions on their financial statements, the underlying financial systems remain a serious problem. Agencies still lack the capacity to create the full range of information needed for effective day-to-day management. In fiscal year 2004, auditors for 16 of the 23 CFO Act agencies reported that agencies' financial management systems failed to comply with FFMIA. Six primary types of problems related to agencies' systems were identified in the audit reports. These same types of problems have been consistently reported for agencies with noncompliant systems for a number of years. GAO views these problems with agency financial systems as a significant challenge to improving the management of the federal government. Auditors for six agencies provided negative assurance on systems' FFMIA compliance for fiscal year 2004. This means that nothing came to their attention to indicate that financial management systems did not meet FFMIA requirements. OMB's current reporting guidance calls for negative assurance; however, GAO continues to believe that this type of reporting is not sufficient under the act. In addition, negative assurance may give the false impression that the auditors are reporting that the agencies' systems are compliant. In contrast, auditors for the Department of Labor (DOL) provided positive assurance by reporting that DOL's financial management systems substantially complied with FFMIA requirements. In fiscal year 2005, DOL auditors plan to enhance their audit procedures to focus on the reliability and use of managerial cost information. GAO looks forward to other auditors adopting a similar reporting practice that adds more value. In addition, auditors have expressed concern about providing positive assurance because of the need to clarify the meaning of substantial compliance. OMB continues to move ahead on other initiatives to enhance financial management in the federal government. Moreover, the continuing leadership and support of Congress will be crucial in reforming financial management in the federal government.
The Forest Service, within the Department of Agriculture, manages 191 million acres of national forests and grasslands for multiple uses under a wide and complex set of laws and regulations. For fiscal year 1993, the Forest Service reported selling 4.5 billion board feet of timber from these lands for a total bid value of $774.9 million. Developing allowable sale quantities (ASQ) is part of a legislatively required process specified in the Forest and Rangeland Renewable Resources Planning Act (RPA) of 1974 (16 U.S.C. 1600-1614), as amended by the National Forest Management Act (NFMA) of 1976 (16 U.S.C. 1600-1614). RPA requires the Forest Service to develop long-range planning goals for activities on rangelands and in national forests, and NFMA directs the Forest Service to develop detailed management plans for national forests and to regulate timber harvests to ensure the protection of other resources. The Forest Service has supplemented this guidance with regulations, first issued in 1979 and revised in 1982, and with a manual and handbooks for forest-level use. (See apps. I and II for further discussion of these laws, regulations, and policy guidance.) The Forest Service also has management responsibilities that extend beyond timber production, including such other activities as protecting natural resources like air, water, soils, plants, and animals for current and future generations. The Multiple Use-Sustained Yield Act of 1960 (16 U.S.C. 528-531) gives the Forest Service authority to manage lands for multiple uses and to sustain in perpetuity the outputs of various renewable natural resources. In carrying out its responsibilities, the Forest Service must also comply with other requirements for identifying and considering the effects that activities may have on natural resources. For example, the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.) requires the preparation of environmental impact statements for major actions that may significantly affect the quality of the human environment. National forest management can be divided into three main processes—planning, budgeting, and (for timber resources) preparing timber sales. These processes are summarized below and explained further in appendix III. Forest Service officials use the guidance in federal laws and Forest Service regulations and policies to develop a forest-specific plan for managing lands and resources (forest plan) that explains how the various forest resources will be managed for the next 10 to 15 years. The planning process is complex, involving extensive surveys of forest resources, the use of computer models, the development of management alternatives, and substantial public participation. The process is also lengthy, generally taking 3 to 10 years to complete. Part of this process involves developing the ASQ, which is the Forest Service’s estimate of the maximum harvest consistent with sustaining many other uses of the forest. Although the ASQ covers the first 10 years of the forest plan, it is usually expressed as an annual average (i.e., one-tenth of the total ASQ). Timber sales in any year may fluctuate above or below the average annual ASQ as long as the cumulative sales for the 10-year period do not exceed the total ASQ—that is, the maximum amount to be sold over the 10-year period (expressed formally below). Each forest’s ASQ is affected by factors unique to that forest, such as the species of trees, the proportion of the acreage devoted to timber production (as compared with other uses), and the market demand for timber.
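The relationship between the decade-long ASQ and annual sales can be written compactly. The notation below is ours, not the Forest Service's; it simply restates the constraint just described:

\[
\text{average annual ASQ} = \frac{\text{ASQ}}{10},
\qquad
\sum_{t=1}^{10} S_t \le \text{ASQ},
\]

where \(S_t\) is the volume sold in year \(t\) of the plan. For example, the Mt. Hood National Forest's average annual ASQ of 189 million board feet (discussed below) implies a decade ceiling of 1,890 million board feet; sales may exceed 189 million board feet in a given year so long as the cumulative total stays within that ceiling.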
When the forest plan has been completed and put in place, forest officials monitor and evaluate the results so that the effects of implementing the plan can be measured, the measurements can be analyzed, and necessary changes, such as a change in the ASQ, can be made. Generally, 2 to 3 years before the fiscal year in which the funds will actually be spent, each of the Forest Service’s nine regions develops a budget request for its national forests. The budget requests are based partly on the overall objectives for each forest plan as well as guidance from the administration. These requests are then aggregated at the national level, where they are subject to review and change by Forest Service headquarters, the Department of Agriculture, the Office of Management and Budget, and the Congress. Yearly congressional appropriations are then passed down from Forest Service headquarters to the regions, and then from the regions to the individual forests. Preparing timber sales usually takes 3 to 8 years and consists of six steps, or “gates.” The early steps involve identifying the timber to be offered for sale and conducting environmental studies of the areas to be affected; the later steps involve advertising and selling the timber. Because timber is offered for sale from most forests each year, in any given year timber sales may be found at various steps in the process; some sales are at the beginning and others are at the last step before the timber is made available for harvest. Several factors contributed to bringing timber sales below average annual ASQs from fiscal years 1991 through 1993 at all five of the national forests we reviewed. At four of these five forests, timber sales also decreased over the 3-year period. (See app. IV for forest-by-forest totals.) For example, at the Mt. Hood National Forest, which had an average annual ASQ of 189 million board feet, ASQ-related timber sales were approximately 51 million board feet in 1991 and 38 million board feet in 1993. The Ouachita National Forest was the only forest whose timber sales were higher in 1993 than in 1991. Its average annual ASQ is approximately 147 million board feet, and it had ASQ-related timber sales of about 40 million board feet in 1991 and 131 million board feet in 1993. Factors contributing to differences between ASQs and timber sales at the five forests we reviewed included limitations in data and estimating techniques, the emergence of new forest management issues and changing priorities, and rising or unanticipated costs associated with preparing and administering timber sales. At four of the five forests, officials said the precision of the ASQ was affected by limitations in data and estimating techniques. To develop the ASQ, officials said they had used the best information available at the time and a variety of estimating and computer modeling techniques. However, they noted that these estimating and computer modeling techniques carry an inherent risk of imprecision. For example, estimates of timber volumes may be based on analysis of aerial photographs and sample tracts within a forest. More detailed, on-the-ground analysis may later reveal that actual timber volumes differ somewhat from the estimated quantities, as the following examples show: After estimating ASQ volumes for planning purposes, officials at the Deschutes National Forest discovered that they had overestimated the size of the timber inventory in timber harvest areas.
They had based their inventory on an average volume that might have been accurate for the forest as a whole but was not accurate within specific areas where sales were planned. To correct this weakness, they redesigned the inventory process and began implementing the changes in 1993. At the Chattahoochee-Oconee National Forest, officials said that they had identified limitations in their original estimates of the timber yield. Forest officials had included all potentially saleable trees of all species (the forest has about 40 different species of trees) in their estimates of the timber yield during the planning process. However, as they began to implement their forest plan, they found that buyers desired only some of the species. In addition, the ASQ included yields from some forest land—such as areas next to visually sensitive travelways—that could not be fully harvested. Forest officials acknowledged that including these possible yields lowered the accuracy of their ASQ estimate. To correct these problems, forest officials plan to adjust their yield estimates to include only timber with established markets and to develop a more precise way to identify acres available for harvest. Officials at the Gifford Pinchot National Forest said they believe their ASQ could have been based on an overestimate of the number of acres available for timber production. In later analyzing timber management areas, forest officials found that fewer acres were available for harvest than originally estimated. The forestwide estimates used to develop the ASQ did not consider some factors—such as wildlife habitat, sensitive plant species, or campground uses—later encountered in on-the-ground examination while preparing timber for sale. To improve the accuracy of their estimates, forest officials have proposed collecting more information before determining the number of acres available for timber production. The forest plan, which incorporates the ASQ, reflects the Forest Service’s determination at the time the plan is developed of how timber production and other uses of the forest will be managed over the next 10 to 15 years. After these decisions have been made and an ASQ has been established, however, new forest management issues and changing priorities often emerge that directly affect how the forest will be managed. These changes may also affect the amount of timber that can be sold. The most dramatic example of such changes for the forests we reviewed occurred in the Pacific Northwest Region. In mid-1990, when the forest plans containing the ASQs for the three Pacific Northwest forests were ready to be implemented, the Department of the Interior’s Fish and Wildlife Service announced its decision to list the northern spotted owl as a threatened species under the provisions of the Endangered Species Act. Much of the land inhabited by the spotted owl is managed by the Forest Service. Several environmental groups challenged the process used to implement spotted owl management, and on May 23, 1991, many timber harvests in the three forests were halted by a court injunction. Forest Service officials said this injunction and similar legal challenges were primarily responsible for the difference between ASQs and timber sales in all Pacific Northwest forests. Sharp declines in the volume of timber sold from the Gifford Pinchot National Forest illustrate the effects of challenges and the court injunction on timber sales. This forest had an average annual ASQ of 334 million board feet. 
In fiscal year 1991, the forest sold 110.2 million board feet of timber that was chargeable to the ASQ and was to be harvested outside the owl habitat. In fiscal year 1992, that total dropped to 19.8 million board feet, and in fiscal year 1993 it further declined to 14.8 million board feet. According to the forest’s monitoring report for 1993, “the shortfall continues to be the result of the owl controversy and recent court decisions.” While the Southern forests we reviewed were not affected by an event as sweeping as the spotted owl controversy, their harvests were likewise affected by events that reflected changes in the relative priorities assigned to timber sales and other uses of the forest. These changes generally did not result in court challenges but rather in appeals filed by individuals or groups during an administrative process established by the Forest Service to review challenges to its decisions on issues ranging from the size of a forest’s ASQ to aspects of a particular timber sale. Under this process, Forest Service personnel review and decide on the appeals. At the Chattahoochee-Oconee National Forest, for example, the majority of appeals challenged individual timber sales that were below cost or had been designed without proper environmental evaluations. According to a forest official, in fiscal year 1993 a total of 10 appeals challenged 8 proposed timber sales, and in fiscal year 1994 (through June 29), a total of 44 appeals challenged 22 proposed timber sales. The Forest Service is revising its policies to respond more effectively to changing priorities for uses of the nation’s forests. On June 4, 1992, the Chief of the Forest Service announced a new policy of multiple-use ecosystem management for the national forests and grasslands. Four of the five forests in our review are included in pilot projects proposed for fiscal year 1995 as tests of ecosystem management’s potential to better ensure the sustainable long-term use of natural resources. One project addresses common problems associated with air and water quality, conservation, biological diversity, and sustainable economic growth in the southern Appalachian highlands, a region that includes the Chattahoochee-Oconee forest. In an August 1994 report on ecosystem management, we concluded that such projects afford an opportunity to test this approach to land management. The three Pacific Northwest forests we reviewed are included in another ecosystem management pilot project that could affect the current process for developing ASQs. In response to the spotted owl controversy, the administration created an interagency team to develop alternatives that would “attain the greatest economic and social contribution from the forests of the region and meet the requirements of the applicable laws and regulations.” In April 1994, the interagency team produced a land management plan based on broad land areas, such as river basins and watersheds. Forest Service officials indicated that under the new plan, although an ASQ would still be developed in order to comply with the requirements of the National Forest Management Act of 1976, individual revised forest plans might also include a “probable sale quantity” to reflect the uncertainty associated with selling timber at the ASQ. For example, for the three Pacific Northwest forests we reviewed, the new land management plan identifies an average annual probable sale quantity of 157 million board feet, as compared with the existing average annual ASQ of 621 million board feet.
The difference is due primarily to the allocation of fewer acres for timber production. Forest Service officials cite the timing of the budget process, as well as new forest management issues and changing priorities, as contributing to the shortfall in the moneys available to prepare timber sales and administer harvests at ASQ levels. According to these officials, budget requests must be prepared 2 to 3 years before the funds are actually received, and emerging issues and changing priorities may render the original request insufficient, as in the following instances: At the Chattahoochee-Oconee National Forest, officials estimated that the costs per million board feet to prepare timber sales and administer harvests rose by approximately 36 percent between 1988 and 1993 when the Forest Service began to reduce its use of clearcutting and increase its use of other harvesting methods. These other harvesting methods, such as single-tree and group selection methods, require Forest Service personnel to mark each tree planned for harvest. Because this and other activities increase the cost and time associated with preparing each timber sale, available staff and funds cannot be spread over as many sales as originally planned. At the Mt. Hood National Forest, officials said that in recent years they had underestimated their costs to prepare timber sales and administer harvests when developing their annual budget requests. They noted that between fiscal years 1990 and 1991, preparation and administration costs rose by about 39 percent, and between fiscal years 1991 and 1992, these costs rose by an additional 147 percent. Factors contributing to these increases in costs included requirements for (1) conducting surveys of cultural and historical resources and of threatened and endangered species that took more time and resources than had been anticipated and (2) switching from clearcutting to other harvesting methods and shifting timber harvests out of owl habitat to comply with court injunctions. While preparation and administration costs increased by only 8 percent between fiscal years 1992 and 1993, forest officials believe that they will increase by another 51 percent between fiscal years 1993 and 1995 as the new Pacific Northwest forest plan is implemented. Given the uncertainties inherent in developing ASQs, shortfalls between ASQs and timber sales should be expected. An ASQ is, to some extent, imprecise because it is based on estimating techniques and forestwide data rather than on detailed, on-the-ground data from the timber sale area. Even more significantly, however, an ASQ represents a planning “snapshot” that can quickly become outdated as new forest management issues emerge and priorities change. As the value placed on timber production shifts toward other forest uses, ASQs established under earlier, somewhat different priorities may no longer reflect estimated sale quantities. Although forest planning allows ASQs to be updated as needed, the experience of the five forests we reviewed indicates that events may quickly overtake even revised ASQs. We discussed the facts and observations contained in a draft of this report with officials from Forest Service headquarters, including the Deputy Director, Budget Analyst, Staff Assistant, and Interdisciplinary Forester (Forest Plans) within the Timber Management Staff; the Planning Specialist within the Land Management Planning Staff; and the Interdisciplinary Analyst within the Program Planning and Development Staff. 
We also discussed the facts and observations with senior regional and forest officials from the two regions that we visited. In general, these officials agreed that the information was accurate, and we have incorporated changes that they suggested where appropriate. To determine why timber sales often fall short of ASQs, we met with Timber Management, Program Development and Budget, and Land Management Planning officials from Forest Service headquarters; the Pacific Northwest Regional Office in Portland, Oregon; and the Southern Regional Office in Atlanta, Georgia. We also met with Forest Service officials from the Chattahoochee-Oconee, Deschutes, Gifford Pinchot, Mt. Hood, and Ouachita National Forests. We selected these two regions because they had the largest timber sales for fiscal year 1993. We judgmentally selected the specific forests because of their geographical proximity to the regional offices. In addition, we selected the Ouachita National Forest because it had begun to practice ecosystem management before the Forest Service decided to implement this land management approach agencywide. We reviewed documentation provided by these officials, including forest plans, budget requests, and monitoring reports. We did not, however, evaluate the ASQ calculations made for the five forests but used the figures cited in the forest plans as a starting point for discussing how the figures were determined. We also discussed the budgeting process with officials from the Office of Management and Budget and the Department of Agriculture in Washington, D.C. We discussed forest planning procedures with representatives of the Congressional Research Service and reviewed additional documents on forest planning from the Office of Technology Assessment. In addition, to determine the role the Congress plays in the budget deliberations, we met with staff from both the House and Senate appropriations subcommittees who review the Forest Service’s budget requests. We conducted our review between August 1993 and August 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees, the Secretary of Agriculture, and the Chief of the Forest Service. We will make copies available to others upon request. This work was done under the direction of James K. Meissner, Associate Director for Timber Management Issues, who may be reached at (206) 287-4810. Other major contributors to this report are listed in appendix V. The major laws guiding national forest management, and their purposes, are as follows:
Creative Act of 1891: To provide the President with the authority to create forest reserves out of forested public domain lands.
Organic Administration Act of 1897: To identify purposes for creating forest reserves, including improving and protecting forests within reservations, protecting water supplies, and providing the public with a continuous supply of timber.
Knutson-Vandenberg Act of 1930: To provide a constant source of funding for the reforestation of harvested lands and to protect and improve nontimber resources in timber sale areas.
Multiple Use-Sustained Yield Act of 1960: To ensure the management of national forest resources and products for multiple uses and sustained yield.
Wilderness Act of 1964: To preserve natural areas of national forests for recreation and other uses. Prohibits timber harvesting in these areas.
Wild and Scenic Rivers Act of 1968: To preserve certain rivers and surrounding areas. Limits timber harvesting in the surrounding areas.
National Environmental Policy Act (NEPA): To require federal agencies to evaluate and document the impact on the environment of significant land management activities.
Endangered Species Act: To protect plant and animal species whose survival is in jeopardy.
Forest and Rangeland Renewable Resources Planning Act (RPA): To provide guidance for establishing long-range resource planning goals for the national forests.
National Forest Management Act (NFMA): To provide guidance for developing forest plans, regulating activities, and allowing public participation in planning.
Clean Water Act: To place limits on activities that would exceed federal or state water quality standards in order to enhance water quality.
The Forest and Rangeland Renewable Resources Planning Act (RPA) of 1974, as amended by the National Forest Management Act (NFMA) of 1976, provides the basic legislative guidance to the Forest Service for planning and managing resources in the national forests. RPA requires the Forest Service to develop long-range planning goals for activities on rangelands and in national forests, and NFMA directs the Forest Service to develop detailed management plans for national forests and to regulate timber harvests to ensure the protection of other resources. NFMA also required the Forest Service to develop regulations for implementing the planning goals established in RPA and NFMA. RPA makes resource management unit plans a statutory requirement through which the Forest Service will provide comprehensive information on the forest’s abilities to produce resources, such as fish and wildlife habitat, and goods and services, such as wood for lumber and opportunities for recreation. RPA directs the Forest Service to establish long-term resource planning goals for rangelands and forests. It requires the Forest Service to (1) assess the renewable resources on all lands every 10 years, (2) recommend a program for renewable resource activities on Forest Service lands every 5 years, and (3) annually report on the implementation of the recommended program and the accomplishments of the program relative to the assessment. RPA also requires the President to submit to the Congress, together with the assessment and the recommended program, a statement of policy that will guide the Forest Service’s budget requests for implementing the 5-year recommended program. In 1975, the Circuit Court of Appeals for the Fourth Circuit affirmed a 1973 district court decision constraining the Monongahela National Forest in West Virginia to sell only individually marked “dead, physiologically mature, and large growth” trees. The Forest Service decided to extend this decision to all nine national forests under the circuit court’s jurisdiction. The Forest Service estimated that the decision, which was based on the circuit court’s interpretation of the Organic Act of 1897, would reduce national forest timber harvests by 50 percent if applied nationwide. To preclude this reduction and to ensure the use of scientifically accepted forestry measures to sustain the yield of natural resources, the Congress enacted NFMA. All but 1 of the first 12 sections of NFMA amend RPA. For example, NFMA provides more specific guidance to the Secretary of Agriculture and the Forest Service for developing and implementing long-range planning goals for national forests. NFMA goals include improving the management of national forests and facilitating the public’s involvement in and congressional oversight of the process.
Specifically, NFMA requires that the Forest Service (1) develop integrated land and resource management plans (forest plans) for national forests using interdisciplinary teams, (2) regulate timber management activities in order to protect other resources, and (3) allow the public to participate in the development, review, and revision of the forest plans. In addition, NFMA requires that the Forest Service limit the sale of timber from each national forest to no more than an amount that could be harvested annually on a long-term sustained-yield basis. NFMA also requires the Secretary of Agriculture to develop and issue planning regulations to assist Forest Service regions and national forests in developing and maintaining forest plans. The regulations—completed in 1979 and revised in 1982—establish a process for developing, adopting, and revising forest plans. The regulations also provide guidance on the type of information to be included in the plans, such as multiple-use goals and objectives. In addition, they establish 14 principles to guide planning, including the following:
● Recognize that the national forests are ecosystems and their management for goods and services requires an awareness and consideration of the interrelationships among plants, animals, soil, water, air, and other environmental elements within such ecosystems.
● Protect and, where appropriate, improve the quality of renewable resources.
● Preserve important historic, cultural, and natural aspects of our national heritage.
● Provide for the safe use and enjoyment of the forest resources by the public.
● Use a systematic, interdisciplinary approach to ensure coordination and integration of planning activities for multiple-use management.
● Encourage early and frequent public participation.
● Respond to changing conditions of the land and other resources and to changing social and economic demands of the American people.
The regulations also define the allowable sale quantity (ASQ) as the amount of timber that could be planned for sale from the area of suitable land during the first period of the forest plan—one decade. Essentially, the ASQ is the amount of timber that could be sold and harvested during the first decade without exceeding the amount of timber that could be harvested on a long-term sustained-yield basis. The Forest Service developed and included guidance in its manual and handbooks to provide national forest personnel with further direction for implementing RPA and NFMA. The manual contains general policy rules for forest planning, while the handbooks provide detailed instructions for developing and implementing forest plan activities. For example, the Forest Service manual requires that national forests use FORPLAN, a Forest Service analytical model, as the primary analytical tool for assessing management activities during forest planning, while the resource inventory handbook provides standards, definitions, and specifications for conducting timber inventories. Each Forest Service region provides additional guidance to the forests under its jurisdiction to clarify general guidance from headquarters and to suggest ways of incorporating factors that are unique to the region and its forests. For example, the Pacific Northwest Region provides the forests with guidance on identifying spotted owl habitat within their boundaries and on ensuring that Columbia Basin forests have a consistent approach in developing habitat capability indicators for smolt (young salmon migrating to the sea).
National forest management can be divided into three main processes: (1) planning, (2) budgeting, and (3) for timber resources, preparing timber sales. In addition, forest managers monitor and evaluate the results of their activities and use this information to determine whether changes in their management plans are needed. Timber is one of many resources assessed in a forest’s land and resource management plan (forest plan). Besides timber, a forest plan includes such other resources as (1) outdoor recreational facilities (for example, campgrounds and hiking trails), (2) rangelands for providing forage to livestock and wildlife, and (3) wildlife and fish habitat for the various species dependent on the forest environment. The plan specifies how these multiple resources are to be managed so as to maximize net public benefits in an environmentally sound manner. To develop forest plans, the Forest Service follows a complicated process set forth in the laws, regulations, and policies discussed in appendixes I and II. A plan’s development rests mainly with an interdisciplinary team of biologists, foresters, soil specialists, and others. The forest supervisor—the person in direct charge of a forest—also provides considerable direction in determining what issues and concerns the team will address. In addition, public participation is sought at various stages throughout the process. For planning purposes, the ASQ is the maximum amount of timber that can be sold from the forest for the next 10 years on a sustained-yield basis. However, in day-to-day usage, the ASQ is usually expressed as an average annual ASQ—that is, as one-tenth of the total. Actual timber sales, however, can fluctuate above or below this average annual amount as long as the sales for the 10-year period do not exceed the total ASQ. To develop the ASQ, the interdisciplinary team determines such information as the species, age, size, number, and location of the trees in the forest. This information helps the team identify land capable of producing trees of commercial value within the period covered by the plan. Because Forest Service regulations require the team to have access to the best available inventory data in preparing the ASQ, the Forest Service may have to conduct special inventories or studies to assemble adequate information. Identifying land suitable for timber production is part of an overall analysis that considers timber production in relation to other forest resources. This analysis responds to the legal requirement to maximize net public benefits—that is, the long-term value to the nation of all outputs and positive effects (benefits) minus the associated inputs and negative effects (costs). As specified in Forest Service planning regulations, lands are not considered suitable for timber production if (1) less than 10 percent of the area has trees, (2) the area cannot begin regrowing trees within 5 years of the harvest, (3) irreversible damage will occur to the land or other resources if the trees are harvested, or (4) land has been withdrawn from timber production by an Act of Congress, the Secretary of Agriculture, or the Chief of the Forest Service (this screen is illustrated in the sketch below). Because maximizing net public benefits often involves making choices between various goals, the initial outcome of this overall analysis is a broad range of alternatives describing the different ways the forest can be managed to address and respond to major public issues, management concerns, and resource opportunities.
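The suitability determination just described is, at its core, a sequence of four exclusion tests. The following is a minimal sketch in Python of how such a screen might be expressed; the data structure and field names are hypothetical illustrations, not the Forest Service's actual planning tools, and real determinations rest on FORPLAN analyses and professional judgment.

    from dataclasses import dataclass

    @dataclass
    class LandUnit:
        tree_cover_pct: float      # percent of the area occupied by forest trees
        restock_years: int         # years needed to begin regrowing trees after harvest
        irreversible_damage: bool  # harvest would irreversibly damage land or resources
        withdrawn: bool            # withdrawn by Congress, the Secretary, or the Chief

    def suitable_for_timber(unit: LandUnit) -> bool:
        """Apply the four exclusion tests from the planning regulations."""
        if unit.tree_cover_pct < 10:    # (1) less than 10 percent of the area has trees
            return False
        if unit.restock_years > 5:      # (2) cannot begin regrowing within 5 years
            return False
        if unit.irreversible_damage:    # (3) harvest would cause irreversible damage
            return False
        if unit.withdrawn:              # (4) withdrawn from timber production
            return False
        return True

    # Example: a unit with 40 percent tree cover that restocks in 3 years,
    # with no irreversible damage and no withdrawal, passes the screen.
    print(suitable_for_timber(LandUnit(40.0, 3, False, False)))  # True

In practice each test is itself the product of extensive analysis; the sketch only shows how the four regulatory conditions combine.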
The primary purpose in developing alternatives is to provide an adequate basis for identifying the alternative that comes nearest to maximizing net public benefits. Consistent with the planning regulations, the alternatives list (1) the multiple-use goals and objectives that describe the desired future condition of the forest, (2) the goods and services expected to be produced, (3) the standards and guidelines for managing resources, and (4) the conditions and uses that result from the planned activities, such as timber sales. As part of its discussion of land management objectives, each alternative includes an ASQ. Each alternative specifies a particular emphasis, such as protecting wildlife habitat or promoting recreation, and each alternative may have a different ASQ. For example, an alternative that emphasizes wilderness protection will have a lower ASQ than an alternative that emphasizes timber production. The ASQ for each alternative is calculated using a forest planning model called FORPLAN. The model helps analyze such factors as the forest’s ability to supply goods and services in response to society’s demands, as well as each land management alternative’s effects, such as present net value, social and economic impacts, and outputs of goods and services. The team supplements the FORPLAN results, as needed, with input from forestry experts and from the public. The planning process culminates in the selection of an alternative for implementation. The team estimates and compares the physical, biological, economic, and social effects of implementing each alternative. The team looks at such things as the expected outputs for the planning periods, the direct and indirect benefits and costs, and the resource trade-offs and opportunity costs associated with achieving the objectives. The team then makes recommendations to the forest supervisor, who reviews the recommendations and forwards a preferred alternative to the regional forester, who is in charge of all of the forest supervisors in the Forest Service region. Once the regional forester approves the preferred alternative, the forest plan is completed, and the ASQ is established for the next 10 years. Although this process has clearly defined requirements, it is also open-ended in that the ASQ as well as other elements of the forest plan can be changed at any time during the 10-year period if the forest supervisor determines that a change is necessary. Changes are made through amendments or revisions to the forest plan to accommodate such things as shifts in land management policy or other significant changes. Before forest officials develop their budget requests, they receive written instructions from Forest Service headquarters on what to include in their requests. These instructions communicate the agency’s priorities in light of such factors as the administration’s guidance on the agency’s budget targets. The administration’s guidance can be as specific as a letter from the President or as general as a forecasted budget total for the agency. The instructions are also formulated with input from regional foresters, who recommend to the Chief of the Forest Service which program goals should be emphasized—for example, ecosystem management or the operation and maintenance of recreational facilities. Regional foresters also identify levels of data to be collected and, until their elimination in fiscal year 1996, identified specific resource targets.
After receiving these instructions, forest officials develop their budget requests. The budget process actually begins 2 to 3 years before the fiscal year in which the funds will be spent. For example, the process for developing a forest’s fiscal year 1995 budget request probably began in fiscal year 1993 or earlier. Forest officials also develop their requests as a range of funding alternatives in accordance with headquarters guidance. For example, fiscal year 1995 budget submissions from Pacific Northwest forests included three funding levels: (1) a base level equal to the fiscal year 1992 appropriation, adjusted for inflation; (2) a reduced level, 5 percent lower than the base level; and (3) an increased level, 20 percent higher than the base level. (These levels are restated algebraically below.) Budgets prepared for fiscal years up to 1995 also included a funding level based on the amount the forest supervisor believed would be necessary to implement the forest plan’s objectives. The budget request for each forest is subject to several levels of internal Forest Service review. The request is first forwarded to the regional office, where it is reviewed for conformity with budget instructions and regional priorities. The regional office makes any changes it deems necessary, consolidates the request for the forest with those for other forests in the region, and adds the regional office’s own estimated costs for supporting the forests and implementing the regional office’s own actions and program initiatives. The completed request, which displays the request for each forest as well as the aggregated numbers, is forwarded to headquarters. There, a similar review of regional requests is conducted. The regional budgets approved by headquarters are aggregated, and headquarters adds the costs it expects to incur in carrying out its administrative and monitoring activities and in initiating any national programs. This process results in an overall Forest Service request. This request may be changed by the Department of Agriculture (the Forest Service’s parent agency), the Office of Management and Budget, or the Congress through the appropriations process. However, budget reviewers at these levels do not have forest-level data to determine the funds needed to attain the goals for the individual forests; instead they review overall agency goals. For example, according to an official from the Department of Agriculture, the department considers such things as the number of Forest Service employees, the agency’s programs, and national goals like implementing ecosystem management in the Pacific Northwest. According to an official from the Office of Management and Budget, that office considers whether, in areas such as timber production, the budget reflects policies that are consistent with the administration’s broader policies and objectives. The Office of Management and Budget also reviews the cost-effectiveness of the Forest Service’s production of timber for sale by comparing projected cost estimates with the most recent actual costs. At the congressional level, the administration’s request is subject to change in the committee process and in floor debate. Once a funding level for the Forest Service is approved, the appropriations information is then passed in reverse, from the Congress down to headquarters, along with congressional directives specifying how some of the funds will be spent. Headquarters divides and allocates the funds to the regions, and, in turn, each region allocates funds to each forest, usually well into the fiscal year.
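Restating the three Pacific Northwest funding levels in terms of the inflation-adjusted base level \(B\) (our notation, not the Forest Service's):

\[
\text{reduced} = 0.95B, \qquad \text{base} = B, \qquad \text{increased} = 1.20B.
\]

A forest whose base level was, say, $10 million would thus submit alternatives of $9.5 million and $12 million alongside it.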
Until the actual funding is received, forests use the region’s estimated appropriation level, together with the forest plan’s priorities and historical trends, as a base. Before fiscal year 1993, in providing funds for preparing and administering timber sales, the Congress also specified the volume of timber it expected the Forest Service to offer for sale. Now, the expected volume is based on each forest’s ability to sell and harvest timber. Regulations require that each forest plan contain a 10-year timber sale schedule identifying the quantity of timber planned for sale from an area of suitable forest land in order to attain the ASQ. Individual timber sales are prepared using a six-step process, referred to as the timber sale gate system. The six gates are summarized below; a short sketch after this discussion restates them with their approximate timing.
Gate 1: The timber the forest intends to sell is identified, and a position statement is developed setting forth the purpose and reasons for the timber sale.
Gate 2: For continuing sales, timber sale design alternatives are developed, a site-specific environmental and economic analysis is completed for the proposed sale, and the approving official decides whether to proceed with the proposed sale.
Gate 3: The sale area is physically marked, and data are collected to help prepare the timber appraisal, contract, offering, and sale area improvement plan.
Gate 4: The timber is appraised and advertised, and a sample contract is prepared.
Gate 5: Bids by potential buyers are reviewed, and an auction is held if required.
Gate 6: The contract is signed by both the timber purchaser and the Forest Service.
The entire gate process for selling timber normally takes 3 to 8 years, depending on the size, location, and complexity of the sale; access to the area; and the design of the transportation system. Basic decisions about whether to continue the sale occur both at gate 1 and gate 2. Gate 1 generally occurs in the first year; gate 2 usually occurs between the second and fifth year of sales that continue beyond gate 1. Public comments are actively sought by the Forest Service throughout gates 1 and 2. Comment after a decision has been made comes through the administrative appeal system, once a decision notice has been signed by the approving official at gate 2. According to a forest official, administrative appeals or lawsuits can add 4 months to 4 years to the entire process. Gate 3 usually occurs during the third to eighth year of the sale, depending on the complexity of the sale. The remaining gates generally take place during the last year of the sale process. Once the timber contract is awarded in gate 6, the timber purchaser prepares the site to harvest the timber—a process that can take 3 to 5 years to complete. Timber management is not completed when the timber is sold. Forest officials track the results of their planning and timber management activities so that the effects of implementing the plan can be measured, the measurements can be analyzed, and necessary changes can be made. Within the Forest Service, forest supervisors use monitoring information—as well as Forest Service reports and special studies or litigation and appeal results—to evaluate whether the implementation process has achieved the forest plan’s objectives. If the evaluation indicates that the implementation process has failed to achieve the plan’s objectives or if new information—such as a decrease in wildlife habitat—indicates that the plan’s objectives should be revised, then the forest supervisor may amend or revise the forest plan.
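To keep the gate sequence and the approximate timing given above in one place, here is a small illustrative Python sketch. It is only a restatement of the report's description, not an actual Forest Service system; the gate summaries are paraphrased.

    # The six timber sale gates and their approximate timing, per the report.
    GATES = [
        ("Gate 1", "Identify timber to sell; develop position statement", "year 1"),
        ("Gate 2", "Design alternatives; environmental and economic analysis; decision to proceed", "years 2-5"),
        ("Gate 3", "Mark sale area; collect appraisal, contract, and offering data", "years 3-8"),
        ("Gate 4", "Appraise and advertise timber; prepare sample contract", "final year"),
        ("Gate 5", "Review bids; hold auction if required", "final year"),
        ("Gate 6", "Purchaser and Forest Service sign contract (award)", "final year"),
    ]

    for gate, step, timing in GATES:
        print(f"{gate} ({timing}): {step}")

Administrative appeals or lawsuits after the gate 2 decision can add 4 months to 4 years anywhere in this pipeline, which is one reason sales in a given year sit at widely different gates.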
If the forest supervisor decides that an event—such as a decrease in the forest’s ability to produce the ASQ—is significant, then forest officials must follow the same procedure as is required to develop and approve a forest plan. If the event is insignificant—such as the acquisition of additional forest land—then such an extensive effort is not required and the amendment can be implemented after the public has been properly notified and NEPA procedures have been satisfactorily completed. NFMA requires that a forest plan be revised at least every 15 years; however, the plan can be revised at any time. A forest supervisor can request a plan’s revision when forest conditions or demands have changed significantly or when changes in RPA policies, goals, and objectives significantly affect the forest’s programs. Revisions have to be in accordance with the requirements for developing and approving a forest plan, through the completion of the entire forest plan process, and must be approved by the regional and headquarters offices. Table IV.1 shows the volume of timber sold (not including sales of forest products such as Christmas trees and firewood) and the average annual ASQ for the two Southern Region forests we reviewed. These two forests implemented their ASQs in 1986 and 1987. Timber sales were below average annual ASQs in all years since the ASQs were implemented except (for the Ouachita National Forest) in fiscal years 1987 and 1988. (Table IV.1: Comparison of Average Annual ASQ and ASQ-Related Timber Sale Volumes for Southern Region Forests in GAO’s Review. Table notes: fiscal year in which the ASQ was implemented; not applicable because the ASQ was not implemented until 1987.) Table IV.2 shows the volume of timber sold (not including sales of forest products such as Christmas trees and firewood) and the average annual ASQ for the three Pacific Northwest Region forests we reviewed. These forests implemented their ASQs in 1991. Timber sales were below average annual ASQs in all years since the ASQs were implemented. (Table IV.2: Volume in millions of board feet for the Deschutes (1991), Gifford Pinchot (1991), and Mt. Hood (1991) National Forests.) The following glossary defines key terms used in this report.
Allowable sale quantity (ASQ): The maximum volume of timber that may be sold on a sustained-yield basis from the area of suitable land covered by the forest plan for a time period specified by the plan. This volume is usually expressed on an annual basis as the “average annual allowable sale quantity.”
Board foot: A standard measure of timber equal to the amount of wood in an unfinished board 1 inch thick, 12 inches long, and 12 inches wide.
Clearcutting: A harvesting method that involves removing all trees from a timber harvest site at one time.
Ecosystem management: A broader approach to managing the nation’s lands and natural resources, which recognizes that plant and animal communities are interdependent and interact with their physical environment (soil, water, and air) to form distinct ecological units called ecosystems that span federal and nonfederal lands.
Endangered species: Any species of animal or plant as defined by the Endangered Species Act that is in danger of extinction throughout all or a significant portion of its range.
Forest land: Land at least 10 percent occupied by forest trees of any size or formerly having had such tree cover and not currently developed for nonforest use.
Forest plan: A land management plan designed and adopted to guide forest management activities on a national forest.
Group selection: A method of harvesting timber in which small groups of trees are removed from an area annually or periodically.
Interdisciplinary team: A group of people trained in different scientific disciplines assembled to solve a problem or perform a task. The team is assembled out of recognition that no one discipline can provide the broad background needed to adequately solve the complex problem.
Multiple use: The management of the various renewable resources of the national forest system to ensure their use in a combination that will best meet the needs of the public.
Probable sale quantity: A best assessment of the average amount of timber likely to be available for sale annually in a planning area over the next 10 years.
Renewable resource: A resource that may be used indefinitely if the rate of use does not exceed the resource’s ability to renew the supply.
Sale schedule: The quantity of timber planned for sale, by time period, from an area of suitable land covered by a forest plan. The first period, usually a decade, provides the allowable sale quantity.
Single-tree selection: The harvesting of selected individual trees of all sizes.
Suitability: The appropriateness of applying certain resource management practices to a particular area of land, as determined by an analysis of the economic and environmental consequences and of the alternative uses forgone.
Sustained yield: The volume of timber that a forest can produce continuously from a given intensity of management.
Threatened species: Any species of animal or plant as defined by the Endangered Species Act that is likely to become an endangered species throughout all or a significant portion of its range within the foreseeable future.
Timber harvesting: Administering sale or use conditions, monitoring effects, and harvesting and removing forest products.
Timber inventory: A listing of the location, quantity, condition, and growth of trees on forest lands.
Timber sale preparation: Preparing and offering timber for sale and awarding a sale.
Timber yield: The volume of timber expected to be produced under a certain set of conditions.
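One conversion implicit in the board foot definition above (our arithmetic, offered only for scale):

\[
1\ \text{board foot} = 1\ \text{in} \times 12\ \text{in} \times 12\ \text{in} = 144\ \text{in}^3 = \tfrac{1}{12}\ \text{ft}^3,
\]

so the 4.5 billion board feet the Forest Service reported selling in fiscal year 1993 corresponds to roughly \(4.5 \times 10^9 / 12 = 3.75 \times 10^8\), or 375 million, cubic feet of nominal lumber volume.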
Pursuant to a congressional request, GAO provided information on timber sales in five national forests between 1991 and 1993, focusing on: (1) whether the Forest Service met its allowable sale quantity (ASQ) for the five forests; and (2) why the quantity of timber sold from the national forests was sometimes substantially below ASQ. GAO found that: (1) timber sales for each of the 5 forests reviewed were significantly below the average ASQ between 1991 and 1993; (2) factors contributing to the Forest Service's inability to meet ASQ included the lack of adequate data and estimating techniques on which to base ASQs, the emergence of new and changing forest management priorities, and rising or unanticipated costs associated with preparing and administering timber sales; (3) forest officials at one of the five forests overestimated the size of the timber inventory and improperly based the inventory on average volumes rather than on the specific parts of the forest where timber sales were being prepared; and (4) ASQs were reduced in Pacific Northwest forests after the northern spotted owl was listed as a threatened species and much of the proposed harvest areas were set aside for its habitat.
DHS is the lead department involved in securing our nation’s homeland. Its mission includes, among other things, leading the unified national effort to secure the United States, preventing and deterring terrorist attacks, and protecting against and responding to threats and hazards to the nation. As part of its mission and as required by the Homeland Security Act of 2002, the department is also responsible for coordinating efforts across all levels of government and throughout the nation, including with federal, state, tribal, local, and private sector homeland security resources. As we have previously reported, DHS relies extensively on information technology (IT), such as networks and associated system applications, to carry out its mission. Specifically, we recently reported that the department identified 11 major networks it uses to support its homeland security functions, including sharing information with state and local governments. Examples of such DHS networks include the Homeland Secure Data Network, the Immigration and Customs Enforcement Network, and the Customs and Border Protection Network. In addition, the department has deployed the Homeland Security Information Network (HSIN), a homeland security information-sharing application that operates on the public Internet. As shown in table 1, of the 11 networks, 1 is categorized as Top Secret, 1 is Secret, 8 are Sensitive but Unclassified, and 1 is unclassified. HSIN is considered Sensitive but Unclassified. As the table shows, some of these networks are used solely within DHS, while others are also used by other federal agencies, as well as state and local governments. In addition, the total cost to develop, operate, and maintain these networks and HSIN in fiscal years 2005 and 2006, as reported by DHS, was $611.8 million; the networks accounted for the vast majority of this cost, $579.4 million. DHS considers HSIN to be its primary communication application for transmitting sensitive but unclassified information. According to DHS, HSIN is an encrypted, unclassified, Web-based communications application that serves as DHS’s primary nationwide information-sharing and collaboration tool. It is intended to offer both real-time chat and instant messaging capability, as well as a document library that contains reports from multiple federal, state, and local sources. Available through the application are suspicious incident and pre-incident information and analysis of terrorist threats, tactics, and weapons. The application is managed within DHS’s Office of Operations Coordination. HSIN includes over 35 communities of interest, such as emergency management, law enforcement, counterterrorism, individual states, and private sector communities. Each community of interest has Web pages that are tailored for the community and contain general and community-specific news articles, links, and contact information. The community Web pages also provide access to other resources, such as the following:
● Document library. Users can search the entire document library within the communities they have access to.
● Discussion threads. HSIN has a discussion thread (or bulletin board) feature that allows users to post information that other users should know about and post requests for information that other users might have. Community administrators can also post and track tasks assigned to users during an incident.
● Chat tool. HSIN’s chat tool, known as Jabber, is similar to other instant messaging and chat tools—with the addition of security.
Users can customize lists of their coworkers and send messages individually or set up chat rooms for more users. Other features include chat logs (which allow users to review conversations), timestamps, and user profiles. State and local governments have similar IT initiatives to carry out their homeland security missions, including sharing information. A key state and local initiative is the Regional Information Sharing Systems (RISS) program. The RISS program helps state and local jurisdictions to, among other things, share information in support of their homeland security missions. This nationwide program, operated and managed by state and local officials, was established in 1974 to address crime that operates across jurisdictional lines. The program consists of six regional information analysis centers that serve as regional hubs across the country. These centers offer services to RISS members in their regions, including information sharing and research, analytical products, case investigation support, funding, equipment loans, and training. Funding for the RISS program is administered through a grant from the Department of Justice. As part of its information-sharing efforts, the RISS program operates two key initiatives, among others: the RISS Secure Intranet (RISSNET) and the Automated Trusted Information Exchange (RISS ATIX).
● Created in 1996, RISSNET is intended as a secure network serving member law enforcement agencies throughout the United States and other countries. Through this network, RISS offers services such as secure e-mail, document libraries, intelligence databases, Web pages, bulletin boards, and a chat tool.
● RISS ATIX offers services similar to those offered by RISSNET to agencies beyond the law enforcement community, including executives and officials from governmental and nongovernmental agencies and organizations that have public safety responsibilities. RISS ATIX is partitioned into 39 communities of interest, such as critical infrastructure, emergency management, public health, and government officials. Members of each community of interest contribute information to be made available within each community.
According to RISS officials, the RISS ATIX application was developed in response to the events of September 11, 2001; it was initiated in 2002 as an application to provide tools for information sharing and collaboration among public safety stakeholders, such as first responders and schools. As of July 2006, RISS ATIX supported 1,922 users beyond the traditional users of RISSNET. RISS ATIX uses the technology of RISSNET to offer services through its Web pages. The pages are tailored for each community of interest and contain community-specific news articles, links, and contact information. The pages also provide access to the following features:
● Document library. Participants can store and search relevant documents within their community of interest.
● Bulletin board. The RISS ATIX bulletin board allows users to post timely threat information in discussion forums and to view and respond to posted information. Users can post documents, images, and information related to terrorism and homeland security, as well as receive DHS information, advisories, and warnings. According to RISS officials, the bulletin boards are monitored by a RISS moderator to relay any information that might be useful for other communities of interest.
● Chat tool.
ATIXLive is an online, real-time, collaborative communications tool for the exchange of information by community members. Through this tool, users can post timely threat information and view and respond to messages posted.
● Secure e-mail. RISS ATIX participants have access to e-mail that can be used to provide alerts and related information. According to RISS, this is done in a secure environment.
The need to improve information sharing as part of a national effort to improve homeland security and preparedness has been widely recognized, not only to improve our ability to anticipate and respond to threats and emergencies, but also to avoid unnecessary expenditure of scarce resources. In January 2005, and more recently in January 2007, we identified establishing appropriate and effective information-sharing mechanisms to improve homeland security as a high-risk area. The Office of Management and Budget (OMB) has also issued guidance that stresses the importance of information sharing and avoiding duplication of effort. Nonetheless, although this area has received increased attention, the federal government faces formidable challenges in sharing information among stakeholders in an appropriate and timely manner. As we concluded in October 2005, agencies can help address these challenges by adopting and implementing key practices related to OMB’s guidance to improve collaboration, such as establishing joint strategies, addressing needs by leveraging resources, and developing compatible policies, procedures, and other means to operate across agency boundaries. Based on our research and experience, these practices are also relevant for collaboration between federal agencies and other levels of government (e.g., state, local). Until these coordination and collaboration practices are implemented, agencies face the risk that effective information sharing will not occur. Congress and the Administration have made several efforts to address the challenges associated with information sharing. In particular, as we reported in March 2006, the President initiated an effort to establish an Information Sharing Environment that is to combine policies, procedures, and networks and other technologies that link people, systems, and information among all appropriate federal, state, local, and tribal entities and the private sector. In November 2006, in response to congressional direction, the Administration issued a plan for implementing this environment and described actions that the federal government intends—in coordination with state, local, tribal, private sector, and foreign partners—to carry out over the next 3 years. DHS did not fully adhere to the previously mentioned key practices in coordinating its efforts on HSIN with key state and local information-sharing initiatives. The department’s limited use of these practices is attributable to a number of factors: in particular, after the events of September 11, 2001, the department expedited its schedule to deploy HSIN capabilities, and in doing so, it did not develop an inventory of key state and local information initiatives. Until the department fully implements key coordination and collaboration practices and guidance, it faces, among other things, the risk that effective information sharing is not occurring. DHS has efforts planned and under way to improve coordination and collaboration, including implementing the recommendations in our recent report.
In developing HSIN, DHS did not fully adhere to the practices related to OMB’s guidance. First, although DHS officials met with RISS program officials to discuss exchanging terrorism-related documents, joint strategies for meeting mutual needs by leveraging resources have not been fully developed. DHS did not engage the RISS program to determine how resources could be leveraged to meet mutual needs. According to RISS program officials, they met with DHS twice (on September 25, 2003, and January 7, 2004) to demonstrate that their RISS ATIX application could be used by DHS for sharing homeland security information. However, communication from DHS on this topic stopped after these meetings, without explanation. According to DHS officials, they did not remember the meetings, which they attributed to the departure of the staff who had attended.

In addition, although DHS initially pursued a limited strategy of exchanging selected terrorism-related documents with the RISS program, the strategy was impeded by technical issues and by differences in what each organization considers to be terrorism information. For example, the exchange of documents between HSIN and the RISS program stopped on August 1, 2006, because of technical problems with HSIN’s upgrade to a new infrastructure. As of May 3, 2007, the exchange of terrorism-related documents had not yet resumed, according to HSIN’s program manager. This official also stated that the program is currently working to fix the issue with the goal of having it resolved by June 2007.

Finally, DHS has yet to fully develop coordination policies, procedures, and other means to operate across agency boundaries with the RISS program and to leverage its available technological resources. Although an operating agreement was established to govern the exchange of terrorism-related documents, according to RISS officials, it did not cover the full range of information available through the RISS program.

The extent of DHS’s adherence to key practices (and the resulting limited coordination) is attributable to DHS’s expedited schedule to deploy an information-sharing application that could be used across the federal government in the wake of the September 11 attacks; in its haste, DHS did not develop a complete inventory of key state and local information initiatives. According to DHS officials, they still do not have a complete inventory of key state and local information-sharing initiatives. DHS’s Office of Inspector General also reported that DHS developed HSIN in a rapid and ad hoc manner and, among other things, did not adequately identify existing federal, state, and local resources, such as RISSNET, that it could have leveraged.

Further, DHS did not fully understand the RISS program. Specifically, DHS officials did not acknowledge the RISS program as a state and local-based program with which to partner, but instead considered it to be one of many vendors providing a tool for information sharing. In addition, DHS officials believed that the RISS program was solely focused on law enforcement information and did not capture the broader terrorism-related or other information of interest to the department. Because of this limited coordination and collaboration, DHS is at increased risk that effective information sharing is not occurring.
The department also faces the risk that it is developing and deploying capabilities on HSIN that duplicate those being established by state and local agencies. There is evidence that this has occurred with respect to the RISS program. Specifically:

● HSIN and RISS ATIX currently target similar user groups. DHS and the RISS program are independently striving to make their applications available to user communities involved in the prevention of, response to, mitigation of, and recovery from terrorism and disasters across the country. For example, HSIN and RISS ATIX are being used and marketed for use at state fusion centers and other state organizations, such as emergency management agencies across the country.

● HSIN and RISS applications have similar approaches for sharing information with their users. For example, on each application, users from a particular community—such as emergency management—have access to a portal or community area tailored to the user’s information needs. The community-based portals have similar features focused on user communities. Both applications provide each community with the following features:

● Web pages. Tailored for communities of interest (e.g., law enforcement, emergency management, critical infrastructure sectors), these pages contain general and community-specific news articles, links, and contact information.

● Bulletin boards. Participants can post and discuss information.

● Chat tool. Each community has its own online, real-time, interactive collaboration application.

● Document library. Participants can store and search relevant documents.

According to DHS officials, including the HSIN program manager, the department has efforts planned and under way to improve coordination. For example, the department is in the process of developing an integration strategy that is to include enhancing HSIN so that other applications and networks can interact with it. This would promote integration by allowing other federal agencies and state and local governments to use their preferred applications and networks—such as RISSNET and RISS ATIX—while allowing DHS to continue to use HSIN. Other examples of improvements either begun or planned include the following:

● The formation of an HSIN Mission Coordinating Committee, whose roles and responsibilities are to be defined in a management directive. It is expected to ensure that all HSIN users are coordinated in information-sharing relationships of mutual value.

● The recent development of engagement, communications, and feedback strategies for better coordination and communication with HSIN, including, for example, enhancing user awareness of applicable HSIN contact points and changes to the system.

● The reorganization of the HSIN program management office to help the department better meet user needs. According to the program manager, this reorganization has included the use of integrated process teams to better support DHS’s operational mission priorities as well as the establishment of a strategic framework and implementation plan for meeting the office’s key activities and vision.

● The establishment of an HSIN Advisory Committee to advise the department on how the HSIN program can better meet user needs, examine DHS’s processes for deploying HSIN to the states, assess state resources, and determine how HSIN can coordinate with these resources.

In addition to these planned improvements, DHS has agreed to implement the recommendations in our recent report.
Specifically, we recommended that the department ensure that HSIN is effectively coordinated with key state and local government information-sharing initiatives. We also recommended that this include (1) identifying and inventorying such initiatives to determine whether there are opportunities to improve information sharing and avoid duplication, (2) adopting and institutionalizing key practices related to OMB’s guidance on enhancing and sustaining agency coordination and collaboration, and (3) ensuring that the department’s coordination efforts are consistent with the Administration’s recently issued Information Sharing Environment plan. In response to these recommendations, DHS described actions it was taking to implement them. (The full recommendations and DHS’s written response to them are in the report.)

In closing, DHS has not effectively coordinated its primary information-sharing system with two key state and local initiatives. Largely because of the department’s hasty approach to delivering needed information-sharing capabilities, it did not follow key coordination and collaboration practices and guidance or invest the time to inventory and fully understand how it could leverage state and local approaches. Consequently, the department faces the risk that effective information sharing is not occurring and that its HSIN application may be duplicating existing state and local capabilities. This also raises the issue of whether similar coordination and duplication issues exist with the other federal homeland security networks and associated systems and applications under the department’s purview. DHS recognizes these risks and has improvements planned and under way to address them, including stated plans to implement our recommendations. These are positive steps and should help address shortfalls in the department’s coordination practices on HSIN. However, these actions have either just begun or are planned, with milestones for implementation yet to be defined. Until all the key coordination and collaboration practices are fully implemented and institutionalized, DHS will continue to be at risk that the effectiveness of its information sharing is not where it needs to be to adequately protect the homeland and that its efforts are unnecessarily duplicating state and local initiatives.

Madam Chair, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittee may have.

If you have any questions concerning this testimony, please contact David Powner, Director, Information Technology Management Issues, at (202) 512-9286 or pownerd@gao.gov. Other individuals who made key contributions include Gary Mountjoy, Assistant Director; Barbara Collier; Joseph Cruz; Matthew Grote; and Lori Martinez.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) is responsible for coordinating the federal government's homeland security communications with all levels of government, the private sector, and the public. In support of its mission, the department has deployed a Web-based information-sharing application--the Homeland Security Information Network (HSIN)--and operates at least 11 homeland security networks. The department reported that in fiscal years 2005 and 2006, these investments cost $611.8 million to develop, operate, and maintain. In view of the significance of information sharing for protecting homeland security, GAO was asked to testify on the department's efforts to coordinate its development and use of HSIN with two key state and local initiatives under the Regional Information Sharing Systems--a nationwide information-sharing program operated and managed by state and local officials. This testimony is based on a recent GAO report that addresses, among other things, DHS's homeland security networks and HSIN. In performing the work for that report, GAO analyzed documentation on HSIN and state and local initiatives, compared it against the requirements of the Homeland Security Act and federal guidance and best practices, and interviewed DHS officials and state and local officials.

In developing HSIN, its key homeland security information-sharing application, DHS did not work effectively with two key Regional Information Sharing Systems program initiatives. This program, which is operated and managed by state and local officials nationwide, provides services to law enforcement, emergency responders, and other public safety officials. However, DHS did not coordinate with the program to fully develop joint strategies and policies, procedures, and other means to operate across agency boundaries, which are key practices for effective coordination and collaboration and a means to enhance information sharing and avoid duplication of effort. For example, DHS did not engage the program in ongoing dialogue to determine how resources could be leveraged to meet mutual needs. A major factor contributing to this limited coordination was that the department rushed to deploy HSIN after the events of September 11, 2001. In its haste, it did not develop a comprehensive inventory of key state and local information-sharing initiatives, and it did not achieve a full understanding of the relevance of the Regional Information Sharing Systems program to homeland security information sharing. As a result, DHS faces the risk that effective information sharing is not occurring and that HSIN may be duplicating state and local capabilities. Specifically, both HSIN and one of the Regional Information Sharing Systems initiatives target similar user groups, such as emergency management agencies, and both have similar features, such as electronic bulletin boards, "chat" tools, and document libraries. The department has efforts planned and under way to improve coordination and collaboration, including developing an integration strategy to allow other applications and networks to connect with HSIN, so that organizations can continue to use their preferred information-sharing applications and networks.
In addition, it has agreed to implement recommendations made by GAO to take specific steps to (1) improve coordination, including developing a comprehensive inventory of state and local initiatives, and (2) ensure that similar coordination and duplication issues do not arise with other federal homeland security networks, systems, and applications. Until DHS completes these efforts, including developing an inventory of key state and local initiatives and fully implementing and institutionalizing key practices for effective coordination and collaboration, the department will continue to be at risk that information is not being effectively shared and that it is duplicating state and local capabilities.
The term “STEM education” refers to teaching and learning in the fields of science, technology, engineering, and mathematics. It includes educational activities across all grade levels—from pre-school to post-doctorate—in both formal (e.g., classrooms) and informal (e.g., afterschool programs) settings. In 2012, we reviewed the delivery and effectiveness of federal STEM education programs. As in our 2012 report, for this report we define a federally-funded STEM education program as a program funded in a designated fiscal year by allocation or congressional appropriation that includes one or more of the following as a primary objective:

● attract or prepare students to pursue classes or coursework in STEM areas through formal or informal education activities,

● attract students to pursue degrees (2-year, 4-year, graduate, or doctoral) in STEM fields through formal or informal education activities,

● provide training opportunities for undergraduate or graduate students in STEM fields (this can include grants, fellowships, internships, and traineeships that are targeted to students; we do not consider general research grants to researchers that may hire a student to work in the lab to be a STEM education program),

● attract graduates to pursue careers in STEM fields,

● improve teacher education in STEM areas for teachers and those studying to be teachers,

● improve or expand the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields, or

● conduct research to enhance the quality of STEM education programs provided to students.

There is no commonly used definition of fields that are considered STEM. For this report, we use a comprehensive definition of STEM that includes three STEM categories: Core STEM, Healthcare STEM, and Other STEM (see fig. 1). We present our findings for the three categories combined and for each of the three STEM categories. See our description of the relevant data sets in appendix I for an explanation of how we classified fields of study and occupations into these STEM categories in our data analysis.

The Committee on STEM Education is the interagency coordination body for STEM education in the federal government (see fig. 2). Federal STEM education programs have been created in two ways—directly by law or through agencies’ broad statutory authority to carry out their missions. In our 2012 STEM report, we reported that in fiscal year 2010, 13 federal agencies administered 209 programs to increase knowledge of STEM fields and attainment of STEM degrees. These agencies, listed below in table 1, continued to administer federal STEM education programs in fiscal year 2014.

In our 2012 report, we found that in fiscal year 2010, 83 percent of the programs we identified overlapped to some degree with at least 1 other program by offering similar services to similar target groups in similar STEM fields to achieve similar objectives. Although those programs may not be duplicative, we reported that they were similar enough that they needed to be well coordinated and guided by a robust strategic plan. We also found that federal agencies’ limited use of performance measures and evaluations may have hampered their ability to assess the effectiveness of individual programs as well as the overall federal STEM education effort.
We recommended that as the Office of Science and Technology Policy leads the government’s STEM education strategic planning effort, it should work with agencies to better align their activities with a government-wide strategy, develop a plan for sustained coordination, identify programs for potential consolidation or elimination, and assist agencies in determining how to better evaluate their programs. The Office of Science and Technology Policy has taken steps to address some of our recommendations. Regarding our recommendation on potential elimination or consolidation of programs, the Committee on STEM Education released its interim strategic planning progress report in February 2012, which noted that STEM education programs had been identified to be potentially overlapping and encouraged agencies to streamline programs where appropriate. In addition, the President’s fiscal year 2014 budget called for a major restructuring of federal STEM education programs through the consolidation of programs and the realignment of STEM education activities. Since our prior report on STEM, the number of STEM education programs dropped from 209 in 2010 to 158 in 2013. The President’s fiscal year 2015 budget request seeks to continue these efforts and states that agencies should focus on internal consolidations and eliminations while funding their most effective programs.

Regarding our recommendation on evaluations, in May 2013 the Committee on STEM Education released its 5-year Strategic Plan, which included guidance to agencies in developing evaluations for STEM education programs. The plan also laid out five broad priority areas:

● Enhance STEM experiences of undergraduate students;

● Improve STEM instruction;

● Increase and sustain youth and public engagement in STEM;

● Better serve groups historically under-represented in STEM fields; and

● Design graduate education for tomorrow’s STEM workforce.

In addition, in July 2013, a joint Office of Science and Technology Policy/Office of Management and Budget memo included guidance to agencies on how to align their programs and budget submissions—beginning with the budget submission for 2015—with the goals of the STEM Education 5-Year Strategic Plan. The guidance includes language directing the agencies to prioritize programs that use evidence to guide program design and implementation and to define appropriate metrics and improve the measurement of outcomes. Furthermore, in the President’s 2015 budget submission, the administration stated that improving STEM education by implementing the 5-year Strategic Plan is a cross-agency priority goal. As a result of this designation, the Office of Management and Budget must review on a quarterly basis agencies’ progress in meeting this goal.

Overall, postsecondary degrees awarded in STEM fields have increased at a greater rate than in non-STEM fields during the past decade. The number of STEM degrees awarded increased 55 percent, from 1.35 million degrees awarded in the 2002-2003 academic year to over 2 million in the 2011-2012 academic year. In comparison, degrees awarded in non-STEM fields increased 37 percent in the same time period (see fig. 3). STEM degrees now comprise a larger share of total postsecondary degrees awarded—42 percent in the 2011-2012 academic year, up from 39 percent in the 2002-2003 academic year. However, much of the increase in STEM degrees came from growth in awards of Healthcare degrees, which have doubled over the past decade (see fig. 4).
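The growth rates cited above follow from simple before-and-after ratios of degree counts. As a quick illustration only, here is a minimal Python sketch using the rounded totals quoted in this section; the 2.09 million end-year figure is an assumed round number standing in for "over 2 million" (the actual analysis uses exact IPEDS counts, as described in appendix I):

```python
def pct_growth(base_count, end_count):
    """Percent change from a base-year count to an end-year count."""
    return (end_count - base_count) / base_count * 100

# Rounded degree totals quoted in this section (2.09 million is illustrative).
stem_2002_03, stem_2011_12 = 1_350_000, 2_090_000
print(f"STEM degree growth: {pct_growth(stem_2002_03, stem_2011_12):.0f} percent")  # ~55 percent
```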
Degrees awarded in Core STEM fields increased at a substantially lower rate (19 percent) than non-STEM fields (37 percent). Degrees awarded in Other STEM fields increased at a greater rate (43 percent) than non-STEM fields. The comparatively slower growth in Core STEM fields is due in large part to an 18 percent decline in the number of computer science and information technology (IT) degrees awarded in the past decade. Computer science and IT degrees decreased each year between the 2002-2003 and 2007-2008 academic years but then increased (see fig. 5). A research association that has examined trends in computer science bachelor’s degrees attributes the decline to the “dot-com crash.” Aside from degrees awarded in the computer science/IT field, degrees awarded in all of the other STEM fields have increased throughout the past decade. Among the Core STEM fields, degrees awarded in the physical sciences, life sciences, and mathematics have grown at a greater rate than non-STEM fields (see fig. 6). Degrees awarded in engineering have also increased, though at a slightly lower rate than non-STEM fields (37 percent compared to 39 percent).

Overall, employment trends have generally been more favorable in STEM occupations than in non-STEM occupations. The number of jobs in STEM occupations increased 16 percent, from 14.2 million jobs in 2004 to 16.5 million in 2012, while jobs in non-STEM occupations remained fairly steady (with a decline of 0.1 percent). STEM occupations also had more wage growth on average and lower unemployment rates than non-STEM occupations (see table 2). However, employment conditions vary across STEM fields, with healthcare occupations generally having the most favorable conditions. (See also appendix III for more detailed information on recent trends in STEM and non-STEM occupations.)

After controlling for education levels, demographic characteristics, and type of job, we estimate that the unemployment rate among workers in STEM occupations overall was 1.2 percentage points lower than for similar workers in non-STEM occupations in 2012, and the average wage in STEM occupations was 17 percent higher (see table 3). Healthcare occupations had the largest differences, while workers in Other STEM occupations had unemployment rates and average wages that were similar to those in non-STEM occupations.

While employment conditions have generally been more favorable in STEM occupations than in non-STEM occupations, conditions vary across specific STEM fields. Most STEM fields experienced both increases in employment levels and in average wages from 2004 to 2012, as well as relatively low unemployment rates when compared to non-STEM occupations. However, three fields—STEM sales occupations, engineering technician and drafting occupations, and science technician occupations—experienced either a decline in the number of jobs in this time period or a decline in the average wage (see fig. 7). Engineering technician and drafting occupations and science technician occupations are also among the STEM fields with the highest unemployment rates in recent years, though their unemployment rates have fallen since 2010 and were lower than non-STEM occupations in 2012 (see fig. 8).

It is difficult to know whether the United States is producing enough STEM workers to meet employer needs for several reasons. First, estimating how many STEM workers employers need is a challenge, in part because demand for STEM workers can fluctuate with economic conditions.
For example, the number of jobs in Core STEM occupations declined by about 250,000 between 2008 and 2010 (from 7.74 million jobs in 2008 to 7.49 million in 2010), though it then increased (to 7.89 million jobs in 2012). Subject matter specialists and federal officials we interviewed also noted that employer needs in STEM fields are difficult to predict because they may change with technological or market developments. Furthermore, the supply of STEM workers in the United States may not match the demand at any given point in time because of the time it takes to educate a STEM worker. Research suggests that students’ decisions about which fields to study may be influenced by the economic conditions and future career prospects they perceive in those fields. Favorable economic conditions in a STEM field may encourage students to pursue degrees in that field. However, it may take them several years to complete their degrees, so changes in the domestic supply of STEM workers may lag behind changes in the domestic demand.

In addition, the number of students graduating with STEM degrees may not be a good measure of the supply of STEM workers because students often pursue careers in fields different from the ones they studied. Figure 9 shows the educational background of workers in selected STEM occupations in 2012 up to the bachelor’s level. With the exception of engineering, most of those in STEM occupations did not receive a bachelor’s degree in the same field in which they were working. They either majored in a different STEM field or a non-STEM field in their undergraduate education, or they did not receive a bachelor’s degree. As a result, it is difficult to estimate the supply of workers in a STEM occupation from information on the number of bachelor’s degrees awarded in a STEM field.

Further evidence of the difficulty in estimating the size of the STEM workforce from information on the number of STEM degrees is the substantial portion of workers with STEM bachelor’s degrees who work in non-STEM occupations—62 percent in 2012 (see fig. 10). The survey data cannot tell us how many of these STEM-educated workers are in a non-STEM occupation by choice and how many would prefer to work in a STEM occupation but cannot find a position suitable to them. However, these workers have had relatively low unemployment rates in recent years—4.8 percent in 2012—suggesting that there is generally demand in the workplace for workers with STEM education, both in STEM and non-STEM occupations (see appendix III for further information on the educational backgrounds of workers in STEM and non-STEM occupations).

Eighty-eight percent of the 124 federal postsecondary STEM education programs that responded to our survey indicated that meeting one or more of the workforce needs we identified, such as promoting a diverse workforce, was a stated objective of the program. An additional 11 percent of postsecondary programs indicated that meeting at least one workforce need was a potential benefit of their program activities, even if it was not a stated objective. The most common stated objective was to prepare postsecondary students for a career in a STEM field. See figure 11 for fiscal year 2012 obligations associated with the various workforce needs.

Eighty percent of the 124 federal postsecondary STEM programs that responded to our survey said that they focused on specific STEM occupations—41 percent as a stated objective and an additional 39 percent as a potential benefit of the program.
Almost three-quarters of obligations by grant-making programs with a stated objective to increase the numbers of workers in specific STEM occupations were made by programs that said they gave preference to applicants with the same goal. Programs generally reported that they chose occupations according to market demand, their agency’s mission, or both. Fifty-six percent of the programs (25 percent of obligations) that focused on specific fields said that they chose occupations based on market demand. Most of these programs reported that they identified high-demand occupations using national data and their own formal and informal research, such as networking with local industries (see figure 12). Some programs also indicated that they obtained information about high-demand occupations through partnerships with other organizations, such as industry groups that conduct national workforce needs assessments.

Along with high-demand occupations, most of the STEM education programs (85 percent of programs, 65 percent of obligations) that focused on specific fields reported that they chose occupations related to the agency’s mission. For example, the Department of Energy’s mission corresponds to some specific STEM fields, such as energy science and nuclear physics, and the majority of programs from this agency said that they focus on mission-related occupations. Furthermore, one-third of the programs that target specific fields told us they focus solely on occupations related to their agency’s mission instead of on high-demand occupations. One of the 13 programs we studied in depth—the National Institutes of Health’s Ruth L. Kirschstein National Research Service Awards for Individual Predoctoral Fellows program—aims to address needs for biomedical, behavioral, and clinical research in the country. For this reason, grant guidance states that applicants must propose projects in research areas that fall under the agency’s scientific mission.

Additionally, 60 percent of postsecondary STEM education programs, representing 59 percent of obligations, said that they prepared students for jobs at their own agencies. While this may meet some workforce needs, the agency creates its own closed loop of trainees, job openings, and employees, and does not necessarily try to provide STEM workers to the broader workforce.

In addition to preparing students for STEM jobs, we identified several other workforce needs that federal STEM education programs reported addressing. For example, experts and agency officials told us that programs that increase the diversity of the STEM workforce, prepare students for innovation and emerging fields, or provide STEM skills to students who do not obtain STEM degrees can contribute to American competitiveness in other ways. Experts also said that federal STEM programs are uniquely positioned to meet some of these broader workforce needs, which may not be provided by the marketplace alone.

A majority of the postsecondary STEM education programs in our survey indicated that they focus on increasing the numbers of minority, disadvantaged, or under-represented groups in the STEM workforce: 38 percent (45 percent of obligations) as a stated program objective, and 54 percent (48 percent of obligations) as a potential benefit of the program.
Programs with a stated objective to increase the diversity of the STEM workforce most frequently reported that they served one or more under-represented racial or ethnic groups and people from economically disadvantaged backgrounds, and least frequently reported serving women. Additionally, 77 percent of obligations by grant-making programs that responded to our survey were made by programs that reported that they gave preference to grant applicants that intend to increase the number of STEM workers from minority, disadvantaged, or under-represented groups.

Four of the 13 programs we studied in depth reported that they were primarily intended to serve minority, disadvantaged, or under-represented groups in STEM fields. For example, the Department of Education’s Hispanic-Serving Institutions STEM and Articulation Programs award grants to postsecondary institutions with undergraduate student bodies that are at least 25 percent Hispanic. Grantees may create new coursework, improve infrastructure, develop research opportunities for students, or provide outreach and support services to students in order to encourage their pursuit of STEM degrees. Additionally, the National Science Foundation’s Louis Stokes Alliances for Minority Participation program seeks to increase the numbers and qualifications of STEM graduates from under-represented groups. Grantees are allowed wide latitude to design projects that improve the undergraduate educational experiences of students and facilitate their transfer from 2-year to 4-year postsecondary institutions.

Innovation is another workforce need that most federal postsecondary STEM programs reported that they aim to meet. In fact, among postsecondary STEM programs responding to our survey, preparing students or workers for innovation in their field and for careers in emerging STEM fields were the workforce needs with the highest reported obligations. However, although 95 percent of the 124 STEM programs that responded to our survey (97 percent of obligations) indicated that they intended to prepare people for innovation in their fields or for emerging STEM fields, 59 percent (61 percent of obligations) considered innovation to be a potential benefit rather than a stated objective. For example, the National Science Foundation and the National Institutes of Health both consider innovation in their agency-wide grant-making guidance. Additionally, the National Science Foundation sometimes creates agency-wide priorities for funding certain emerging fields, such as clean energy.

Federal postsecondary STEM education programs that responded to our survey indicated that they provided a range of services. The most common services they reported included research opportunities, internships, and mentorships (see fig. 13).

Eighty percent of the 124 postsecondary STEM education programs that responded to our survey, representing 88 percent of obligations, said they tracked their success at meeting workforce needs using at least one outcome-based measure. Degree attainment, number of students pursuing STEM coursework, number of students taking a STEM job, and participant satisfaction were the most commonly reported outcomes. For example, the National Institutes of Health produced a report focused on the workforce outcomes of biomedical students, the majority of whom receive support from the National Institutes of Health at some point in their graduate careers.
However, some programs did not measure an outcome or output that directly related to their stated objectives. For example, of the 78 postsecondary programs with a stated program objective to prepare students for STEM careers, 53 percent (45 percent of obligations) reported that they did not track the number of their students who took a job in a STEM field. Similarly, of the 49 programs with a stated program objective to increase the numbers of STEM graduates, 39 percent (43 percent of obligations) reported that they did not measure the educational attainment of their program participants. These data are consistent with our 2012 STEM report, in which we found that STEM education programs’ outcome measures were not clearly reflected in the performance planning documents of most agencies. As we recommended in 2012, the National Science and Technology Council recently issued guidance to help agencies better incorporate their STEM education efforts and the goals from the government-wide STEM Education 5-Year Strategic Plan into their agency performance plans and reports. As agencies follow the guidance, improve their outcome measures, and focus on the effectiveness of the programs, more programs may measure outcomes directly related to their stated program objectives, such as preparing students for STEM careers.

According to our survey, preparing students for postsecondary education in a STEM field is either a stated program objective or a potential secondary benefit of almost all federal K-12 STEM education programs. Specifically, out of 30 federal K-12 STEM education program respondents to our survey, 13 programs (50 percent of K-12 program obligations) reported that preparing students for postsecondary STEM education is a stated program objective, while 16 programs reported that it is a potential benefit of the program. Of the six federal K-12 STEM education programs we selected to review in more detail, four programs—Advanced Technological Education, Discovery Research K-12, Math and Science Partnership, and Upward Bound Math-Science—reported that preparing students for postsecondary STEM education is a stated objective of the program.

Upward Bound Math-Science programs, which are based in institutions of higher education, work closely with students to strengthen their math and science skills in order to prepare and encourage them to pursue postsecondary degrees in math and science. According to an official from an Upward Bound Math-Science program we visited in California, the program is not specifically intended to prepare students for the STEM workforce, but it emphasizes helping students understand the varied career opportunities available to them in math and science fields. Officials from another Upward Bound Math-Science program we visited said they try to connect their students with practitioners in the field, since it is important for students to have role models in STEM occupations who hail from similar backgrounds.

In our survey, 18 of the 30 federal K-12 STEM education programs (approximately 77 percent of K-12 program obligations) reported that improving the ability of K-12 teachers to teach STEM content is a stated program objective.
Several experts have noted that one challenge at the K-12 level is that STEM teachers sometimes do not have sufficient content knowledge to effectively teach these subjects, and that the federal government can play an important role by supporting professional development for STEM teachers and encouraging more college graduates in STEM fields to pursue teaching careers. Four of the federal K-12 STEM education programs we reviewed in detail—Advanced Technological Education, Discovery Research K-12, Math and Science Partnership, and the Mathematics and Science Partnerships program—reported that improving the ability of K-12 teachers to teach STEM content is a stated program objective. The Mathematics and Science Partnerships program provides formula grants to states, which in turn award competitive grants to partnerships that enhance the content knowledge and teaching skills of math and science teachers. A Mathematics and Science Partnerships grantee we met with in Texas established regional networks across the state in which mentor teachers provided professional development and mentoring to participating teachers. Similarly, the Discovery Research K-12 program supports research projects that address a need in STEM education at the pre-kindergarten through 12th grade levels, particularly programs that explore unconventional approaches to teaching and learning. Researchers we met with were exploring how computational models could be used to make decisions about resource allocation to optimize learning in STEM classes. For example, the model might be used to calculate optimal student-teacher ratios given other factors, such as grade level, subject, and class composition.

In our survey, 7 of the 30 federal K-12 STEM education programs (approximately 26 percent of K-12 program obligations) reported that providing students with STEM knowledge, skills, and abilities, without the explicit goal of preparing them for postsecondary STEM education or a STEM career, is a stated program objective. According to recent research, exposing students to STEM content and encouraging their interest in STEM disciplines at an early age is important in order to increase the likelihood that they remain engaged with STEM later in life. The National Science Foundation’s Advancing Informal STEM Learning program provides grants to organizations working on innovative projects intended to expose students to STEM content outside the classroom. A museum we visited in California received an Advancing Informal STEM Learning grant to develop an outdoor bilingual science exhibit and related curriculum targeted towards Latino students in the San Francisco area. Officials told us the exhibit is geared towards students who may not generally visit the museum.

Federal K-12 STEM education programs provide a variety of educational services in order to achieve their objectives. The services identified most often in our survey included classroom instruction; curriculum development; outreach to generate student interest in STEM fields; short-term experiential learning activities; and teacher professional development or retention activities (see fig. 14).

In our survey, 25 of the 30 federal K-12 STEM education programs (approximately 89 percent of K-12 program obligations) reported that they tracked or monitored program outcome measures in 2012. However, as with the federal postsecondary STEM education programs, some K-12 programs are not measuring outcomes directly related to their stated objectives.
For example, of the 13 K-12 programs that reported having a stated program objective to prepare students for postsecondary STEM education, 10 programs said they did not track student educational attainment or the number of students who pursued coursework in STEM fields. Of the 18 programs that reported that improving the ability of K-12 teachers to teach STEM content was a stated program objective, 6 programs said they did not monitor teacher improvement and performance in STEM education instruction or the number of qualified teachers teaching STEM.

K-12 STEM education program grantees we met with monitored some programmatic outcomes. For example, an official from an Upward Bound Math-Science program we visited told us that each program is required to submit an annual report to Education, including data on performance outcomes such as the number of participants who graduate from high school, pursue postsecondary degrees in STEM fields, and graduate from college within 6 years. The official said that all but one of the participants in the program’s first cohort graduated from high school and enrolled in college. Further, officials from a Mathematics and Science Partnerships grantee told us that—in addition to mandatory reporting to Education on performance outcomes, such as the number of teachers trained through the program and the extent to which teachers’ test scores showed statistically significant gains—they were implementing an initiative to correlate programmatic data with student outcomes across the state, as measured by teacher self-reporting and statewide assessments. The initial phase of the analysis, based on teacher self-reporting, found that the students whose teachers had participated in the program outperformed their peers in several STEM subjects. In addition, officials from the museum exhibit in California funded by the Advancing Informal STEM Learning program said assessments were planned for every stage of the project, including a summative evaluation to review the extent to which it may have influenced Latino youth awareness of and engagement with STEM content. Officials said the evaluation would be completed in January 2015.

It is difficult to determine whether there has been a shortage or a sufficient supply of STEM workers in the United States and, consequently, to define the appropriate role the federal government should play in increasing the number of STEM-educated workers. There is not a one-to-one match between STEM graduates in a specific field and corresponding STEM jobs because not all people with STEM degrees pursue careers in their fields of study, whether by choice or because of limited employment opportunities in the field. Regardless of career choices, the rigor of a STEM education may help promote a workforce with transferable skills and the potential to fuel innovation and economic growth. Federal postsecondary STEM education programs may help develop a workforce that will address issues that affect the population as a whole, such as researching diseases or improving defense capabilities. Additionally, federal K-12 STEM education programs may generate interest in STEM fields early in life, which could usher more students into the STEM pipeline and increase the likelihood that they will pursue STEM education and careers. Although the administration has taken steps to consolidate and coordinate STEM education programs, numerous programs—spread across 13 agencies—remain.
As the administration continues to consolidate and eliminate STEM education programs, it risks making decisions without considering the efficacy of these programs because many federal STEM education programs are not measuring their outcomes. However, the guidance recently issued by the National Science and Technology Council could help agencies better incorporate their STEM education efforts and the goals from the government-wide 5-year STEM strategic plan into their agency performance plans and reports. This will enable agencies to better assess which STEM education efforts are successful in contributing to agency-wide performance goals and supporting the overall federal STEM effort.

We provided a draft of this product for comment to the Departments of Defense, Education, Energy, and Health and Human Services; National Science Foundation; and Office of Management and Budget. All provided technical comments except the Department of Defense, which indicated that it had no comments. We incorporated the technical comments as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Department of Defense, Department of Education, Department of Energy, Department of Health and Human Services, National Science Foundation, and Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Our research objectives were to review (1) recent trends in the number of degrees and jobs in Science, Technology, Engineering, and Mathematics (STEM) fields, (2) the extent to which federal postsecondary STEM education programs take workforce needs into consideration, and (3) the extent to which federal kindergarten-12th grade (K-12) STEM education programs prepare students for postsecondary STEM education. To inform all of our objectives, we reviewed relevant federal laws and regulations. We also reviewed relevant literature and past reports on STEM education, including our 2012 STEM report and the National Science and Technology Council’s Strategic Plan for federal STEM education programs. In addition, we interviewed STEM experts and officials from the Office of Science and Technology Policy and several other federal agencies that administer STEM education programs to gather information on their STEM education efforts. We attended a STEM education conference to gather additional perspectives about federal STEM education programs.

To examine recent trends in the number of STEM degrees awarded, we analyzed data from the Integrated Postsecondary Education Data System (IPEDS), administered by the Department of Education’s National Center for Education Statistics. IPEDS is a system of interrelated surveys conducted annually to gather information from every college, university, and technical and vocational institution that participates in federal student financial aid programs. The Higher Education Act of 1965, as amended, requires institutions of higher education that participate in federal student aid programs to complete IPEDS surveys.
IPEDS provides institution-level data in such areas as enrollment, program completions, faculty, staff, and finances. Specifically, we analyzed 10 years of data from the IPEDS program completions component, from the July 2002-June 2003 academic year to the July 2011-June 2012 academic year. The program completions component provides data on the number of degrees awarded by each institution for each program of study. We analyzed the data to determine the number of degrees awarded nationally in STEM and non-STEM programs of study in this time period, the number awarded in our three STEM categories, and the number awarded in selected STEM fields. We included degrees awarded for both first and second majors in our analysis. Our results represent the number of degrees awarded, not the number of individuals who obtained degrees. We assessed the reliability of the IPEDS data we used by reviewing relevant documents and past GAO reviews of the data and conducting electronic testing. On the basis of this assessment, we concluded that the data were sufficiently reliable for our reporting purposes.

In conducting our analysis, we classified each program of study in the IPEDS data as STEM or non-STEM. We used as guidance work conducted by the Census Bureau to classify fields of study as science and engineering or science- and engineering-related in the American Community Survey (ACS) data. This helped to ensure that we were consistent with the fields we defined as STEM in both our IPEDS and ACS analyses. We further classified these STEM fields into our three STEM categories of Core STEM, Healthcare STEM, and Other STEM. See table 4 below for the fields of study we classified as STEM and how we classified them into our three STEM categories. We also aggregated detailed programs of study into broader STEM fields, generally based on the first two digits of the Classification of Instructional Programs code (the classification system that IPEDS uses to define programs of study). For example, Classification of Instructional Programs codes beginning with 11 represent programs of study under the category of “computer and information sciences and support services.” The information we present on numbers of computer science/information technology (IT) degrees comes from aggregating the number of degrees awarded for Classification of Instructional Programs codes that begin with 11. For life sciences, mathematics and statistics, and social sciences, we combined programs of study from multiple 2-digit Classification of Instructional Programs code categories (see table 4 for the fields we combined).

To examine trends in STEM occupations, we analyzed the Bureau of Labor Statistics’ Occupational Employment Statistics (OES) data from the May 2004 survey to the May 2012 survey. The OES program surveys establishments to produce estimates of employment and wages for specific occupations. We began our analysis with the May 2004 data because that was the first year that all occupations in the OES were classified based on the Standard Occupational Classification (SOC) system. We conducted our analysis to identify trends in the number of jobs and the average wages in STEM and non-STEM occupations from 2004 to 2012. We assessed the reliability of the OES data by reviewing relevant documents, interviewing Bureau of Labor Statistics officials, and conducting electronic testing of the data. Based on our assessment, we concluded that the OES data were sufficiently reliable for our reporting purposes.
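The program-of-study aggregation described above is mechanical: each detailed code is assigned to a broader field by its leading digits, and completions are then summed within each field. The following minimal pandas sketch illustrates the idea; the column names, sample codes, and the partial prefix map are illustrative stand-ins, not the actual GAO crosswalk (which appears in table 4):

```python
import pandas as pd

# Hypothetical IPEDS completions extract: one row per institution/program.
completions = pd.DataFrame({
    "cip_code": ["11.0101", "14.0801", "26.0101", "51.3801"],
    "awards":   [120, 45, 300, 210],
})

# Illustrative 2-digit CIP prefix map; the full classification is in table 4.
prefix_to_field = {
    "11": "computer science/IT",  # Core STEM
    "14": "engineering",          # Core STEM
    "26": "life sciences",        # Core STEM (combined with other 2-digit families)
    "51": "healthcare",           # Healthcare STEM
}

# Map each program to its broader field by the first two digits of its code.
completions["field"] = completions["cip_code"].str[:2].map(prefix_to_field)
print(completions.groupby("field")["awards"].sum())
```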
We classified occupations as STEM and non-STEM based on the SOC Policy Committee’s Options for Defining STEM (Science, Technology, Engineering, and Mathematics) Occupations Under the 2010 Standard Occupational Classification System. This document sets out several options for defining STEM occupations. Any occupation that was included in any of the SOC Policy Committee’s options was classified as STEM in our analysis. All other occupations were classified as non-STEM. We also classified occupations into our three STEM categories of Core STEM, Healthcare STEM, and Other STEM based on some of the options presented by the SOC Policy Committee. Specifically:

● Occupations categorized by the SOC Policy Committee as “Life and Physical Science, Engineering, Mathematics, and Information Technology Occupations” were classified as Core STEM occupations in our analysis. These include postsecondary teachers, managers, technicians, and scientists in these fields, as well as sales representatives for technical and scientific products and sales engineers.

● Occupations categorized by the SOC Policy Committee as “Health Occupations” were classified as Healthcare STEM occupations in our analysis. These included health diagnosing and treating practitioners, health technologists and technicians, postsecondary health teachers, and medical and health services managers. This category does not include healthcare support occupations (e.g., health aides, nursing assistants).

● Occupations categorized by the SOC Policy Committee as “Social Science Occupations” and “Architecture Occupations” were classified as Other STEM occupations in our analysis. These include scientists and researchers, architects and related professions, assistants, and postsecondary teachers in these fields.

The SOC Policy Committee’s Options for Defining STEM Occupations was based on occupations defined under the 2010 SOC, while the 2004 to 2009 OES data used a slightly different occupational classification system (the 2000 SOC). We used Bureau of Labor Statistics crosswalks between the 2000 SOC and the 2010 SOC to identify the appropriate STEM occupations throughout the period of our study. We also combined detailed occupations into broader occupational groups based on the first two or three digits of the SOC codes and presented employment and wage trends for these occupational groups (e.g., computer/IT occupations). Specifically, our categories of STEM management and STEM sales in figure 7 of our report combine occupations under the 2-digit SOC codes 11 (management occupations) and 41 (sales and related occupations). Other occupational categories presented in figure 7 combine occupations based on the first three digits of the SOC codes (e.g., our computer/IT category combines occupations beginning with SOC code 15-1, computer occupations).

To minimize respondent burden, the OES survey is conducted on a 3-year cycle that ensures that most establishments are surveyed at most once every three years. OES estimates are produced annually, but each year’s estimates are based on surveys conducted over a 3-year period. Following Bureau of Labor Statistics guidance for using OES data that are at least two or three years apart when examining trends over time, we present results for alternate years in appendix III (for May 2004, 2006, 2008, 2010, and 2012). We calculated standard errors for our estimates based on the relative standard errors that the Bureau of Labor Statistics provided for each employment and mean wage estimate for each occupation.
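Because a relative standard error (RSE) is simply the standard error expressed as a percentage of its estimate, recovering standard errors from the published OES figures is a one-line transformation. A minimal sketch, with hypothetical occupation codes, estimates, and RSE values (all variable names and figures below are illustrative):

```python
import pandas as pd

# Hypothetical OES extract: employment estimates with published relative standard errors.
oes = pd.DataFrame({
    "soc_code":   ["15-1131", "17-3023"],  # illustrative SOC codes
    "employment": [320_100, 141_700],      # hypothetical employment estimates
    "rse_pct":    [1.2, 2.5],              # hypothetical relative standard errors (percent)
})

# RSE = 100 * SE / estimate, so SE = (RSE / 100) * estimate.
oes["se"] = oes["rse_pct"] / 100 * oes["employment"]

# Approximate 90 percent margin of error, treating each estimate independently.
oes["moe_90"] = 1.645 * oes["se"]
print(oes)
```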
We analyzed the data from the Census Bureau’s ACS to examine the unemployment rates of those in STEM and non-STEM occupations, as well as the educational backgrounds at the bachelor’s degree level of those in STEM and non-STEM occupations. The ACS is an ongoing national survey which replaced the decennial census long-form questionnaire as a source for social, economic, demographic, and housing information. About 3 million households are selected for the ACS each year. The ACS questionnaire asks about the kind of work people in the household were doing in their most recent job if they worked in the last 5 years (i.e., their occupation). It also asks about the highest degree or level of school a person has completed. If the person has completed a bachelor’s degree or higher, the ACS asks for the specific major(s) of any bachelor’s degree(s) the person has completed. The ACS also contains questions to produce estimates of the number of people who are employed, unemployed, and not in the labor force. We specifically analyzed data from the 1-year Public Use Microdata Samples for 2009 to 2012. We assessed the reliability of the data by reviewing relevant documentation and conducting electronic testing of the data. Based on our assessment, we concluded that the ACS data were sufficiently reliable for our reporting purposes.

The Census Bureau has its own system for coding occupations and fields of study in the ACS data, which are based on the SOC and the Classification of Instructional Programs, respectively. Census has also classified occupations as STEM and STEM-related (healthcare and architecture) and fields of study as science and engineering and science- and engineering-related. The Census Bureau’s classifications of occupations are based on the SOC Policy Committee’s Options for Defining STEM Occupations, though agency officials made some modifications due to their use of different coding systems. We considered any occupation that Census classified as STEM or STEM-related as STEM in our analysis of occupations, and any field of study they identified as science and engineering or science- and engineering-related as STEM in our analysis of degrees. As with our analysis of OES data, we classified occupations into our three STEM categories of Core STEM, Healthcare STEM, and Other STEM. We also combined detailed occupations and fields of study into broader categories. For example, we combined 11 specific occupations into our category of computer/IT occupations, and 6 different fields of study for the computer/IT major at the bachelor’s degree level.

With regard to the unemployment rates we present, most of our estimates are for the civilian population in the labor force ages 16 and older. Our estimates of the educational background of those in STEM and non-STEM occupations are based on the population ages 22 and older. Our estimates of the unemployment rates of those in STEM and non-STEM occupations by educational background (in figure 6 of appendix III) are for the civilian population in the labor force ages 22 and older. The Bureau of Labor Statistics has found that ACS estimates of the unemployment rate can differ from estimates produced by the Current Population Survey, a monthly survey of about 60,000 households that is the nation’s source of official government statistics on employment and unemployment.
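Estimates like the unemployment rates described above are computed from the ACS microdata by summing person weights rather than counting records. A minimal sketch, assuming a hypothetical person-level extract already restricted to the civilian labor force ages 16 and older; the column names are illustrative (the actual Public Use Microdata Sample files use coded variables for employment status and the person weight):

```python
import pandas as pd

# Hypothetical extract: civilian labor force ages 16+, one row per person.
persons = pd.DataFrame({
    "occ_group":  ["computer/IT", "computer/IT", "non-STEM", "non-STEM"],
    "unemployed": [0, 1, 0, 0],      # 1 = unemployed, 0 = employed
    "weight":     [85, 60, 120, 95], # person weights
})

def weighted_unemployment_rate(group):
    # Weighted share of the labor force that is unemployed, in percent.
    return (group["unemployed"] * group["weight"]).sum() / group["weight"].sum() * 100

print(persons.groupby("occ_group").apply(weighted_unemployment_rate))
```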
The Bureau of Labor Statistics has found that ACS estimates of the unemployment rate can differ from estimates produced by the Current Population Survey, a monthly survey of about 60,000 households that is the nation’s source of official government statistics on employment and unemployment. The Bureau of Labor Statistics states that a number of factors may account for the differences, including overall questionnaire differences, differing requirements in the two surveys with regard to whether an individual is actively looking for work, and differences in reference periods, modes of collection, and population controls. We calculated standard errors for our estimates using the replicate weight method. For some estimates of the unemployment rate for specific occupational categories, the margin of error exceeded 30 percent of the estimate. We note these instances in our report. In order to compare the wages and unemployment rates of workers in STEM and non-STEM occupations who have comparable personal characteristics, we ran a series of wage regressions and unemployment regressions in which we controlled for human capital characteristics (age and education) and demographic characteristics (race, ethnicity, gender, citizenship, and veteran status), as well as the worker’s broad occupational category. We used the ACS for our wage and unemployment regression analyses. We restricted our analysis to full-time, full-year workers. We restricted our analysis to full-time workers because the ACS does not collect data on whether people are salaried or hourly workers, making it difficult to use the “usual weekly hours” variable. We restricted our analysis to full-year workers because the ACS also does not collect data on weekly wages, but on earnings from wages or salary in the past year. Not all people work a full year, and people who have been unemployed for part of the year will have annual earnings that do not reflect their annual salary or hourly rate of pay. When constructing our dependent variable, we took the natural log of annual wages. For the unemployment regressions, the outcome variable is current labor force status. People who are currently unemployed are defined as unemployed; people who are currently working or on paid leave from work are defined as not unemployed; and people who are not in the labor force are excluded from the universe. The universe is also restricted to people ages 16-64 and excludes people who have no work experience or have not worked in the past 5 years, because the ACS does not collect occupation for these people. Both sets of regressions use linear models and the same set of covariates.
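A minimal sketch of the wage regression just described appears below, fit on synthetic data standing in for the ACS Public Use Microdata Sample; the variable names are illustrative, not actual ACS field names. The unemployment regression would use the same covariates with a 0/1 unemployed indicator as the outcome, since both sets of regressions are linear models.

```python
# Log-wage regression on human-capital, demographic, and broad-occupation
# controls, using synthetic data in place of the ACS PUMS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
acs = pd.DataFrame({
    "annual_wage": rng.lognormal(mean=10.8, sigma=0.6, size=n),
    "age":        rng.integers(22, 65, size=n),
    "education":  rng.choice(["BA", "MA", "PhD"], size=n),
    "race":       rng.choice(["white", "black", "asian", "other"], size=n),
    "hispanic":   rng.choice([0, 1], size=n),
    "female":     rng.choice([0, 1], size=n),
    "citizen":    rng.choice([0, 1], size=n),
    "veteran":    rng.choice([0, 1], size=n),
    "occ_group":  rng.choice(["core_stem", "health_stem", "other_stem",
                              "non_stem"], size=n),
})

# Dependent variable: natural log of annual wages (the actual analysis
# is further restricted to full-time, full-year workers).
acs["log_wage"] = np.log(acs["annual_wage"])

model = smf.ols(
    "log_wage ~ age + C(education) + C(race) + hispanic + female"
    " + citizen + veteran + C(occ_group)",
    data=acs,
).fit()
print(model.params)
```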
For the purposes of our study, we applied the definition of a federally-funded STEM education program used in previous GAO work. Specifically, we defined it as a program funded in fiscal year 2012 by allocation or congressional appropriation that had not been subsequently terminated and included one or more of the following as a primary objective:

- attract or prepare students to pursue classes or coursework in STEM areas through formal or informal education activities (informal education programs provide support for activities offered by a variety of organizations that give students learning opportunities outside of formal schooling through contests, science fairs, summer programs, and other means; outreach programs targeted to the general public were not included);
- attract students to pursue degrees (2-year, 4-year, graduate, or doctoral degrees) in STEM fields through formal or informal education activities;
- provide training opportunities for undergraduate or graduate students in STEM fields (this could include grants, fellowships, internships, and traineeships that are targeted to students; general research grants that are targeted to researchers who may hire a student to work in the lab were not considered a STEM education program);
- attract graduates to pursue careers in STEM fields;
- improve teacher (pre-service or in-service) education in STEM areas;
- improve or expand the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields; or
- conduct research to enhance the quality of STEM education programs provided to students.

In addition, STEM education programs may provide grants, fellowships, internships, and traineeships. While programs designed to retain current employees in STEM fields were not included, programs that fund retraining of workers to pursue a degree in a STEM field were included because these programs help increase the number of students and professionals in STEM fields by helping retrain non-STEM workers to work in STEM fields. For the purposes of this study, we defined an organized set of activities as a single program even when its funds were allocated to other programs as well. Several programs had been eliminated or consolidated into new programs since our last inventory. We included programs that had been consolidated, but we did not include programs that had since been terminated. For a list of STEM education programs by agency, including consolidated programs, see appendix IV. To identify federally-funded STEM education programs, we first developed a combined list of programs based on the findings of two previous STEM education inventories—one that we issued in 2012 and another completed by the National Science and Technology Council in 2011. Second, we shared our list with agency officials, along with our definition of a STEM education program, and asked officials to make an initial determination about which programs should remain on the list and which programs should be added to the list. If agency officials indicated they wanted to remove a program from our list, we asked for additional information. For example, programs on our initial list may have been terminated or consolidated, or may not have received federal funds in fiscal year 2012. We reviewed additional information on the programs that were not included in our 2012 inventory of STEM education programs, mainly through agency websites, program materials, or discussions with program officials. On the basis of this additional information, we excluded programs that we found did not meet our definition of a STEM education program.
We also included screening questions in the survey to provide additional verification that the programs met our definition of a STEM education program. Of the 170 programs on our original survey distribution list, seven programs did not pass our screening questions because they had been eliminated since 2012, and we determined that another five did not meet our definition of a STEM education program. In total, we identified 158 federal STEM education programs. To provide more details about some of the STEM education programs with the highest reported obligations, we conducted a more in-depth review of 13 of the largest STEM education programs from three agencies: the National Science Foundation, the Department of Education, and the National Institutes of Health at the Department of Health and Human Services. Seven of the selected programs served postsecondary students or institutions and six programs served K-12 students or teachers (see table 5). We reviewed documentation from each program, interviewed agency officials, and conducted site visits with grantees in Austin and San Francisco and phone interviews with grantees in Boston. We chose these sites based on geographic diversity and the prevalence of federal STEM grantees. We developed a web-based survey to collect information on federal STEM education programs. The survey included questions on program objectives, occupations targeted, methods used to identify targeted occupations, and factors considered when selecting grantees. We created a list of possible workforce needs using input from experts, program officials, and grantees, and asked federal STEM education programs to indicate whether each possible workforce need was a stated program objective, a potential benefit of the program, or neither. The survey also asked programs to update information provided in our survey for the 2012 report on target groups served, services provided, outcome measures, and obligations. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with six different programs in August and September 2013. To ensure that we obtained a variety of perspectives on our survey, we selected programs from six different agencies that differed in program scope, objectives, services provided, and target groups served. An independent GAO reviewer also reviewed a draft of the survey prior to its administration. On the basis of feedback from these pretests and the independent review, we revised the survey in order to improve its clarity. After completing the pretests, we administered the survey. On October 29 or November 13, 2013, we sent an e-mail message to the officials responsible for the 158 programs selected for our review, informing them that the survey was available online. In that e-mail message, we also provided them with unique passwords and usernames. We made telephone calls to officials and sent them follow-up e-mail messages, as necessary, to clarify their responses or obtain additional information. We received completed surveys from 154 programs, for a 97 percent response rate. We collected survey responses through February 14, 2014. Of the 154 federal STEM education programs that responded to our survey, 124 programs in 13 agencies primarily served students and teachers at the postsecondary level. According to our survey, these programs’ reported fiscal year 2012 obligations ranged from zero to $348 million and totaled $1.9 billion.
We identified 30 programs in 10 agencies that primarily served students and teachers at the K-12 level. According to our survey, these programs reported obligations totaling approximately $685 million in fiscal year 2012, in amounts ranging from $1,200 to $148 million. We used standard descriptive statistics to analyze survey responses. The STEM education programs in our survey received widely varying amounts of federal funding. This introduced the possibility that a few very large programs—accounting for the majority of obligations—could pursue one activity, while many small programs—accounting for the majority of programs but a small proportion of obligations—could pursue another activity. To accurately capture the survey data, we analyzed the data both in terms of the percentage of programs answering each question and the corresponding percentage of obligations. In cases where these proportions differed, we presented both. Amounts obligated for each program for fiscal year 2012 were reported to us by agency officials in response to our survey. We did not independently verify this information. Because this was not a sample survey, there are no sampling errors. To minimize other types of errors—commonly referred to as nonsampling errors—and to enhance data quality, we employed recognized survey design practices in the development of the survey and in the collection, processing, and analysis of the survey data. For instance, as previously mentioned, we pretested the survey with federal officials to minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same. We further reviewed the survey to ensure the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. To reduce nonresponse, another source of nonsampling error, we sent out e-mail reminder messages to encourage officials to complete the survey. To assess the reliability of data provided in our survey, we performed automated checks to identify inappropriate answers. We further reviewed the data for missing or ambiguous responses and followed up with agency officials when necessary to clarify their responses. While we did not verify all responses, on the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data used in this report were of sufficient quality for our purposes. The figures below show demographic information for students who received STEM degrees in the 2011-2012 academic year. Overall, degrees awarded to non-resident alien students—students in the United States on temporary visas—comprised 5 percent of all STEM degrees and 4 percent of all non-STEM degrees awarded in the 2011-2012 academic year (see fig. 15). However, degrees awarded to non-resident alien students represented a larger share of Core STEM degrees (11 percent) and a smaller share of Healthcare degrees (1 percent). Non-resident alien students were particularly concentrated at the graduate degree levels in Core STEM fields, receiving 36 percent of master’s degrees awarded and 42 percent of doctorate or professional degrees in Core STEM fields in the 2011-2012 academic year (see fig. 16). Table 6 lists the STEM fields of study and degree levels in which non-resident alien students comprised more than 30 percent of the degrees awarded. Overall, most (63 percent) of the STEM degrees awarded in the 2011-2012 academic year were awarded to women.
However, as figure 17 shows, while women received the large majority (82 percent) of Healthcare STEM degrees that year, men received the majority of Core STEM degrees (68 percent). Among the Core STEM fields, men received the majority of degrees in computer science/information technology, engineering, technician, mathematics, and physical science fields. Women received the majority of life sciences degrees (see fig. 18). Among U.S. citizens and resident aliens, Asians and Pacific Islanders received a larger share of STEM degrees (7.1 percent) compared to their share of non-STEM degrees (4.8 percent) (see fig. 19). Other groups’ share of STEM degrees was about the same as or less than their share of non-STEM degrees. Examining the data by STEM categories, however, African-Americans received a larger share of Healthcare degrees (15.1 percent) compared to their share of non-STEM degrees (12.6 percent). Overall, STEM degrees awarded to Latino/Hispanic students increased more than those awarded to other groups from the 2002-2003 to 2010-2011 academic years. STEM degrees have also increased at a higher rate among Asians and African-Americans compared to whites. The increase among African-Americans was primarily in Healthcare and Other STEM fields (see fig. 20). This appendix provides more detailed information about recent trends in STEM and non-STEM occupations. Thirty-eight percent of people with STEM bachelor’s degrees were working in STEM occupations in 2012; the majority worked in non-STEM occupations. Figure 25 shows that much smaller percentages of workers with non-STEM bachelor’s degrees or without a bachelor’s degree worked in STEM occupations. However, these workers represented about half of all workers in STEM occupations. Figure 26 shows the unemployment rates for the groups of workers shown in figure 25. Figure 27 shows some non-STEM occupations with sizable populations of workers with STEM bachelor’s degrees. The federal STEM education programs in our review, listed by agency in appendix IV, were the following:

- Aerospace Research and Career Development (ARCD) Program
- Minority University Research and Education Project (MUREP)
- Advanced Technological Education (ATE)
- Alliances for Graduate Education and the Professoriate (AGEP)
- Discovery Research K-12 (DR-K12)
- East Asia & Pacific Summer Institutes for U.S. Graduate Students (EAPSI)
- Ethics Education in Science & Engineering (EESE)
- CyberCorps(R): Scholarship for Service (SFS)
- Graduate Research Fellowship (GRF) Program
- Historically Black Colleges and Universities Undergraduate Program (HBCU-UP)
- Integrative Graduate Education and Research Traineeship (IGERT) Program
- International Research Experiences for Students (IRES)
- Louis Stokes Alliances for Minority Participation (LSAMP)
- Math and Science Partnership Program (MSP)
- Nanotechnology Undergraduate Education in Engineering
- Research Experiences for Teachers (RET) in Engineering and Computer Science
- Research Experiences for Undergraduates (REU)
- Research on Education and Learning (REAL)
- Robert Noyce Scholarship (Noyce) Program
- Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP)
- Transforming Undergraduate Education in STEM (TUES)
- Tribal Colleges and Universities Program (TCUP)
- National Defense Science and Engineering Graduate (NDSEG) Fellowship
- Army Educational Outreach Program (AEOP)
- American Chemical Society Summer School in Nuclear and Radiochemistry
- ASCR-ORNL Research Alliance in Math and Science
- Diversity in Science and Technology Advances National Clean Energy (DISTANCE)-Solar
- HBCU Mathematics, Science and Technology, Engineering and Research Workforce Development Program
- Minority Educational Institution Student Partnership Program (MEISPP)
- National Undergraduate Fellowship Program in Plasma Physics and Fusion Energy Sciences
- Pan American Advanced Studies Institute
- Summer Applied Geophysical Experience (SAGE)
- Bridges to the Baccalaureate Program
- Cancer Education Grants Program (R25)
- CCR/JHU Master of Science in Biotechnology Concentration in Molecular Targets and Drug Discovery Technologies
- Center for Cancer Research Cancer Research Interns
- Community College Summer Enrichment Program
- Educational Programs for Demography and Population Science, Family Planning and Contraception, and Reproductive Research
- Initiative for Maximizing Student Development
- Initiative to Maximize Research Education in Genomics
- National Cancer Institute Cancer Education and Career Development Program (R25)
- NIH Science Education Partnership Award (SEPA)
- NIA MSTEM: Advancing Diversity in Aging Research (ADAR) through Undergraduate Education
- Educational Programs for Population Research (R25)
- Post-baccalaureate Intramural Research Training Award Program
- Postbaccalaureate Research Education Program (PREP)
- Research Supplements to Promote Diversity in Health-Related Research
- RISE (Research Initiative for Scientific Enhancement)
- Ruth L. Kirschstein National Research Service Award Institutional Research Training Grants (T32, T35)
- EDMAP Component of the National Cooperative Geologic Mapping Program
- National Association of Geoscience Teachers (NAGT)-USGS Cooperative Summer Field Training Program
- Student Intern in Support of Native American Relations (SISNAR)
- National Center of Excellence for Aviation Operations Research (NEXTOR)
- Cooperative Agreements for Training Cooperative Partnerships
- Greater Research Opportunities Undergraduate Fellowship Program
- National Environmental Education and Training Partnership
- P3 Award: National Student Design Competition for Sustainability
- President’s Environmental Youth Awards
- Science to Achieve Results Graduate Fellowship Program

Obligations for Strengthening Predominantly Black Institutions were not exclusive to STEM activities.
STEM is one of five allowable activities, and grantees can choose to focus their projects on any of these five activities. After our survey analysis was completed and the draft report was shared with Energy, officials reported changes to the fiscal year 2012 obligations for many of their programs. The changes to Energy’s postsecondary programs summed to zero percent of total reported postsecondary obligations, and the changes to Energy’s K-12 programs summed to zero percent of total reported K-12 obligations. We determined that these changes would not materially affect our overall results or findings. Individual changes are noted in table notes below. In response to our survey, Energy reported $455,000 in obligations for the DISTANCE-Solar program, and we used that number for the analysis throughout the report. After our analysis was completed and the draft report was shared with Energy, they reported that the actual obligations for fiscal year 2012 were $365,000. This represents a 0.00 percent decrease in total reported postsecondary STEM education program obligations. We determined that this change would not materially affect our overall results and findings and therefore we present our overall report analysis with the original survey submission. In fiscal year 2012, the DISTANCE-Solar program was called the Minority University Research Associates program. After our analysis was completed and the draft report was shared with Energy, they reported that the actual obligations for fiscal year 2012 were $6,387,000. This represents a 0.05 percent decrease in total reported postsecondary STEM education program obligations. We determined that this change would not materially affect our overall results and findings and therefore we present our overall report analysis with the original survey submission. Funds were obligated in fiscal year 2011 through fiscal year 2013 for grantees. Hence, the obligation in fiscal year 2012 is $0.

Melissa Emrey-Arras, (617) 788-0534 or emreyarrasm@gao.gov.

The following staff members made key contributions to this report: George Scott, Director; Nagla’a El-Hodiri, Assistant Director; Divya Bali; James Bennett; Melinda Cordero; Keira Dembowski; Bill Keller; Jill Lacey; Brittni Milam; Rhiannon Patterson; Timothy Persons; Kathleen Peyman; James Rebbe; Ryan Siegel; Yunsian Tai; Kathleen Van Gelder; and Walter Vance.

America COMPETES Acts: Overall Appropriations Have Increased and Have Mainly Funded Existing Federal Research Entities. GAO-13-612. Washington, D.C.: July 19, 2013.
Science, Technology, Engineering, and Mathematics Education: Strategic Planning Needed to Better Manage Multiple Programs across Multiple Agencies. GAO-12-108. Washington, D.C.: January 20, 2012.
H-1B Visa Program: Reforms Are Needed to Minimize the Risks and Costs of Current Program. GAO-11-26. Washington, D.C.: January 14, 2011.
America COMPETES Act: It Is Too Early to Evaluate Programs’ Long-Term Effectiveness, but Agencies Could Improve Reporting of High-Risk, High-Reward Research Priorities. GAO-11-127R. Washington, D.C.: October 7, 2010.
Federal Education Funding: Overview of K-12 and Early Childhood Education Programs. GAO-10-51. Washington, D.C.: January 27, 2010.
Offshoring of Services: An Overview of the Issues. GAO-06-5. Washington, D.C.: November 28, 2005.
Higher Education: Federal Science, Technology, Engineering, and Mathematics Programs and Related Trends. GAO-06-114. Washington, D.C.: October 12, 2005.
A Glossary of Terms Used in the Federal Budget Process. GAO-05-734SP.
Washington, D.C.: September 2005.
Federal STEM education programs help enhance the nation's global competitiveness by preparing students for STEM careers. Researchers disagree about whether there are enough STEM workers to meet employer demand. GAO was asked to study the extent to which STEM education programs are aligned with workforce needs. GAO examined (1) recent trends in the number of degrees and jobs in STEM fields, (2) the extent to which federal postsecondary STEM education programs take workforce needs into consideration, and (3) the extent to which federal K-12 STEM education programs prepare students for postsecondary STEM education. GAO analyzed trends in STEM degrees and jobs since 2002 using 3 data sets—the Integrated Postsecondary Education Data System, American Community Survey, and Occupational Employment Statistics—and surveyed 158 federal STEM education programs. There were 154 survey respondents (97 percent): 124 postsecondary and 30 K-12 programs. In addition, GAO conducted in-depth reviews—including interviews with federal officials and grantees—of 13 programs chosen from among those with the highest reported obligations. Both the number of science, technology, engineering, and mathematics (STEM) degrees awarded and the number of jobs in STEM fields increased in recent years. The number of degrees awarded in STEM fields grew 55 percent from 1.35 million in the 2002-2003 academic year to over 2 million in the 2011-2012 academic year, while degrees awarded in non-STEM fields increased 37 percent. Since 2004, the number of STEM jobs increased 16 percent from 14.2 million to 16.5 million jobs in 2012, and non-STEM jobs remained fairly steady. The trends in STEM degrees and jobs varied across STEM fields. It is difficult to know if the numbers of STEM graduates are aligned with workforce needs, in part because demand for STEM workers fluctuates. For example, the number of jobs in core STEM fields, including engineering and information technology, declined during the recession but has grown substantially since then. Almost all of the 124 federal postsecondary STEM education programs that responded to GAO's survey reported that they considered workforce needs in some way. For example, the most common program objective was to prepare students for STEM careers. Some of these programs focused on occupations they considered to be in demand and/or related to their agency's mission. Many postsecondary programs also aimed to increase the diversity of the STEM workforce or prepare students for innovation. Most STEM programs reported having some outcome measures in place, but GAO found that some programs did not measure an outcome directly related to their stated objectives. As GAO recommended in 2012, the National Science and Technology Council recently issued guidance to help agencies better incorporate STEM education outcomes into their performance plans and reports. As agencies follow the guidance and focus on the effectiveness of the programs, more programs may measure outcomes directly related to their objectives. Of the 30 kindergarten through 12th grade (K-12) STEM education programs responding to GAO's survey, almost all reported that they either directly or indirectly prepared students for postsecondary STEM education. For example, one program worked closely with students to provide math and science instruction and supportive services to prepare them for postsecondary STEM education, while another supported research projects intended to enhance STEM learning. GAO makes no recommendations in this report. 
GAO received technical comments from the Departments of Education, Energy, and Health and Human Services; National Science Foundation; and Office of Management and Budget.
Air traffic controllers monitor and direct traffic in a designated volume of airspace called a sector. Each sector requires a separate channel assignment for controllers to communicate with aircraft flying in that sector. As the amount of air traffic grows, the need for additional sectors and channel assignments also increases. FAA’s present air-ground communications system operates in a worldwide, very high frequency (VHF) band reserved for safety communications within the 118 to 137 megahertz (MHz) range. Within this range of frequencies, FAA currently has 524 channels available for air traffic services. During the past four decades, FAA has primarily been able to meet the increased need for more channel capacity within this band by periodically reducing the space between channels (a process known as channel splitting). For example, in 1966, reducing the space between channels from 100 kHz to 50 kHz doubled the number of channels. The last channel split in 1977, from 50 kHz to 25 kHz, again doubled the number of channels available. Each time FAA reduced this space, owners of aircraft needed to purchase new radios to receive the benefits of the increased number of channels. FAA can use or assign its 524 channels several times around the country (as long as channel assignments are separated geographically to preclude frequency interference). Through channel reuse, FAA can make up to 14,000 channel assignments nationwide. While aviation literature often refers to channels and channel assignments as frequencies and frequency assignments, throughout this report we use the terms channel and channel assignment. Because the growth in air traffic during the past decade has created a need for more communications channels since the 1977 split, FAA has been increasingly concerned that the demand for channels would exceed their availability, which would cause frequency congestion. FAA first explored this issue at length at a 1990 International Civil Aviation Organization (ICAO) conference, at which the ICAO member countries addressed increasing congestion in the air traffic control communications band and the especially acute problem in the U.S. and Western Europe. Over the next 5 years, ICAO evaluated different solutions that were proposed by the conference’s participants. While the Western European countries proposed further channel splitting to increase capacity, FAA proposed a totally new air-ground communications system. FAA’s proposed technology, known as VDL-3, would be based on a new integrated digital voice and data communications technology, which would assign segments of a channel to users in milliseconds of time, thereby allowing both voice and data to travel over the same channels using one of the available time slots. Under the current system, each channel is used exclusively and continuously for voice, so the air traffic controller can communicate at all times with the aircraft. This new technology could provide up to a fourfold increase in capacity without channel splitting, thus meeting the demand for new voice channels. VDL-3 digitizes a person’s voice and sends it as encoded bits of information, which is reassembled by the receiver. Moreover, this technology could provide real-time data link on-board communications of air traffic control messages and events.
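The capacity arithmetic described above can be illustrated with a simplified calculation: halving the channel spacing doubles the raw channel count, while VDL-3 instead multiplies per-channel capacity through time slots. The figures below are raw band-width divisions; actual usable counts are lower (FAA has 524 channels for air traffic services) because portions of the band serve other purposes.

```python
# Raw channel counts in the 118-137 MHz VHF band at successive channel
# spacings, plus the VDL-3 alternative of time slots within 25 kHz channels.
BAND_HZ = (137 - 118) * 1_000_000   # 19 MHz of protected VHF spectrum

for spacing_khz in (100, 50, 25):
    raw_channels = BAND_HZ // (spacing_khz * 1_000)
    print(f"{spacing_khz} kHz spacing: {raw_channels} raw channels")

# VDL-3 keeps the existing 25 kHz spacing but divides each channel into
# up to four time slots, for up to a fourfold capacity gain with no split.
print("VDL-3: up to 4 usable slots per existing 25 kHz channel")
```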
Although ICAO adopted FAA’s proposed digital air-ground communications system VDL-3 in 1995 as its model for worldwide implementation, it also approved standards allowing Western Europe, which was then experiencing severe frequency congestion, to further reduce the spacing between channels from 25 kHz to 8.33 kHz. While this action tripled the number of channels available for assignment, it also meant that aircraft flying in Western Europe had to install new radios capable of communicating over the 8.33 kHz channels. ICAO intended that this reduction would be an interim measure until 2004, when FAA estimated that the technology it had proposed would be operational. However, FAA did not pursue developing VDL-3 in 1995, in part because its existing communications system still had available capacity to meet near-term communications needs and because the agency’s need to modernize its air traffic control system became an urgent priority. In 1998, FAA resumed developing VDL-3; however, the agency is not expected to implement this technology until 2009. Figure 1 depicts how channel splitting has increased channel capacity since 1966 and how FAA’s proposed use of VDL-3 will further increase channel capacity. FAA has identified 23 measures to improve its existing voice communications system. FAA and the U.S. aviation industry generally believe that implementing all these measures would add several years to the useful life of the existing system but would not meet aviation’s voice communications needs beyond 2009. Because increases in air traffic create the need for more channel assignments, the events of September 11, which have resulted in slower-than-expected increases in air traffic, might delay by a year or two when FAA starts to encounter problems systemwide in providing new channel assignments. Agency and industry representatives agree that it is not possible to precisely predict when the existing system with its planned improvements will no longer meet aviation’s needs. As a result, FAA plans to assess annually whether this system will be capable of meeting the projected need for more channel assignments for at least 5 years into the future. FAA plans to release the first of these annual assessments in September 2002. While the focus of FAA’s efforts has been to meet aviation’s need for voice communications through 2009, FAA recognizes that its data communications needs are evolving. The agency expects to increase its use of data communications to help alleviate voice congestion and to help controllers and pilots accurately exchange more information. Because FAA’s current system cannot do this, it has been leasing data link services from ARINC. However, even with the planned improvements, this service will not be able to meet FAA’s projected need for more data communications. As FAA relies more on data communications, this leased system will not be able to meet the agency’s need to prioritize messages that must be delivered expeditiously. Recognizing that accurately projecting the growth in aviation’s need for data link communications beyond 15 years would be difficult, FAA is designing a system to provide a sevenfold increase in capacity to meet future needs. During the 1990s, several of FAA’s studies found that, historically, increases in air traffic were closely related to the growing need to assign more channels for voice communications (see fig. 2).
In its most recent study about the growing need for more channel assignments for voice communications, FAA found that this need had grown, on average, about 4 percent annually (about 300 new channel assignments per year) since 1974 (see fig. 3). This growth paralleled the increase in domestic air travel during that time frame. Despite the recent downturn in air traffic resulting from a recession and the September 11 terrorist attacks, FAA expects demand to resume its historical 4 percent annual growth within a year or two. Currently, FAA’s voice communications system is limited to a maximum of 14,000 channel assignments. Because increases in air traffic require more new channel assignments, FAA expects that providing them in some metropolitan areas will become increasingly difficult. If the system is left unchanged, FAA has concluded that, as early as 2005, it could no longer fully support aviation’s need for voice communications, and that in such high-traffic metropolitan areas as New York, Chicago, and Los Angeles the need for additional assignments could be evident sooner. Because FAA has delayed NEXCOM’s implementation until 2009, the agency’s 23 planned improvement measures are designed to add approximately 2,600 additional channel assignments for voice communications. (See table 1.) FAA has classified these initiatives, which involve a variety of technical, regulatory, and administrative changes, according to how soon it expects to implement them. However, FAA recognizes that there is no guarantee that all of these measures can be implemented, because some of them largely depend on gaining agreement from other entities, such as other federal agencies and the aviation community, and some may involve international coordination. FAA also recognizes that the exact degree of improvement resulting from the totality of these measures cannot be precisely projected, and actual test results could show less gain than anticipated. Many of these initiatives involve reallocating channels being used for purposes other than air traffic services and increasing FAA’s flexibility to use already assigned channels. For example, FAA is reviewing its policy for assigning channels to such special events as air shows to determine if fewer channels could be assigned to them so that channels could be used for other purposes. While it is not possible to predict exactly when FAA’s existing voice communications system will run out of available channel assignments, agency and aviation representatives concur that, without the 23 improvement measures, the system will be strained to provide enough channel assignments. According to a MITRE Corporation study completed in 2000, even if the need for more channel assignments for voice communications were to grow at 2 percent per year (instead of FAA’s projected growth of 4 percent per year), by 2005 or sooner it would be difficult for FAA to meet the need for air traffic communications in major metropolitan areas. MITRE also projected that the shortage of available channel assignments would become a nationwide problem by 2015 or sooner. In 2000, FAA encountered its first shortage problem when it had to reassign a channel in the Cleveland area from one location to another that it viewed as a higher priority. Figure 4 shows MITRE’s analysis of how the projected demand for more voice communications capacity will intensify if FAA does nothing to improve this system.
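A back-of-the-envelope projection shows how quickly 4 percent annual growth approaches the 14,000-assignment ceiling. The starting count below is an assumption inferred from the figures above (about 300 new assignments representing roughly 4 percent of the base), and the raw ceiling is an upper bound, since geographic reuse constraints bind earlier in major metropolitan areas.

```python
# Assumed current base: 300 new assignments per year at ~4 percent growth
# implies roughly 300 / 0.04 = 7,500 existing assignments.
assignments = 300 / 0.04
year = 2002
while assignments < 14_000:
    assignments *= 1.04
    year += 1
print(f"Raw ceiling of 14,000 assignments reached around {year}")
```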
Currently, FAA is leasing ARINC’s Aircraft Communications Addressing and Reporting System (ACARS) to provide data link communications that are not time critical, such as forwarding clearances to pilots prior to takeoff. Because this analog system is also reaching its capacity to handle data link communications, FAA plans to use ARINC’s new digital data communications system, known as Very High Frequency Digital Link Mode 2 (VDL-2), until 2009. By then, FAA expects to use its VDL-3 system, which is being developed to integrate voice and data communications, to meet aviation’s needs for about 1,800 channel assignments for data communications over the next 15 years and to prioritize messages that must be delivered expeditiously, a capability VDL-2 cannot provide. Because FAA believes that aviation’s need for data communications cannot be realistically projected beyond 15 years, it is designing a system to provide a sevenfold increase in capacity for data communications, thereby providing what it believes is excess capacity sufficient to meet aviation’s future needs. In consultation with stakeholders from the aviation industry, FAA selected VDL-3 as the preferred solution to meet its future communications needs. During the 1990s, FAA collaborated with its stakeholders to analyze many different communications systems, as well as variations of them, as potential candidates to replace its existing communications system. As a result of these studies, FAA eliminated several designs because they did not meet some of the fundamental needs established for NEXCOM. For example, FAA found that Europe’s Very High Frequency Digital Link Mode 4 (VDL-4) technology was too early in development to assess and that it would not provide voice communications, FAA’s most pressing need. Moreover, a vendor of VDL-4 recently told us that this technology still needed additional development to meet FAA’s communications needs and that the international community had not yet validated it as a standard for air traffic control communications, which could take at least an additional 3 years. In March 1998, FAA rated VDL-3 as the best of the six possible technologies to meet its future communications needs and the most likely to meet its schedule with the least risk. FAA found that VDL-3, the international model for aviation communications, could:

- provide up to a fourfold increase in channel capacity (estimated at three- to fourfold under initial deployment scenarios);
- transmit voice and data communications without interference;
- increase the level of security;
- provide voice and data communications to all users with minimal equipment replacement;
- require no additional channel splitting, thereby reducing the need for engineering changes; and
- reduce the number of ground radios required by FAA, because each radio could accommodate up to four channels within the existing 25 kHz channel spacing.

Although FAA and its stakeholders thought that each of the five other technologies had some potential to satisfy a broad range of their future needs, each was rejected during the 1998 evaluation process. (See table 2.) Academia and other experts have concluded that FAA’s rationale for rejecting alternative technologies in 1998 remains valid today. Specifically, the technical challenges facing these technologies have not been sufficiently resolved to allow FAA to deploy an initial operating system by 2005.
For example, while satellite technology is used to provide voice and data communications across the oceans and in remote regions, it is expensive, does not support the need for direct aircraft-to-aircraft communications, and does not meet international standards for air traffic control communications. Representatives from the National Aeronautics and Space Administration (NASA) told us that emerging technologies that could meet FAA’s need for voice and data communications could be developed and available by 2015. However, these representatives further indicated that while such technologies might be mature enough to provide communications services, meeting all of the requirements associated with air traffic control safety systems might require additional time. NASA officials commented that FAA initiated its plans for its new communications system at the outset of the emerging wireless technology explosion and was not able to assess and integrate any of these emerging technologies into the NEXCOM architecture. However, they noted that the telecommunications field is changing rapidly, and FAA and the aviation industry will need to continually assess their requirements and keep abreast of emerging technologies that could better meet their future communications needs. FAA’s planned approach for NEXCOM is to implement VDL-3 in three segments, as shown in figure 5. Currently, FAA’s senior management has approved investments for only the first segment. If FAA cannot demonstrate that VDL-3 can successfully integrate both voice and data in a cost-effective manner, FAA plans to implement a backup approach to meet the need for more channel capacity. FAA’s backup follows the Western European approach:

- For analog voice communications, reduce the 25 kHz space between channels to 8.33 kHz.
- For digital data communications, rely on a commercial vendor that is developing a technology to support aviation’s need for data, known as VDL-2.

However, this approach remains a backup because it triples, rather than quadruples, voice channel capacity. Furthermore, it does not resolve the issues of radio interference and loss of communications that now confront FAA, nor does it meet all of the requirements for air traffic control data link communications. Before selecting VDL-3 as the technology for NEXCOM, FAA needs to demonstrate the technical and operational merits of VDL-3, certify VDL-3 as a “safety critical system,” and prove its cost-effectiveness to the aviation industry. To help address these issues, the FAA Administrator formed the NEXCOM Aviation Rulemaking Committee (NARC) in 2000. The NARC, composed of representatives from the aviation industry and other groups, submitted its final report in September 2001, which included recommendations to expedite the resolution of technical and operational issues involving NEXCOM. To demonstrate VDL-3’s technical and operational merits, FAA has scheduled a series of three tests of this technology, beginning in October 2002 and ending in October 2004. The first test is designed to demonstrate the quality of voice communications and the integration of voice and data communications. A key component of the second test is to demonstrate that new digital ground radios can work with new digital aircraft equipment and other equipment in FAA’s air traffic control system. Finally, in the third test, FAA plans to validate that VDL-3 can be certified as safe for aircraft operations.
Moreover, making VDL-3 fully operational will require FAA and users to undertake a phased installation of tens of thousands of new pieces of equipment. In addition to FAA and users installing radios with new transmitters and receivers, FAA would need to install new voice switches and workstations. FAA also needs to ensure that all the new equipment required for NEXCOM will be compatible with FAA’s existing equipment, especially the numerous types of voice switches as well as the local and wide area networks. Therefore, FAA estimates that it will take 5 years following the successful conclusion of its demonstration tests to install the new ground equipment, while the airlines install new aircraft equipment. Figure 6 shows FAA’s schedule to implement both voice and data digital communications. Because communications are critical to ensuring safe aircraft operations, FAA is developing a process to certify that VDL-3 and the new equipment it requires could be used in the National Airspace System. In April 2002, FAA’s teams responsible for developing and certifying VDL-3 drafted a memorandum of understanding that describes their respective responsibilities. They agreed to maintain effective communications among themselves as well as with the manufacturers developing VDL-3 equipment. (See table 3 for the schedule for certifying the radios that will be used with VDL-3.) To FAA’s credit, the agency is proactively seeking certification before making a final decision on VDL-3. The issue of cost-effectiveness was raised by the NARC because it wanted FAA to fully analyze the airlines’ transition to digital radios before the agency requires their use. Convincing enough users to purchase VDL-3 radios might be difficult because some air carriers had recently bought 8.33 kHz radios for operation in Europe and would not be eager to purchase additional equipment. As part of its cost-benefit analysis, FAA is assuming a 30-year life cycle for NEXCOM; however, changing requirements, coupled with rapid developments in telecommunications technology, could shorten this life cycle. Unless FAA analyzes the costs and benefits of NEXCOM under different confidence levels for other potential life cycles, considering the impact of changing requirements and the effects of emerging technologies, it might find it more difficult to enlist the continued support of the aviation community for NEXCOM. FAA plans to begin analyzing the cost-effectiveness of NEXCOM in mid-2002, publish a notice of proposed rulemaking by January 2004, complete its cost-benefit analysis by mid-2004, and publish its final rulemaking by June 2005. FAA officials agreed that it is important to continually evaluate the requirements of the future system and whether emerging technologies could reduce VDL-3’s cost-effectiveness prior to making the final selection.
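A life-cycle sensitivity check of the kind described above could take the following shape: compute the net present value of the program's benefits under alternative useful lives and compare. The annual benefit figure and the 7 percent discount rate below are illustrative assumptions, not program estimates.

```python
# Net present value of a constant annual net benefit over alternative
# NEXCOM life cycles; all figures are illustrative.
ANNUAL_NET_BENEFIT = 100.0   # $ millions per year (assumed)
DISCOUNT_RATE = 0.07         # assumed real discount rate

for life_years in (15, 20, 30):
    npv = sum(ANNUAL_NET_BENEFIT / (1 + DISCOUNT_RATE) ** t
              for t in range(1, life_years + 1))
    print(f"{life_years}-year life cycle: NPV = ${npv:,.0f} million")
```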
Throughout its rulemaking process, program officials stressed that they plan to continue involving all key FAA organizations and the aviation industry. FAA’s approach for selecting its NEXCOM technology appears prudent. The FAA officials managing NEXCOM have worked with the aviation industry and involved other key FAA organizations to help ensure that the technical and operational, safety, and cost-effectiveness issues are resolved in a timely manner. However, FAA is only in the early stages of resolving these three issues, and the program’s continued success hinges on FAA’s maintaining close collaboration with major stakeholders. FAA’s follow-through on the development of a comprehensive cost-benefit analysis, which considers how changing requirements and emerging technologies could affect the cost-effectiveness of VDL-3, will be key to this success. Otherwise, the aviation community might not continue to support FAA in developing NEXCOM, as it now does. To make the most informed decision in selecting the technology for NEXCOM and to continue to receive support from the aviation community, we recommend that the Secretary of Transportation direct the FAA Administrator to assess whether the requirements for voice and data communications have changed and the potential impact of emerging technologies on VDL-3’s useful life as part of its cost-effectiveness analysis of NEXCOM. We provided the Department of Transportation, the Department of Defense, and the National Aeronautics and Space Administration with a draft of this report for review and comment. The Department of Defense provided no comments. The Product Team Lead for Air/Ground Voice Communications and officials from Spectrum Policy and Management, FAA, indicated that they generally agreed with the facts and recommendation. These officials, along with those from the National Aeronautics and Space Administration, provided a number of clarifying comments, which we have incorporated where appropriate. To determine the extent to which FAA’s existing communications system can effectively meet its future needs, we interviewed officials from FAA’s NEXCOM program office, the agency’s spectrum management office, union officials representing the air traffic controller and maintenance technician workforces, representatives of the MITRE Corporation, and members of the NARC, an advisory committee formed by FAA to help ensure that NEXCOM meets the aviation industry’s needs. We reviewed documentation on the current status of FAA’s existing air-ground communications system, as well as documentation on potential measures FAA plans to take to increase the channel capacity of its existing system. To determine what FAA did to help ensure that its preferred technology for NEXCOM will meet aviation’s future needs, we interviewed officials from FAA’s NEXCOM program office; officials from the Department of Defense, the National Aeronautics and Space Administration, and Eurocontrol; an expert in satellite communications from the University of Maryland; and contractors who offer VDL-2 and VDL-4 communications services. We reviewed documentation indicating to what extent varying technologies could meet FAA’s time frames for implementing NEXCOM. We also reviewed documentation indicating how well varying technologies could meet FAA’s specifications for NEXCOM. We did not perform an independent verification of the capabilities of these technologies. Additionally, we reviewed studies performed by FAA in collaboration with the U.S. aviation industry to assess alternative technologies for NEXCOM that led the U.S. aviation community to endorse FAA’s decision to select VDL-3 as its preferred technology for NEXCOM. To identify issues FAA needs to resolve before it can make a final selection of NEXCOM’s technology, we interviewed officials from FAA’s NEXCOM program office as well as members of the NARC. We also reviewed NEXCOM program office documentation that prioritizes the program’s risks, assesses their potential impact on the program’s cost and schedule, and describes the status of FAA’s efforts to mitigate those risks.
In addition, we reviewed the NARC’s September 2001 report that made recommendations to FAA for modernizing its air-ground communications system. We conducted our review from September 2001 through May 2002, in accordance with generally accepted government auditing standards. We are sending copies of this report to interested Members of Congress; the Secretary of Transportation; the Secretary of Defense; the Administrator, National Aeronautics and Space Administration; and the Administrator, FAA. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3650. I can also be reached by E-mail at dillinghamg@gao.gov. Key contributors are listed in appendix I. In addition to those individuals named above, Nabajyoti Barkakati, Geraldine C. Beard, Jeanine M. Brady, Peter G. Maristch, and Madhav S. Panwar made key contributions to this report.
The Federal Aviation Administration (FAA) provides air-ground voice and data communications for pilots and air traffic controllers to safely coordinate all flight operations, ground movement of aircraft at airports, and in-flight separation distances between aircraft. However, the anticipated growth in air traffic, coupled with FAA's efforts to reduce air traffic delays and introduce new air traffic services, will create a demand for additional channels of voice communications that FAA's current system cannot provide. FAA and the aviation industry agree that the existing communications system, even with enhancements, cannot meet aviation's expanding need for communications. To ensure that the technology it wants to use for Next Generation Air/Ground Communications (NEXCOM) will meet its future needs, FAA, in collaboration with the aviation industry, conducted a comparative analysis of numerous technologies to assess each one's ability to meet technical requirements, minimize program risk, and meet the agency's schedule. However, before making a final decision on the technology for NEXCOM, FAA will need to efficiently address three major issues: whether the preferred technology is technically sound and will operate as intended, whether the preferred technology and the equipment it requires can be certified as safe for use in the National Airspace System, and whether it is cost effective for users and the agency.
PRWORA ended AFDC, which provided states with federal funds to share states’ costs for monthly cash assistance to eligible low-income families, and created TANF. Congress has provided states with $16.5 billion per year in fixed federal TANF funding to cover cash benefits, administrative expenses, and services primarily targeted to needy families; the amount does not vary according to the number of cash assistance recipients, referred to as the TANF caseload. Under TANF, states are given flexibility in setting various welfare program policies. For example, states generally determine cash assistance benefit levels and eligibility requirements. States are also generally allowed to spend TANF funds on other services as long as these services meet TANF purposes, which are: (1) to provide assistance to needy families so that children may be cared for in their own homes or homes of relatives; (2) to end dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) to prevent and reduce out-of-wedlock pregnancies; and (4) to encourage two-parent families. Federal law sets some conditions for states receiving federal funds for TANF. For example, states are required to maintain a specified level of their own past spending on certain welfare programs to receive all of their TANF funds, referred to as state maintenance of effort (MOE). In addition, states must ensure that a minimum percentage of families receiving cash assistance meet work participation requirements set in law, referred to as the work participation rate. Activities creditable towards meeting work participation rates are defined in federal law and are generally focused on participants gaining employment, work-related skills, and vocational education. States that do not meet minimum work participation rates may be penalized by a reduction in their block grant. States can use various policy options to help them meet their work participation rates, such as reducing cash assistance caseloads and spending state funds for TANF purposes above the required MOE amount. In addition, states are limited in the amount of time they can provide federal cash assistance to families. In general, states may not use federal TANF funds to provide cash assistance to a family that includes an adult who has received cash assistance for 5 years or more. Such time limits do not apply to other TANF-funded services. The Deficit Reduction Act of 2005 reauthorized the TANF block grant and included changes expected to strengthen the work participation rate requirement for states, among other changes. TANF is authorized through March 27, 2013. Federal law sets forth the basic TANF reporting requirements for states. For example, states are required to provide information and report on their use of TANF funds to HHS through quarterly reports on demographic and economic circumstances and work activities of families receiving cash assistance, state TANF plans outlining how each state intends to run its TANF program, and quarterly financial reports providing data on federal TANF and state MOE expenditures, among other things. HHS reviews state information and reports to ensure that states meet the conditions outlined in federal law. For example, HHS uses information on demographic and economic circumstances and work activities of families receiving cash assistance to determine whether states are meeting work participation rates. For quarterly financial reports, HHS collects information on two types of state expenditures:
1. Assistance, which we refer to throughout the report as cash assistance, primarily includes monthly cash payments directed at ongoing, basic needs.

2. Nonassistance, which we refer to throughout the report as non-cash services, can include any other services meeting TANF purposes. These include services such as job preparation activities, child care and transportation assistance for parents who are employed, family formation efforts, and child welfare services, as well as some cash benefits such as non-recurring short-term benefits and refundable tax credits to low-income working families.

The distinction between cash assistance and non-cash services is important because only families that receive cash assistance are included in the work participation rate calculation and are subject to time limits on receiving federally-funded cash assistance. Such conditions do not apply to families who receive non-cash services. Amid concerns regarding limited information on TANF expenditures, Congress included additional reporting requirements in the Claims Resolution Act of 2010, which extended TANF authorization through September 2011. The act required states to submit additional information to HHS on nonassistance (or non-cash services) broadly categorized on HHS’s expenditure reporting form as either “other” or “authorized solely under prior law” for March 2011 and April through June 2011. The act only required these reports in 2011 and did not require ongoing reporting for following years. Expenditures in these categories made up nearly 28 percent of all federal TANF and state MOE spending for non-cash services nationwide in fiscal year 2011, and the resulting reports indicated that over half of these expenditures were for child welfare services. The major contrasts between the funding structure of the TANF block grant and its predecessor became apparent in the early years of TANF. When TANF was first implemented in fiscal year 1997, on average over 3.9 million families were receiving cash assistance every month. This number declined by over half within the first 5 years of TANF and averaged about 1.9 million families in fiscal year 2011. The composition of the overall TANF caseload also changed, with the percentage of “child-only” cases increasing from about 23 percent from July through September 1997 to over 40 percent in fiscal year 2010. These cases consist of families receiving cash assistance on behalf of children only, in contrast to other cases in which adults in the families also receive benefits on their own behalf. Generally, in child-only cases, the parent or adult caregiver is not eligible for benefits for one or more of a variety of reasons, such as receipt of other federal benefits or immigration status. With the financial structure of the block grant, states have generally maintained access to their full TANF block grant allocation each year and have still been required to meet minimum MOE requirements, even as cash assistance caseloads declined. We examined issues related to the federal-state fiscal partnership under TANF in 2001 amid concerns that states would replace their own spending with federal TANF funds—thereby freeing up state funds for other purposes, including tax relief. Although we have not updated this work, we found at that time that the MOE requirement, in many cases, limited the extent to which states used their federal funds to replace state funds.
Declining cash assistance caseloads also freed up federal TANF funds that states could save under a “rainy day fund” for use in future years, providing states additional flexibility in their budget decisions. In fact, we reported in 2010 that many states had some TANF reserves that they drew down to meet increasing needs in the recent economic downturn. Over time, states also used TANF flexibility to shift spending to non-cash services. In fiscal year 1997, nationwide, states spent about 23 percent of federal TANF and state MOE funds on non-cash services. In contrast, states spent almost 64 percent of federal TANF and state MOE funds for these purposes in fiscal year 2011 (see fig. 1). The shift in combined federal TANF and MOE spending over time is also reflected in federal and state spending when considered separately. In fiscal year 1997, nationwide, states spent about 23 percent of federal TANF funds for non-cash services, compared to about 58 percent in fiscal year 2011 (see fig. 2). An even greater shift occurred in MOE spending patterns over time. While in fiscal year 1997, nationwide, states spent about 23 percent of state MOE funds for non-cash services, this rose to about 70 percent in fiscal year 2011. The increased emphasis on non-cash services is widespread among the states. Thirty-four states spent half or more of their federal TANF funds for non-cash services in fiscal year 2011. Fifteen of these states spent three-quarters or more of their federal TANF funds in this way (see fig. 3). The move away from traditional cash assistance toward non-cash services by states is not necessarily driven by reduced need for cash assistance among low-income families. Several factors have affected the early decline and continued low levels of cash assistance since states implemented TANF. The initial decline occurred during a strong economy, when federal support for work supports like child care increased and TANF provided new program emphasis on work. Many former welfare recipients increased their income through employment, and employment rates among single parents increased. At the same time that some families worked more and had higher incomes, others had incomes that left them still eligible for cash assistance. However, many of these eligible families were not participating in the program. According to our estimates in a 2010 report, the vast majority—87 percent—of the caseload decline through 2005 can be explained by the decline in eligible families participating in the program, in part because of changes to state welfare programs. These changes included mandatory work requirements; changes to application procedures; lower benefits; policies such as lifetime limits on assistance; diversion strategies such as providing one-time, non-recurring benefits instead of monthly cash assistance to families facing temporary hardships; and sanctions for non-compliance, according to a review of the research. Among eligible families who did not receive cash assistance, 11 percent did not work, did not receive means-tested disability benefits, and had very low incomes. While we have not updated this analysis, some recent research shows that this potentially vulnerable group may be growing. In addition, the relationship between the number of families in poverty and those receiving cash assistance through TANF is not as strong as it has been in the past (see fig. 4).
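One way to see why a falling caseload does not necessarily signal falling need is to note that the caseload is roughly the product of the number of eligible families and the share of eligible families who participate. The following decomposition uses hypothetical figures chosen only for illustration; they are not GAO estimates:
\[
\text{caseload} \approx \text{eligible families} \times \text{participation rate}
\]
\[
5{,}000{,}000 \times 0.80 = 4{,}000{,}000 \qquad \longrightarrow \qquad 5{,}000{,}000 \times 0.40 = 2{,}000{,}000
\]
In this hypothetical, the caseload falls by half even though the number of eligible families, a rough proxy for need, is unchanged; the entire decline reflects lower participation among eligible families, which is the pattern our 2010 estimates suggest dominated the actual decline through 2005.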
In fiscal year 2011, nationwide, the top areas of state spending of federal TANF funds for non-cash services were child welfare, emergency aid, and other services; job preparation and work activities; and work supports including child care. Target populations for services and delivery methods differed within and across these three spending areas. State decisions on how to allocate funding for non-cash services were influenced by state priorities and TANF's funding structure, according to officials we interviewed. In fiscal year 2011, nationwide, states spent federal TANF funds for non-cash services in common areas including child welfare, emergency aid, and other services; job preparation and work activities; and work supports including child care. These spending areas accounted for 70 percent of over $8.7 billion in federal TANF funds spent on non-cash services nationwide that year (see fig. 5). As shown in figure 6, based on each state's spending for non-cash services, these areas—child welfare, emergency aid, and other services; job preparation and work activities; and work supports—also represented the three areas most frequently emphasized by states. For example, 18 states spent the largest percentage of their federal TANF funds for non-cash services for child welfare, emergency aid, and other services, and 17 states spent the largest percentage for job preparation and work activities. The spending area referred to as child welfare, emergency aid, and other services includes a range of services categorized as “authorized solely under prior law” and “other,” which were primarily child welfare services. According to expenditures reported by states in HHS's report to Congress required by the Claims Resolution Act of 2010 for April through June 2011, states spent over 54 percent of federal funds categorized as “authorized solely under prior law” and “other” combined on child welfare services. States spent an average of 29 percent of their federal TANF funds for non-cash services in this area, ranging from under 5 percent in 12 states to over 85 percent in 2 states. TANF requires each state to engage a specified percentage of families receiving cash assistance in work or work-related activities, and combined, states had spending on job preparation and work activities totaling over $1.9 billion in fiscal year 2011. Nationwide, 17 states had these services as a top spending area for federal TANF funds for non-cash services that same year. Overall, states spent an average of about 25 percent of their federal TANF funds for non-cash services in this area, ranging from under 5 percent in eight states to 79 percent in one state. Expenditures are not reported in a way that allows us to determine what portion of spending in this area goes to those receiving cash assistance versus other eligible low-income individuals. Eight states had work supports as a top spending area for federal TANF funds for non-cash services in fiscal year 2011. We reported in 2006 that growth in TANF spending for work supports, particularly for child care, reflected state efforts to support employment, as these supports helped many families formerly receiving cash assistance maintain jobs. States spent an average of about 13 percent of their federal TANF funds for non-cash services in this area, ranging from under 5 percent in 25 states to 67 percent in 1 state.
While states spent a large portion of their federal TANF funds in these areas, we found in our interviews with selected states that target populations for services and delivery methods can differ. The following provides examples of these differences in our selected states for child welfare, emergency aid, and other services; job preparation and work activities; and work supports. Among our selected states, federal TANF funds were used to support child welfare services, such as child abuse hotlines, investigative and legal services, child protection, and preventive services, as well as emergency aid, such as clothing and shelter. Child welfare services are generally provided to children and their families to prevent the occurrence of child abuse or neglect, to help stabilize the family and prevent the need to remove the child from the home if abuse has occurred, and to improve the home and enable the child to reunite with his or her family if the child has been removed from the home. Officials in several of our 10 selected states said that TANF funds helped expand existing child welfare programs that were also funded with other federal sources, such as Title IV-E of the Social Security Act for foster care payments and adoption assistance, Medicaid for health care coverage for low-income individuals including children, Title IV-B of the Social Security Act for child and family services to promote the welfare of children, and Social Services Block Grant (SSBG) funds for states to provide social services to meet certain needs of individuals residing within each state. The officials noted that TANF's flexibility allowed them to meet budgetary needs in this area. One study shows that states rely on federal TANF funds to help support children and families served by state child welfare agencies (see fig. 7). In addition to child welfare services, selected states used funds in this spending area to provide a variety of other services. For example, the District of Columbia used federal TANF funds to support homeless shelters, provide case management, and conduct home visits to families formerly receiving cash assistance. Among our selected states, job preparation and work activities included job readiness training related to resume-writing and interview preparation, help with the job search process, skills training, and subsidized employment. These activities provided work-related assistance that typically counts toward the state's work participation requirement, and that the state must track for reporting and compliance purposes. Officials in one selected state noted that they also provided activities such as English as a Second Language courses that do not count toward meeting work participation requirements. Officials in 5 of our 10 selected states said they provide services like resume and interview assistance through contractors or directly through the state. While selected states provided similar services, the populations served and delivery methods often differed. For example, California targets its non-cash services to families receiving cash assistance, with the exception of those receiving short-term aid in an effort to divert them from the caseload. Its TANF-funded services promote job preparation and work activities directed at this population. Other states we reviewed said they provide certain non-cash services to low-income families regardless of whether they receive cash assistance.
For example, Arkansas and Washington use federal funds from TANF to partner with local colleges and businesses to provide tailored education and training opportunities designed to meet the needs of local industries. Arkansas officials said that the state's Career Pathways program provides eligible individuals who have children, such as cash assistance recipients and those with incomes up to 250 percent of the federal poverty line, with education and career training at participating community colleges for high-demand jobs. Arkansas officials noted that the program was originally going to be supported using federal funds under the Workforce Investment Act, but these funds were not available, so TANF funds were used instead. Meanwhile, Florida and Utah coordinate work-related services with those provided through the Workforce Investment Act one-stop center system, through which job seekers can access most federally-funded employment and training programs and services. Among our selected states, work supports primarily included child care subsidies or vouchers for low-income families that are working, which may include those receiving cash assistance. Selected states provided child care services similarly through statewide child care systems, counties, or contract vendors. Officials in several selected states said they use TANF funds to provide child care services in combination with federal funds from the Child Care and Development Fund (CCDF), which helps states provide child care subsidies for low-income families. This practice of using both TANF and CCDF funds for child care services was also noted in our previous work, which indicated that states use a combination of TANF, CCDF, TANF funds transferred to CCDF, SSBG, and state funds to provide child care subsidies to low-income families. Officials in several of our selected states said that TANF funds helped them address unmet needs and expand services provided through CCDF to larger populations. However, they also noted that even with these combined funding sources, they have had waitlists for child care subsidies in their state. Our prior work shows that waitlists are not always an accurate indicator of need. For example, in our 2005 and 2010 reports on the decline in the number of children served by CCDF, we noted that states have made changes since 2001 that could decrease the number of families that can access child care but could also provide larger subsidies to those who receive services. These included eligibility and enrollment changes, increased provider payment amounts, and increased co-payment amounts for families (see fig. 8). An official we spoke with in one state said that the state does not use waitlists and instead adjusts key features of its child care subsidy program, such as eligibility criteria, to match the resources it has available. These adjustments allow the state to avoid waitlists but also leave some families that could potentially benefit from the program unable to participate. For additional information on selected states' TANF programs and spending for non-cash services, see appendix III. TANF's funding structure has given states flexibility in making decisions regarding non-cash services. As mentioned earlier, the dramatic caseload declines during the first few years of TANF's implementation allowed states to spend federal funds not used on cash assistance for new or existing non-cash services.
For example, Louisiana officials said their state's caseload declines freed up federal TANF funds for new programs to encourage marriage, provide pre-kindergarten services, and help prevent out-of-wedlock pregnancies. In fiscal year 2010, Louisiana spent 71 percent of its federal TANF funds for non-cash services on these efforts. Further, they noted that caseloads continued to decline or stayed the same, since many families that would have been eligible for cash assistance left the state following Hurricane Katrina. Officials in several other selected states also said that federal TANF funds were spent on existing or new programs according to state legislative priorities, and, as a result, funds are often allocated to and administered through multiple state and local agencies. This is in contrast to TANF's predecessor program, AFDC, which was typically administered through state welfare agencies. More specifically, in 2 of our 10 selected states, officials said that federal TANF funds were allocated directly to a lead agency, usually the state TANF office, which may have allowed it to focus funds in specific areas. For example, in Utah, federal TANF funds were generally provided first to its Department of Workforce Services. While the department had agreements with other state agencies to provide services, 63 percent of its federal TANF funds for non-cash services in fiscal year 2010 were used for job preparation and work activities. Similarly, in Louisiana, federal TANF funds were generally provided to the state Department of Child and Family Services, which used interagency agreements to support its emphasis on the family formation and out-of-wedlock pregnancy prevention efforts mentioned above. In contrast, federal TANF funds can be allocated to multiple agencies through a state's annual legislative budget process. For example, in Florida, federal TANF funds went to several agencies that provided a variety of services to low-income families as well as those receiving cash assistance (see fig. 9). Florida officials said legislative priorities can shift from year to year, and recent emphasis has been on out-of-wedlock pregnancy prevention programs and child welfare initiatives, such as protective investigations and adoption subsidies. States' use of federal TANF funds for a broad array of non-cash services beyond traditional cash assistance can create tensions and trade-offs in state funding decisions, particularly in times of severe fiscal constraints. Officials in three of our selected states cited tensions between the need to provide cash assistance and the need to provide other state services. They noted that this has become more apparent as the number of families needing cash assistance increased during the recent economic downturn. Officials in five selected states cited recent spending reductions in non-cash areas including job preparation and work activities, and officials in one state noted the need to reduce family formation efforts, particularly after American Recovery and Reinvestment Act of 2009 funds were no longer available. To help manage costs, states may make changes to key elements of their cash assistance programs, such as adjusting eligibility criteria, benefit levels, and other features. For example, officials in one selected state said that instead of reducing spending for non-cash services to meet increased need for cash assistance during the recession, the state recently enacted more stringent eligibility criteria and reduced benefit amounts for cash assistance.
They explained that their state legislature allocates TANF funds to the cash assistance program just like any other program for non-cash services, and thus funding is not shifted between programs to accommodate increased need. Almost no federal requirements or benchmarks exist regarding eligibility criteria, benefit amounts, or the percentage of low-income families to be covered by a state's cash assistance program. Officials in 9 of our 10 selected states said that the state allocates funds for cash assistance based on caseload projections using data from previous years. Remaining funds are then available for non-cash services. Although the TANF block grant has evolved into a flexible funding stream that states use to support a broad range of allowable services while also serving as the nation's major cash assistance program for low-income families with children, the accountability framework currently in place in federal law and regulations has not kept pace with this evolution. As a result, there is incomplete information available for assessing TANF performance. Under federal law and regulations, states are required to submit several reports to HHS related to TANF. These generally include: quarterly reports on demographic and economic circumstances and work activities of families receiving cash assistance; state TANF plans outlining how each state intends to run its TANF program, generally filed every two years; quarterly financial reports providing data on federal TANF and state MOE expenditures; quarterly state MOE reports providing data on families receiving cash benefits under separate state programs, which are funded entirely with state MOE funds and are not subject to certain federal requirements; and annual single audit reports resulting from required audits of nonfederal entities that expend federal funds. Taken together, this set of reports and the information provided serves as the accountability framework in place to help HHS and Congress ensure that states use TANF funds in keeping with the block grant's purposes and identify any program improvements that may be warranted. Yet, these numerous requirements provide limited information on state strategies for using their TANF funds for non-cash services. Our past work has shown that a sound accountability framework includes (1) defining desired outcomes, (2) measuring performance to gauge progress, and (3) using performance information as a basis for decision-making. This requires complete, accurate, and reliable data. However, this type of performance information is not available for a majority of TANF funds nationwide. There are no reporting requirements mandating performance information specifically on families receiving non-cash services or their outcomes, or information related to TANF's role in filling needs in other areas like child welfare, even though this has become a more prominent spending area for TANF funds in many states. These reporting gaps limit the information available for oversight of TANF block grant funds by HHS and Congress. State TANF plans serve as a potential source of useful program information. However, they currently provide limited descriptions of a state's goals and strategies for its TANF block grant, including how non-cash services fit into these goals and strategies, and the amount of information in each plan can vary by state. Federal law includes general language on what should be included in the state TANF plan.
For example, the law states that plans are to outline how a state will “conduct a program…that provides assistance to needy families with (or expecting) children and provides parents with job preparation, work, and support services to enable them to leave the program and become self-sufficient.” Federal law does not require states to include descriptions in their state plans of how they intend to use TANF funds beyond the cash assistance population for non-cash services, and states have used their discretion in determining how much detail to put in their plans. For example, a state plan prepared by one of the selected states outlined its cash assistance program and provided descriptions of a variety of non-cash services it intends to provide. In contrast, the state plan of another selected state described its intentions to provide supportive services, particularly to families who have exhausted cash assistance benefits, but did not describe what those services would be. HHS officials also noted that they do not have the authority to require states to include basic information about their cash assistance programs, including state TANF eligibility criteria, benefit levels, and other program features. The financial reports on federal TANF and state MOE expenditures also provide some information on the types of non-cash services provided by states, but recent HHS studies and officials in most selected states we spoke to have noted some weaknesses in the information collected from states. Specifically, an HHS study from 2009 reviewed most states' expenditures and noted incomplete and inconsistent information related to HHS's current TANF expenditure reporting form for states. HHS identified similar issues in its reports to Congress required under the Claims Resolution Act of 2010, which examined more detailed information from states on TANF expenditures reported on the form. For example, the reports show that spending for child welfare services is often reported in the “other” category for non-cash services as well as the “authorized solely under prior law” categories for cash assistance and non-cash services. In addition, the reports noted inconsistencies across states in the activities counted under the form's reporting categories. Officials in 7 of the 10 selected states said that the form does not fully capture the purposes of their TANF spending. For example, one state official described how their state's use of TANF funds for child welfare services is not identifiable in the form's reporting categories. Also, current expenditure reporting does not provide data in a way that allows distinctions between expenditures made on behalf of cash assistance recipients to help them find employment and leave welfare and expenditures made for other individuals and families for purposes not directly related to welfare-to-work. While state plans and expenditure reports individually provide some information on non-cash services, even when considered together they do not provide a complete picture of state goals and strategies for uses of TANF funds. This is because the state plan is not required to be written in a way that connects to HHS's financial reporting categories. This makes it difficult to determine how and whether spending areas fit into each state's stated goals and strategies.
One state official we interviewed said that with the current reporting requirements, it was hard for them to know how much TANF funding each of their state programs was using and what benefit the state was getting from each program. As a result, the state developed an additional internal report that presents the costs of performing activities by program, which provides it with better information for assessing the return on investment for each program. Officials from another state also said that it might be helpful to have the state plan more closely tied to the TANF expenditure reporting form, but they would want very specific instructions for how this should be done. HHS officials noted the department's recent efforts to improve TANF expenditure reporting and acknowledged that reporting could be improved in certain other areas as well. HHS officials said they are revising the TANF expenditure reporting form to the extent permitted by law to include additional reporting categories, such as those related to child welfare services. They said they are also revising reporting instructions for states to improve consistency across states. Officials noted the importance of considering the implications for states of any changes or additions to current reporting requirements. For example, some state officials we interviewed described how new or revised reporting requirements can require costly and time-consuming changes to automated and other systems and practices in states and localities, and need to be carefully considered in terms of burden and appropriate timing for states. HHS officials were unable to provide a detailed plan with specific timeframes for the reporting revisions, but said that they are working on them, that they will seek input from relevant parties, and that when the revisions are finalized, they will be shared with Congress to assist in potential TANF reauthorization. In commenting on a draft of this report, HHS stated that it intends to publish draft revisions and instructions for comment in early 2013, with a goal of implementing the revisions for fiscal year 2014. The work participation rate for states, established in law and focused on families receiving cash assistance, serves as a key performance measure for state TANF programs. This focus remains, even though the cash assistance component of TANF no longer reflects how most TANF funds are spent. Our 2010 report shows that the emphasis on the work participation rate as a measure of program performance has helped change the culture of state welfare programs so that they focus on moving families off cash assistance and into employment. States are held accountable for ensuring that a specified percentage of all families receiving TANF cash assistance, and considered work-eligible, participate in one or more of the federally-defined allowable activities for the required number of hours each week. We noted in our 2010 report that while the rate specified in law is 50 percent, states have used various policy options, such as credits for caseload reductions and spending above required MOE amounts, to reduce their required rates below 50 percent, as permitted by law. TANF also provides states some flexibility regarding which families to include or exclude in calculating their rates. Our 2010 report noted that over the years, states have typically engaged about one-third of work-eligible families in allowable work activities nationwide and generally met their reduced rates.
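To illustrate how these policy options can lower a state's target, consider the caseload reduction credit, which reduces the 50 percent rate by roughly one percentage point for each percent the cash assistance caseload has declined since a statutory base year. The figures below are hypothetical, and the statutory computation involves additional adjustments (for example, excluding declines attributable to eligibility changes and crediting excess MOE spending), so this is only a simplified sketch:
\[
\text{effective required rate} = 50\% \;-\; \frac{C_{\text{base}} - C_{\text{current}}}{C_{\text{base}}} \times 100 \text{ percentage points}
\]
\[
C_{\text{base}} = 100{,}000 \text{ families}, \quad C_{\text{current}} = 70{,}000 \text{ families} \;\Rightarrow\; 50\% - 30 = 20\%
\]
Under these hypothetical figures, a state engaging about one-third of its work-eligible families would meet its reduced rate of 20 percent while still falling well short of the 50 percent rate specified in law.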
State participation rates have remained essentially the same since TANF's implementation, despite legislative changes in 2005 that were generally expected to strengthen the work requirements, as we also reported in 2010 and again in 2011. We also noted in 2012 that the TANF work participation rate requirements, as enacted, in combination with the allowable credits and flexibility provided to states, may not serve as an incentive for states to engage more families in work. Our previous work and our work in selected states also show that the work participation rate measure may not capture aid and services that states believe are important and that it may also serve as a disincentive to work with families with complex needs. All 10 selected states were using federal TANF funds to offer a range of non-cash services that could, for example, help remove barriers to work or keep families off the cash assistance caseload. A few of these states provided emergency aid to help meet low-income families' immediate needs, including housing, child care, and transportation. These efforts are not captured in the key performance measure, the work participation rate. Also, officials in several selected states said that the pressure to meet TANF work participation rate requirements causes them to focus on the “ready to work” cash assistance population, which can leave the “harder-to-serve” population without services. In our interviews with state officials in the 10 selected states, we found that eight said their states had developed or were developing performance measures of their own that include at least some TANF non-cash services. Officials from seven of these eight states said that their states had tracked information that included the number of people served by some state programs that used federal TANF funds for non-cash services. In addition, of these eight states, officials in Washington and the District of Columbia said they are going through a “re-design process” for their cash assistance program. For example, they are more closely aligning services across multiple state agencies to provide comprehensive services to meet the individual needs of families receiving cash assistance and to help them attain self-sufficiency. Washington officials said they are developing alternative measures of family well-being to measure the effectiveness of TANF as a whole for these families under the re-designed TANF program. Examples of measures Washington officials are considering for families receiving cash assistance include examining whether parents are attaining higher levels of education, training, and financial literacy; whether children have increased access to early childhood and preschool programs; and whether families have increased access to health care, stable housing, and supports for family conflict and domestic violence. Several features of TANF pose challenges to designing performance measures, as indicated by our previous work. In our 2006 report on improving performance accountability in grant programs, we noted that some grant features in particular affect the difficulties of designing accountability mechanisms. These features include the extent to which a grant operates as a funding stream rather than a distinct program and the extent to which it supports a limited or diverse array of objectives.
We also said in our 2012 guidance on designing evaluations that a block grant with loosely defined objectives that simply adds to a stream of funds supporting ongoing state programs presents a significant challenge to efforts to portray the results of the federal program. Moreover, we noted in 1995 that accountability for block grants can be difficult, since accountability provisions need to strike a balance between the potentially conflicting objectives of increasing state flexibility and attaining certain national objectives—a balance that inevitably involves philosophical questions about the proper roles and relationships among the levels of government in our federal system. The four stated TANF purposes in the law that generally define allowable use of funds for states are broad, so the ways in which states use TANF funds are often complex and varied across states. Also, as discussed previously, as allowed under TANF, states have used TANF funds to expand existing state programs that may be funded with other federal sources, such as Workforce Investment Act funds for employment and training services; CCDF funds for child care; and SSBG and Title IV-B and E funds of the Social Security Act for child welfare services. While accountability for the TANF block grant can be challenging, general principles of performance measurement can help guide the development of improved performance information. As we cited earlier, our previous work noted that an essential first step in any system of performance information and measurement is to establish goals to be achieved through the relevant program or funding stream. This work also identified characteristics of successful performance measurement systems. These include ensuring that performance measures are tied to program goals, demonstrate the degree to which the desired results were achieved, and take into account stakeholder concerns. In addition, real-world considerations, such as the cost and effort involved in gathering and analyzing data, must be taken into account while striving to collect sufficiently complete, accurate, and consistent data to be useful for decision makers. Moreover, other key decisions in establishing performance measures relate to whether to link penalties or rewards to any such measures. Although in many situations HHS can revise its reporting form to make adjustments to the reporting categories, HHS generally has limited authority to impose new TANF reporting requirements on states unless directed by Congress. As a result, many changes to the types of performance information that states are required to report would require congressional action. Over the years, TANF has clearly evolved beyond a traditional cash assistance program and now also serves as a source of funding for a broad range of services states provide to other eligible families. States still spend some portion of TANF funds on welfare-to-work programs for the cash assistance population, but their varied uses of TANF funds over time for non-cash services beyond this population raise questions about how state efforts are contributing to TANF purposes. Yet, without an accountability framework that encompasses the full breadth of states' uses of TANF funds, Congress will not be able to fully assess how funds are being used, including who is receiving services or what is being achieved. We acknowledge HHS's steps toward improving TANF expenditure reporting and its concerns about reporting revisions for states.
Any efforts to require more information or make changes to existing reporting and performance measures must consider this potential reporting burden for states. At the same time, gaps in TANF reporting and performance information make it difficult for policymakers to fully assess the workings of TANF. If Congress determines that TANF, as currently structured, continues to address its vision for the program, improved reporting and performance information will be important to enhance Congress' decision making and oversight of TANF in the future. To provide better information for making decisions regarding the TANF block grant and better ensure accountability for TANF funds, Congress should consider ways to improve reporting and performance information so that it encompasses the full breadth of states' uses of TANF funds. As HHS takes steps to revise expenditure reporting for TANF to better understand how states use TANF funds, it should develop a detailed plan with specific timelines for revising its financial reporting categories for expenditures of federal TANF and state MOE funds, to assist in monitoring its progress. We provided a draft of our report to HHS for review and comment. HHS indicated in its general comments (see appendix IV) that it agrees that current reporting on TANF expenditures provides limited information on the range of ways in which states use federal TANF and state MOE funds. HHS noted that it intends to publish draft revisions to its reporting categories for TANF expenditures and instructions for states for comment in early 2013, with a goal of implementing the revisions in fiscal year 2014. We have added this information to the report. We commend HHS's efforts to improve TANF expenditure reporting, and maintain that a detailed plan with timelines for revising the reporting categories will facilitate monitoring of its progress and help ensure that the revisions are implemented in a timely fashion. We also agree with HHS that as it works to improve financial reporting, it will be helpful to develop more refined categories of spending than the current categories in existing federal reporting, and to look at overall usage of funds, including transfers and MOE spending. In addition, HHS said that it lacks the authority to require states to provide certain types of information in their state plans, such as plans for using TANF funds or meeting MOE requirements as well as strategic goals or performance targets or measures. HHS noted that absent a statutory change, it cannot add additional categories of required information to the state plan, and any decision to establish such new requirements is one for Congress to consider. HHS also noted that the report underscores that a large share of TANF spending now goes to categories of spending other than cash assistance, and that improved information can assist in considering both appropriate allowable expenditure categories and the potential for performance measurements for these other categories of TANF and MOE spending. In addition to these general comments, HHS also provided us with technical comments that we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: HHS Categories of Expenditures for Cash Assistance and Non-Cash Services on the Form ACF-196
HHS's TANF expenditure reporting form, the Form ACF-196, includes 13 categories for states to report spending for non-cash services, including expenditures on non-recurrent short-term benefits or subsidized employment; reporting also covers TANF Contingency funds provided to states when certain triggers indicate increased needs. We combined HHS TANF expenditure reporting categories under the following “spending areas” for the purposes of our report.
Appendix III: Selected TANF-Related Information for the 10 Selected States
This appendix provides selected TANF-related information—such as TANF caseload and spending data—as well as data on numbers of families and children in poverty for each of the 10 states we reviewed in this report: Arkansas, California, Colorado, the District of Columbia, Florida, Illinois, Louisiana, New York, Utah, and Washington. States were judgmentally selected to capture a variety of state characteristics, including the proportion of federal and state funds states spent on TANF non-cash services; the proportion spent for specific non-cash services including child welfare, emergency aid, and other services, job preparation and work activities, and work supports such as child care; the total amount of federal and state expenditures for non-cash services; and organizational, geographic, and other considerations. These 10 states accounted for nearly half of all federal and state spending for TANF non-cash services in fiscal year 2010.
Arkansas (examples of programs and services provided): Education, training, and job search services for TANF caseload families as well as incentive bonuses for families no longer receiving cash assistance to continue finding employment. Career Pathways initiative to increase access to education credentials to help TANF caseload and other low-income families attain higher paying jobs through partnerships with local colleges and businesses. Subsidized child care services primarily for TANF caseload families through the state's child care system. Administration costs for the Departments of Workforce Services and Human Services.
California (examples of programs and services provided): Education and training for TANF caseload families only. Wage subsidies for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Administration costs for both the counties and the state, in addition to some related costs for contractors. Domestic violence services for TANF caseload families only. Temporary transitional services such as child protection, family preservation, and case management to meet a specific crisis situation.
Colorado (examples of programs and services provided): Child welfare services for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Homeless Prevention and Rapid Re-housing Program, in partnership with the Colorado Housing and Finance Authority, for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Administrative costs for both the counties and the state.
Child care tax credits for TANF caseload families as well as other eligible low-income families not on the TANF caseload.
District of Columbia (examples of programs and services provided): Child care vouchers for TANF caseload families as well as other eligible low-income families not on the TANF caseload, delivered through the Office of the State Superintendent for Education. Child welfare services for TANF caseload families only, with case management services provided through the Department of Child and Family Services. Emergency aid such as shelter, food, and clothing for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Home visits to TANF caseload families to identify barriers to employment and link these families to needed services. Teen pregnancy prevention program through the Department of Health, which provides sex education to young women.
Florida (examples of programs and services provided): Child welfare services including protective investigations, abuse hotlines, case management, and other family safety activities for TANF caseload families as well as other eligible low-income families not on the TANF caseload. School readiness child care program for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Education, training, and work subsidies for TANF caseload families as well as other eligible low-income families not on the TANF caseload.
Illinois (examples of programs and services provided): Child welfare screening, assessments, and investigations for TANF caseload families. Home visits and parent training for child welfare cases. Employment and training programs provided to TANF caseload families through contractors administered by the state. Child care certificate and voucher program for TANF caseload families as well as other eligible low-income families not on the TANF caseload.
Louisiana (examples of programs and services provided): Pre-kindergarten program to reduce out-of-wedlock pregnancies and encourage two-parent families by increasing literacy and responsible behavior for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Administration costs associated with multiple programs. Work-related activities, education, and skills training for TANF caseload families.
New York (examples of programs and services provided): Child protective and preventive services and maintenance of a child welfare hotline for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Administration costs including TANF eligibility determination. Work programs for TANF caseload families as well as other eligible low-income families not on the TANF caseload.
Utah (examples of programs and services provided): Employment, education, and job training services for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Subsidized employment for TANF caseload families. Administration and systems costs for the state. Healthy marriage promotion programs through education and training. After-school youth development programs to help prevent out-of-wedlock pregnancy.
Washington (examples of programs and services provided): Vocational education and GED support generally through community colleges as well as job preparation and job search assistance for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Subsidized employment for TANF caseload families only.
Child care assistance for TANF caseload families as well as other eligible low-income families not on the TANF caseload. Administration and systems costs for the state. Washington officials noted discrepancies between their fiscal year-end 2010 TANF expenditure data and the data HHS published for that year. Officials said further that the discrepancies are likely due to differences in reporting time frames between the state and HHS.
In addition to the contact named above, Robert Campbell, Gale Harris, Kristy Kennedy, Nhi Nguyen, Michael Pahr, and Michelle Loutoo Wilson made significant contributions to all aspects of this report. Also contributing to this report were James Bennett, Elizabeth Curda, Rachel Frisk, Alexander Galuten, Kathleen van Gelder, Thomas James, Edward Leslie, Jennifer McDonald, Ellen Phelps Ranen, Almeta Spencer, and Walter Vance.
Related GAO Products
Temporary Assistance for Needy Families: Update on Program Performance. GAO-12-812T. Washington, D.C.: June 5, 2012.
Temporary Assistance for Needy Families: State Maintenance of Effort Requirements and Trends. GAO-12-713T. Washington, D.C.: May 17, 2012.
Temporary Assistance for Needy Families: Update on Families Served and Work Participation. GAO-11-880T. Washington, D.C.: September 8, 2011.
Temporary Assistance for Needy Families: Implications of Recent Legislative and Economic Changes for State Programs and Work Participation Rates. GAO-10-525. Washington, D.C.: May 28, 2010.
Temporary Assistance for Needy Families: Fewer Eligible Families Have Received Cash Assistance Since the 1990s, and the Recession's Impact on Caseloads Varies by State. GAO-10-164. Washington, D.C.: February 23, 2010.
Welfare Reform: Better Information Needed to Understand Trends in States' Uses of the TANF Block Grant. GAO-06-414. Washington, D.C.: March 3, 2006.
Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-564. Washington, D.C.: April 5, 2002.
Welfare Reform: Challenges in Maintaining a Federal-State Fiscal Partnership. GAO-01-828. Washington, D.C.: August 10, 2001.
Designing Evaluations: 2012 Revision. GAO-12-208G. Washington, D.C.: January 2012.
Grants Management: Enhancing Performance Accountability Provisions Could Lead to Better Results. GAO-06-1046. Washington, D.C.: September 29, 2006.
Block Grants: Issues in Designing Accountability Provisions. GAO/AIMD-95-226. Washington, D.C.: September 1, 1995.
The TANF block grant, created as part of the 1996 welfare reforms, gives states flexibility to make key decisions about how to allocate funds to provide services to low-income families. The number of families receiving cash assistance declined by over half within the first 5 years of TANF, and states shifted their TANF priorities to other forms of aid, or non-cash services. In fiscal year 2011, states spent about 64 percent of nearly $31 billion in federal and state funds for such services, with federal funds accounting for nearly $9 billion. GAO examined (1) how states have used TANF funds for non-cash services and (2) what information is available to assess TANF performance for non-cash services and what challenges are involved in doing so. GAO reviewed past reports and relevant federal laws and regulations; analyzed state TANF expenditure information; and interviewed HHS officials, TANF experts, and officials in 10 selected states through site visits and phone conferences. These 10 states accounted for nearly half of all TANF spending for non-cash services in fiscal year 2010. Nationwide, states have used Temporary Assistance for Needy Families (TANF) block grant funds not only to provide cash assistance, but also to provide non-cash services, such as job preparation and work supports for low-income families and aid for at-risk children. Among our 10 selected states, job preparation and work activities included help with the job search process, skills training, and subsidized employment. California generally provides such services to families receiving cash assistance, while the other nine states extend some of them to other low-income families. Florida and Utah provide such services in coordination with the Workforce Investment Act one-stop center system. Work supports among these states mainly include child care subsidies for low-income working families. Services for at-risk children include child welfare activities, such as child abuse hotlines, investigative and legal services, child protection, and preventive services. TANF has allowed states to make funding decisions based on state priorities, particularly as cash assistance caseload declines freed up funds for non-cash services. However, according to officials in three states GAO reviewed, state decisions to fund a broad array of services can create tensions and tradeoffs between meeting cash assistance and other service needs. TANF's accountability framework provides incomplete information on how states' non-cash services are contributing to TANF purposes. Plans that states submit to the Department of Health and Human Services (HHS) outlining how they intend to run their TANF programs provide limited information on goals and strategies for non-cash services. In addition, past HHS reports and selected states identified some weaknesses in TANF expenditure reporting. For example, officials in one selected state noted that the use of TANF funds for child welfare services is not clearly identifiable in HHS's reporting categories for TANF expenditures. HHS is working to revise reporting categories, with a goal of implementing them for fiscal year 2014. No reporting requirements currently mandate performance information specifically on families receiving non-cash services or TANF's role in filling needs in prominent spending areas for TANF funds, like child welfare. These reporting gaps limit the information available for oversight of TANF block grant funds by HHS and Congress.
Generally, HHS has limited authority to impose new TANF reporting requirements on states unless directed by Congress. While GAO's previous work on grant design highlights several features of grants, such as broad and varied purposes, that pose challenges to the development of performance information and measures, it also lays out accountability principles that can help address these issues for TANF. Congress may wish to consider ways to improve reporting and performance information so that it encompasses the full breadth of states' uses of TANF funds. GAO recommends that HHS develop a detailed plan with timelines to revise reporting categories for TANF expenditures. In its response, HHS provided some timeframes that we added to the report, although we maintain that a more detailed plan will help HHS monitor its progress in completing this effort.
In December 1991, after more than 70 years of Communist rule, the Soviet Union came to an abrupt end, and the 12 new independent states emerging from the breakup started their transition to market-based democracies (see fig. 1 for a map of the 12 states). According to a 1993 study prepared for the U.S. Agency for International Development (USAID) and to legal experts, these countries inherited legal systems that were, in many respects, the antitheses of the rule of law. According to this study, under the Soviet Union, law was created by an elite without general participation and was designed to further the power of the state, not to limit it. In addition, the law was applied on an ad hoc basis to achieve political goals. Private economic activity was discouraged, and the Soviet Union lacked the basic legal framework needed to facilitate and regulate private enterprise. All the actors in the legal system were, to one degree or another, under the control of the Communist party and at the service of the state. The state procuracy (prosecutor) oversaw criminal investigations and prosecutions in a heavy-handed manner, affording defendants few, if any, rights. Law enforcement agencies were inexperienced in addressing many types of crimes that would come to plague the region and threaten other countries, such as organized crime and drug trafficking. The government and the Communist party controlled both access to legal education and the licensing of lawyers. With its tradition of unpublished and secret administrative regulation, the state also limited public access to the legal system and legal information; as a result, citizens regarded the legal system with suspicion and questioned its legitimacy, according to the USAID-sponsored study. According to legal experts, courts in the Soviet Union were weak, lacked independence, and enjoyed little public respect. Administration of justice was poorly funded, facilities were not well maintained, and judges were poorly paid and received very little, if any, training. For fiscal years 1992 through 2000, the United States obligated at least $216 million in assistance to help establish the rule of law in the new independent states of the former Soviet Union. For fiscal years 1998 through 2000, U.S. assistance under this program averaged about $29 million per year. Table 1 illustrates the estimated distribution of this funding among these countries. Over half of the funding has been devoted to four countries where USAID has designated rule of law development as a strategic objective: Russia, Ukraine, Georgia, and Armenia. While the remaining countries have received some rule of law assistance, USAID has not made rule of law development a strategic objective in these countries. According to USAID and State, the U.S. rule of law assistance program, along with other programs of U.S. assistance to Central and Eastern Europe and the new independent states, was envisioned by the U.S. government to be a short-term program to jump-start the countries of this strategically critical region on their way to political and economic transition. Many other foreign and U.S.-based donors have provided rule of law assistance to the new independent states. For example, the World Bank has a program to lend Russia $58 million for legal system reform. Many Western European countries, the European Union, and private international donors, such as the Ford Foundation and the Soros Foundation, have also financed projects similar to those funded by the United States.
Funding data for these activities were not readily available, and we did not attempt to determine the value of all of this assistance, given the difficulty involved in identifying the many different efforts and their costs. Fostering sustainable results through U.S. assistance projects is critical to the impact and ultimate success of this program. According to USAID's strategic plan, promoting sustainable development among developing and transitional countries contributes to U.S. national interests and is a necessary and critical component of America's role as a world leader. Strengthening the rule of law is a key component of USAID's strategic goal of building sustainable democracies. The right conditions for development can only be created by the people and governments of developing and transitional countries, according to USAID. In the right settings, however, American resources, including American ideas and values, can be powerful catalysts enabling sustainable development. Achieving sustainable project results is especially important in areas where development is likely to be a difficult and long-term process, such as establishing the rule of law in this region. Almost all U.S. funding for rule of law assistance in the new independent states of the former Soviet Union, authorized under the Freedom Support Act of 1992, is appropriated to USAID and the Department of State. However, a significant amount of assistance has been allocated to the Departments of Justice and Treasury through interagency fund transfers from USAID and State. As shown in figure 2, from fiscal years 1992 through 2000, USAID administered about 49 percent of program funding for rule of law activities in this region, while the Departments of Justice, State, and the Treasury administered about 51 percent. These agencies provide assistance under this program through a variety of means, primarily in the form of goods and services to governmental and nongovernmental organizations and individuals. For some projects, such as law enforcement training, U.S. government agencies provide the assistance directly. For other projects, such as institutional development projects, the agencies distribute aid to beneficiaries through contracts, cooperative agreements, and grants to nongovernmental organizations, private voluntary organizations, and firms located in the United States or overseas. Assistance is generally not provided directly to foreign governments through cash disbursements. The United States has taken a broad approach to providing rule of law assistance. The assistance approach generally incorporates five elements: (1) developing a legal foundation for reform, (2) strengthening the judiciary, (3) modernizing legal education, (4) improving law enforcement practices, and (5) increasing civil society's access to justice. (See fig. 3 for an illustration of these elements.) Developing a legal foundation for reform: Projects under this element have focused on assisting governments in passing legislation that would provide the legal basis for a transparent and predictable administration of justice system, including a post-communist constitution, a law establishing an independent judiciary, and post-Soviet-era civil and criminal codes and procedures. This element also includes efforts to strengthen the legislative process.
Strengthening the judiciary: Projects under this element involve strengthening the independence of the judiciary and the efficiency and effectiveness of the courts, including increasing the expertise and status of judges and supporting the development of judicial institutions.
Modernizing legal education: Projects under this element have concentrated on improving the legal education available to both students and practitioners of the law, including modernizing law school curricula, establishing legal clinics for law students, and developing indigenous continuing legal education opportunities for practicing lawyers and other legal professionals.
Improving law enforcement practices: Projects under this element have aimed to improve law enforcement practices by training procurators and other law enforcement personnel in modern techniques of criminal investigation and prosecution that are effective yet respectful of citizens’ civil rights.
Increasing civil society’s access to justice: Projects under this element have targeted the participation of nongovernmental organizations and the general population in the judicial sector to make legal information and access to justice affordable and realizable.
In general, USAID implements assistance projects primarily aimed at development of the judiciary, legislative reform, legal education, and civil society. The Departments of State, Justice, and the Treasury provide assistance for criminal law reform and law enforcement projects. Though the program has generally included these elements throughout its existence, it has evolved over the years in response to lessons learned about effectiveness and to adapt to emerging constraints. For example, in the earlier years of the program, the United States emphasized the promotion of western methods and models for reform. As it became clear that host country officials often did not consider these appropriate to their local contexts, USAID projects began to foster the development of more “home-grown” reforms. Also, in Russia, the United States has placed increasing emphasis on regional projects outside of Moscow instead of projects aimed at the central government, as regional officials were often more receptive to reform.
Establishing the rule of law in the new independent states of the former Soviet Union has proven to be an extremely complex and challenging task that is likely to take many years to accomplish. U.S. assistance has had limited results, and the sustainability of those results is uncertain. In each of the five elements of the rule of law assistance program, the United States has succeeded in exposing these countries to innovative legal concepts and practices that could lead to a stronger rule of law in the future. However, we could not find evidence that many of these concepts and practices have been widely adopted. At this point, many of the U.S.-assisted reforms depend on continued donor funding to be sustained. Despite some positive developments, the reform movement has proceeded slowly overall, and the establishment of the rule of law in the new independent states remains elusive.
A key focus of the U.S. rule of law assistance program has been the development of a legal foundation for reform of the justice system in the new independent states. (See fig. 4 for activities involving the legislative foundation of the rule of law assistance program.)
The United States has helped several of these countries adopt new constitutions and pass legislation establishing independent judiciaries and post-communist civil and criminal codes and procedures, as well as other legislation that supports democratic and market-oriented reform. Despite considerable progress in a few countries, major gaps persist in the legal foundation for reform, particularly in countries such as Ukraine, a major beneficiary of U.S. rule of law assistance, according to U.S. and foreign government officials we interviewed.
U.S. legislative assistance projects have been fruitful in Russia, Georgia, and Armenia, according to several evaluations of this assistance, which point to progress in passing key new laws. For example, according to a 1996 independent evaluation of the legal reform assistance program, major advances in Russian legal reform occurred in areas that USAID programs had targeted for support, including the passage of a new civil code and a series of commercial laws. This legislation included the 1996 Russian Federation Constitutional Law on the Judicial System and the 1998 Law on the Judicial Department, creating a more independent judicial branch within the Russian government. The Department of Justice also provided technical assistance and advice to lawmakers in the passage of Russia’s new criminal code, which, according to Justice, formally eliminated the Soviet laws against private economic activity, free speech, and political dissent. Georgia has also passed many key pieces of legislation with U.S. assistance in the areas of improving the judiciary, the procuracy (the prosecutor), the media, and the criminal justice process, according to another evaluation we reviewed. In Armenia as well, according to a 2000 USAID-sponsored evaluation, important legislation was adopted as a result of U.S. government assistance, including a new civil code, a criminal procedure code, the Law on the Judiciary, the Law on the Status of Judges, the Law on the Execution of Court Judgments, the Law on Advocates, and a universal electoral code. The results of assistance in this area are not easy to discern in all cases, however. For example, a 1999 USAID-sponsored evaluation of a portion of the legislative assistance and policy advice provided to Russia in the mid- to late 1990s indicates that the impact of this aid could not be independently verified.
U.S. projects to help countries achieve passage of critical legal reform legislation have not always been successful, and key legislation is lacking in several new independent states. Despite receiving U.S. assistance for legislative reform, Ukraine has not yet passed any new laws on the judiciary or new criminal, civil, administrative, or procedure codes since a new constitution was adopted in 1996. In Russia, a revised criminal procedure code, a key component of the overall judicial reform effort, has still not been adopted by the government, despite extensive assistance from the Department of Justice in developing legislative proposals. Furthermore, a major project in Ukraine to establish sustainable mechanisms for developing reform-oriented legislation in the future has not yet been successful. One component of the USAID assistance program in Ukraine has been advancing parliamentary expertise and institutions to provide public policy analysis that will result in a more active, informed, and transparent parliament.
However, according to U.S., foreign government, and private sector officials we interviewed, parliamentary committees are still weak, and parliamentary procedures for conducting hearings and related oversight activities have not been institutionalized. The vast majority of reforms still stem from the executive branch, which holds a disproportionate share of power and influence over the judicial and legislative branches of government.
The second key element in the U.S. government’s rule of law program has been to foster an independent judiciary with strong judicial institutions and well-trained judges and court officers who administer decisions fairly and efficiently. (See fig. 5 for activities under the judicial pillar of the rule of law assistance program.) The United States has contributed to greater independence and integrity of the judiciary by supporting key new judicial institutions and innovations in the administration of justice and by helping to train or retrain many judges and court officials. However, the U.S. efforts we reviewed to help retool the judiciary have had limited impact so far. The governments have not yet developed judicial training programs with adequate capacity to reach the huge numbers of judges and court officials who operate the judiciaries in these nations, and courts still lack full independence, efficiency, and effectiveness.
The United States has provided technical support and equipment to help establish and strengthen a variety of national judicial institutions. Though we could not verify the impact of this assistance on the effectiveness of their operations, representatives of the following institutions in Russia credit U.S. support for helping them enhance the independence and integrity of the judiciary.
The Supreme Qualifying Collegium in Russia: With the help of training, information, and equipment provided by USAID, this institution, composed solely of judges, is better equipped to oversee the qualification and discipline of judges, providing greater independence from political influence in court affairs.
The Judicial Department of the Supreme Court in Russia: USAID provided training, educational materials, and other technical assistance to strengthen this new independent institution, created in 1998 to assume the administrative and financial responsibility for court management previously held by the Ministry of Justice.
The United States has also helped support the following innovations in the administration of the judiciary that appear to help increase the judiciary’s integrity and independence.
Qualifying examinations in Georgia: With extensive assistance from USAID contractors, an objective judicial qualifying examination system was introduced in 1998. This step has resulted in the replacement of some poorly qualified judges with certified ones. Georgia has repeated the exam several times with decreasing amounts of technical assistance from the United States.
Jury trials in Russia: With U.S.-provided training and educational material on trial advocacy, judges are now presiding over jury trials in 9 of Russia’s 89 regions for the first time since 1917. Although the jury trial system has not expanded beyond a pilot phase, the administration of criminal justice has been transformed in these regions—acquittals, unheard of during the Soviet era, are increasing under this system (up to 16.5 percent of all jury trials by the most recent count).
At a broader level, the United States has attempted to strengthen the integrity of the judiciary by supporting a variety of educational projects for legal professionals within the court system. In particular, USAID has sponsored training and conferences and has provided educational materials for judges, bailiffs, and administrators, raising their understanding of new and existing laws and improving their knowledge and skills in operating efficient and effective court systems. According to a major aid contractor, training on the bail law in Ukraine sponsored by the Department of Justice has increased courts’ awareness of the alternatives to lengthy pretrial detention for criminal defendants. The United States has also helped develop manuals that provide practical information for judges and bailiffs on how to conduct their jobs. Historically, few such books have been widely available, which has seriously limited the development of professionalism in these legal careers. New teaching methods were introduced through U.S.-sponsored conferences. For example, according to training officials in the Russian Commercial Court, whereas conferences for their judges had traditionally been based mostly on lectures, U.S.-sponsored conferences stimulated discussions, were more interactive, included more probing questioning of the concepts presented, and provided a greater exchange of ideas. By all accounts, the information that the United States has provided on modern legal concepts and practices has been highly valued by its recipients.
However, efforts to foster sustainable new methods for training judges have had limited results, and the long-term viability of U.S.-sponsored improvements is questionable. In Ukraine, projects aimed at establishing modern judicial training centers have had very limited success. The two centers we visited that had been established with USAID assistance were functioning at far below capacity. One was used for official judicial training for only half a year and was later used only for training classes financed by international donors. The other center had been dismantled, and the training equipment provided by USAID had been dispersed to regional courts. In Russia, although training facilities have been in place for some time, their capacity for training judges is extremely limited. For example, with its current facilities, the Russian Court of General Jurisdiction can train each of its 15,000 judges only about once every 10 years. Plans for the development of a major new judicial training academy have not yet been implemented.
Where training centers were already in place, some innovative training techniques introduced through U.S. assistance have not been institutionalized. For example, the training organizations we visited in Russia praised the new practical manuals developed with U.S. assistance, but they did not plan to print subsequent editions. Also, although videotape-based training had been piloted with U.S. assistance for the Russian Commercial Court to train judges in far-flung regions, no further videotaped courses have been produced by the court.
Despite progress in recent years, fully independent, efficient, and effective judiciaries have not yet been established. For example, according to a senior U.S. official responsible for Department of Justice programs in Russia, much of the former structure that enabled the Soviet government to control judges’ decisions still exists, and Russians remain suspicious of the judiciary.
Furthermore, according to the State Department’s 1999 Human Rights Report, the courts are still subject to undue influence from the central and local governments and are burdened by large case backlogs and trial delays. Also, according to a 2000 USAID program document, serious problems with the court system in Russia continue to include the lack of adequate funding, poor enforcement of court judgments, and negative public attitudes toward the judiciary.
In Ukraine, according to Freedom House, a U.S. research organization that tracks political developments around the world, and according to U.S. and Ukrainian officials and experts we interviewed, relatively little judicial reform has taken place, other than the adoption of a new constitution in 1996 and the establishment of a Constitutional Court for its interpretation. To a large extent, the ethos and practices of the Soviet political and legal system remain in the Ukrainian legal community, according to a 1999 USAID-sponsored assessment. The justice system, in which an estimated 70 percent of sitting judges in Ukraine were appointed during the Soviet era, continues to be marked by corruption, inefficiency, and limited protection of criminal defendants’ rights. Freedom House recently reported that the judiciary is not yet operating as an independent branch of government. Furthermore, according to Freedom House, local judges are subject to influence and requests for particular rulings from government officials who financially support court operations. According to the USAID-sponsored assessment, courts suffer from poor administrative procedures, which nurture corruption, inappropriate influence over judges, a lack of transparency, and waste. Moreover, the courts are unable to enforce their decisions, particularly in civil cases. This is a key constraint to the development of the rule of law in Ukraine, as it results in a loss of public confidence in the courts, according to the assessment report. Human rights advocates told us that legal mandates for timely trials and standards for prison conditions are often violated, resulting in extended detentions under poor conditions.
USAID documents we reviewed indicate that significant judicial reform is still needed in other countries as well. In Georgia, where the judicial reform process is perceived by USAID as being more advanced, most criminal trials continue to follow the Soviet model and, in many cases, prosecutors continue to wield disproportionate influence over outcomes, according to the State Department’s Human Rights Report. Also, local human rights observers report widespread judicial incompetence and corruption, according to the report. In Armenia, State reports that although the judiciary is nominally independent, in practice courts are subject to pressure from the executive branch and to corruption, and prosecutors still greatly overshadow defense lawyers and judges during trials. According to USAID, a 1999 opinion poll showed that only 20 percent of the population in Armenia believe that court decisions are rendered fairly and in keeping with the law.
The third element of the U.S. assistance program has been to modernize the system of legal education in the new independent states to make it more practical and relevant. (See fig. 6 for activities under the legal education pillar of the rule of law assistance program.) The United States has sponsored a variety of special efforts to introduce new legal educational methods and topics for both law students and existing lawyers.
However, the impact and sustainability of these initiatives are in doubt, as indigenous institutions have not yet demonstrated the ability or inclination to support the efforts after U.S. and other donor funding has ceased.
The United States has provided some opportunities for law students and practicing lawyers to obtain useful new types of training. For instance, in an effort to supplement the traditionally theoretical approach to legal education in the new independent states of the former Soviet Union, USAID has introduced legal clinics into several law schools throughout Russia and Ukraine. These clinics allow law students to get practical training in helping clients exercise their legal rights. They also provide a service to the community by facilitating access to the legal system for the poor and disadvantaged. With the training, encouragement, and financing provided by USAID, there are now about 30 legal clinics in law schools in Russia and about 20 in Ukraine. USAID has also provided a great deal of continuing education for legal professionals, particularly in the emerging field of commercial law. This training was highly regarded by the participants, according to a 1999 USAID-sponsored evaluation of this project in Russia. Traditionally, little of this type of training was available to lawyers in the former Soviet Union.
USAID has included some design features in its projects intended to make them sustainable. Indigenous experts are increasingly used to provide the training as a way of making it more applicable in the local context and thus more sustainable, as these trainers remain in the country. Sustainability is also enhanced by USAID’s approach of training other trainers to perpetuate the teaching of trial advocacy skills and commercial law. According to the 1999 USAID-sponsored evaluation and an aid contractor we spoke to, materials on trial advocacy developed with U.S. assistance continue to be used in indigenous educational programs in Russia.
The United States, through long-term exchanges and partnership activities administered initially by the U.S. Information Agency and then by the Bureau of Educational and Cultural Affairs at the State Department, also brought young students, professionals, and faculty members to the United States to study U.S. law and legal education in depth. University partnerships also paired law schools in the United States and the new independent states to promote curriculum development and reform. We have observed some results from exchanges such as these: for example, the dean of the St. Petersburg State University Law School told us that his U.S.-funded visit to the United States inspired him to undertake major reforms at his institution, including the introduction of more practical teaching methods.
Despite the introduction of some positive innovations, however, U.S. assistance in this area has fallen far short of reforming legal education in the new independent states on a large scale. According to USAID-sponsored evaluations and project officials we spoke to, U.S. assistance has not been successful in stimulating reform in formerly Soviet law schools. Most law schools have not adopted the new, practice-oriented curricula that USAID has advocated and instead continue the traditional emphasis on legal theory.
For example, in Ukraine, the emphasis in law school curricula continues to be on public rather than private law, and law students are taught little about subjects such as enterprises, contracts, real and personal property, consumer law, intellectual property, banking law, or commercial law. Subjects relating to government regulation of businesses are also ignored. As a result, students are not taught many skills important to the practice of law, including advocacy, interviewing, case investigation, negotiation techniques, and legal writing.
In the area of using legal clinics to provide practical education, the impact of USAID assistance has been minor, and sustainability is not yet secure. Because of the small number of faculty advisers willing to supervise the students’ work, these clinics can provide practical experience to only a fraction of the law student population. While clinics appear to be increasing in popularity, not all universities routinely fund them or give course credit to participating students. In Ukraine, the United States has helped fund the establishment of a Ukrainian Law School Association to press for reforms in the Ukrainian legal education system, but this organization has remained relatively inactive, according to a major USAID contractor involved in this program. Also, a 2000 USAID-sponsored evaluation of rule of law projects in Armenia concluded that the considerable investment in that country’s largest law school has not resulted in the intended upgrading and modernizing of curricula and teaching methodology.
In the area of continuing legal education as well, it is unclear whether the new learning opportunities that the United States has been providing to legal professionals are sustainable over the long term. We could identify few organizations that routinely sponsor the types of training and conferences, or print the published materials, that the United States had initially provided. In Russia, a major aid contractor involved in developing legal texts and manuals for USAID could not identify any organizations that were engaged in reprinting these publications without U.S. or other donor financing. The private Ukrainian organization that has provided most of Ukraine’s continuing legal education depends primarily on U.S. funding to operate.
The United States has largely been unsuccessful at fostering the development of legal associations, such as bar associations, national judges associations, and law school associations, to carry on this educational work. U.S. officials had viewed the development of such associations as key to institutionalizing modern legal principles, practices, and professional standards on a national scale, as well as to serving as conduits for continuing legal education for their members. But these associations have not become the active, influential institutions that the United States had hoped they would. In Armenia, according to a 2000 USAID-sponsored study, none of the nongovernmental organizations that had been supported by USAID were financially viable in carrying out their continuing legal education goals. Sustainability is “not in the picture for the immediate future,” as the organizations were dependent on international donor assistance, according to the study.
The fourth component of the U.S. government’s rule of law program involves introducing modern criminal justice techniques to local law enforcement organizations. (See fig. 7 for activities under the law enforcement pillar of the rule of law assistance program.)
As part of this effort, the United States has provided many training courses to law enforcement officials throughout the new independent states of the former Soviet Union, shared professional experiences through international exchanges and study tours, implemented several model law enforcement projects, and funded scholarly research into organized crime. These programs have fostered international cooperation among law enforcement officials, according to the Department of Justice. However, we found little evidence that the new information disseminated through these activities has been routinely applied in the practice of law enforcement in the new independent states. Thus, the impact and sustainability of these projects are unclear.
U.S. law enforcement agencies, such as the Federal Bureau of Investigation, the U.S. Customs Service, and the Drug Enforcement Administration, have sent dozens of teams of experts to train their counterparts in the new independent states of the former Soviet Union on techniques for combating a wide variety of domestic and international crimes. The United States has also sponsored the attendance of these counterparts at U.S. training academies and at the International Law Enforcement Academy in Budapest, Hungary. According to State and Justice, this training is intended not only to strengthen the law enforcement capabilities and, hence, the rule of law in these countries, but also to increase cooperation between law enforcement agencies in the United States and the new independent states in investigating and prosecuting transnational crimes.
U.S. law enforcement officials we spoke to reported that, as a result of these training courses, Russian and Ukrainian officials have a greater appreciation of the legal issues surrounding international crimes of great concern to the United States, such as organized crime, money laundering, and narcotics and human trafficking. They also reported a greater willingness of law enforcement officials to work with their U.S. and other foreign counterparts on solving international crimes. According to a senior researcher conducting a State Department-funded study on the effects of law enforcement training, students participating in international police training funded in part by the U.S. government are significantly more willing to share information on criminal investigations with U.S. or other national law enforcement agencies than law enforcement officials who have not participated. Furthermore, according to Justice, there has been an increasing number of requests from the new independent states for bilateral law enforcement cooperation with the United States and a number of joint investigations of organized crime, kidnapping, and baby adoption scams.
However, the impact and sustainability of this training in building the law enforcement capabilities of the new independent states are unclear. We found little evidence in our discussions with senior law enforcement officials in Russia and Ukraine that the U.S. techniques taught in these training courses were being routinely applied by their organizations. In some cases, training officials cited the use of U.S.-provided training materials by some instructors or as reference materials in their libraries, yet none identified a full-scale effort to replicate or adapt the training for routine application in their training institutions. Furthermore, we identified only two studies providing data on the application of U.S. law enforcement training, neither of which conclusively demonstrates that U.S. techniques have been widely embraced by training participants.
According to a researcher we interviewed who has been evaluating U.S.-sponsored training programs under a grant from State, techniques taught at the International Law Enforcement Academy, which is partially funded by State, have had limited application in the day-to-day policing activities of participants. About 20 percent of training participants surveyed reported that they frequently use the techniques they learned in academy training courses in their work, according to his research. According to an evaluation of U.S. law enforcement training conducted by the Russian Ministry of Internal Affairs, about 14 percent of Russian law enforcement officials surveyed indicated that they have used the American experience introduced in this training in their practical work. According to Justice, this level of application of U.S. techniques suggests significant impact from U.S. training, and application and impact are likely to grow over time as the merits of these techniques become evident with use. However, due to limitations in the data available from these studies, we were unable to validate or dispute Justice’s assertions about the efficacy of this training.
The United States has funded several model law enforcement projects in Russia and Ukraine to help communities and law enforcement authorities establish community policing programs and address the problems of domestic violence and human trafficking more effectively. Some of these projects appear to have had some impact in the local communities where they have been implemented. For example, according to the State Department, in one Russian city the number of arrests for domestic violence more than doubled in one year as a result of a U.S.-funded model project. However, such projects are still in the early stages of implementation, and we could not find evidence that the new practices introduced by the United States have yet been adopted on a wider scale in Russia or Ukraine.
Research on organized crime in Russia and Ukraine, sponsored by USAID and Justice, has provided some information that may potentially serve as a foundation for developing new methods for fighting this type of crime. Officials at U.S.-funded research centers told us that their researchers helped develop a methodology for investigating and prosecuting corruption and organized crime that has been incorporated into some law school curricula. However, although project officials we spoke to asserted that the knowledge and analysis produced by the centers were being used, they could not determine how this research had actually been applied by law enforcement organizations in the new independent states. To date, we have found no evidence that these programs have led to sustainable and meaningful innovations in fighting organized crime in Russia and Ukraine.
The fifth element of the rule of law assistance program is the expansion of the general population’s access to the system of justice. (See fig. 8 for activities conducted under the civil society pillar of the rule of law assistance program.) In both Russia and Ukraine, the United States has fostered the development of a number of nongovernmental organizations that have been active in promoting the interests of particular groups, increasing citizens’ awareness of their legal rights, and helping poor and traditionally disadvantaged people gain access to the courts to resolve their problems.
While these projects have contributed to a greater demand for justice, many of these organizations will continue to rely on donor support for the foreseeable future, since they face difficulties in obtaining adequate funds domestically to continue operations.
U.S. projects have led to greater access by citizens to the courts. The United States has supported a variety of organizations devoted to protecting the legal rights of many different segments of society, including small business owners, the handicapped, victims of domestic violence, labor unions and individual workers, poor and displaced people, and homeowners and tenants. In Russia, the proliferation of such groups may have contributed, at least in small part, to the significant increase in the use of the courts—the number of civil cases in Russian courts increased by about 112 percent between 1993 and 1997, according to the statistics of the Russian Supreme Court. For example, in Russia, USAID has sponsored a project that has helped improve access to the legal system for trade unions and their members. According to the project manager, Russian lawyers supported by this project brought litigation in the Russian Constitutional and Supreme Courts on behalf of workers, which has led to changes to three national laws, bolstering the legal rights of millions of workers. In addition, in Ukraine, private citizens are increasingly taking their disputes on environmental matters to the courts and prevailing in their causes with the help of USAID-funded organizations. At least three active environmental advocacy organizations have emerged with the sponsorship of USAID and other donors to provide legal advice and representation. Some of these organizations have brought important lawsuits on behalf of citizens, resulting in legal decisions with far-reaching implications. For example, a group of more than 100 residents of one local community obtained a judgment against the Ukrainian government for violating zoning laws on the location of a city dump and won their demand that the dump be constructed at a different location in accordance with zoning laws, according to USAID.
Despite these organizations’ high level of activity in recent years, their long-term viability remains questionable. Most nongovernmental organizations we visited were dependent upon foreign donor contributions to operate. While some continued to function even after U.S. funding ceased, they often operated at a significantly reduced level of service. Some organizations received office space from the government, collected membership fees, and relied on the work of volunteers, but very few indicated that they received a large portion of their funding from domestic sources. Thus, the sustainability of even some of the most accomplished organizations, such as the Ukrainian environmental advocacy organizations, remains to be seen. These organizations had been largely supported by USAID for several years and have only recently been forced to operate more independently. In Armenia, according to a 2000 USAID-sponsored evaluation, none of the nongovernmental organizations that had been supported by USAID were financially viable in carrying out their public awareness goals. The evaluation found that these organizations’ activities were not sustainable in the long term because they were dependent on international donor assistance.
Despite nearly a decade of work to reform the systems of justice in the new independent states of the former Soviet Union, progress in establishing the rule of law in the region has been slow overall, and serious obstacles remain. As shown in table 2, according to Freedom House, the new independent states score poorly in the development of the rule of law and, as a whole, are growing worse over time. These data, among others, have been used by USAID and the State Department to measure the results of U.S. development assistance in this region. In the two new independent states where the United States has devoted the largest amount of rule of law funding—Russia and Ukraine—the rule of law is slightly better than average for the region, according to Freedom House scores. However, the scores show that the reform process remains slow and that the rule of law, as defined by these indicators, has deteriorated in recent years. The scores have improved in only one (Georgia) of the four countries in which USAID has made the development of the rule of law one of its strategic objectives and to which the United States has devoted a large portion of its rule of law assistance funding.
Three factors have constrained the impact and sustainability of U.S. rule of law assistance: (1) a limited political consensus on the need to reform laws and institutions, (2) a shortage of domestic resources to finance many of the reforms on a large scale, and (3) a number of shortcomings in U.S. program management. The first two factors, in particular, have created a very challenging climate for U.S. programs to have major, long-term impact in these states, but they have also underscored the importance of effective management of U.S. programs.
In key areas in need of legal reform, U.S. advocates have met steep political resistance to change. In Ukraine and Russia, lawmakers have not been able to agree to pass critical legal codes upon which reform of the judiciary must be based. In particular, Ukrainian government officials are deadlocked on legislation reforming the judiciary, despite a provision in the country’s constitution requiring them to do so by June 2001. Numerous versions of this legislation have been drafted by parties in the parliament, the executive branch, and the judiciary with various political and other agendas. The lack of progress on this legislation has stymied reforms throughout the justice system. In Russia’s Duma (parliament), where the civil and criminal codes were passed in the mid-1990s, the criminal procedure code remains in draft form. According to a senior Justice official, Russia is still using the autocratic 1963 version of the procedure code, which violates fundamental human rights. This official told us that the Russian prosecutor’s office is reluctant to support major reforms, since many would require that institution to relinquish a significant amount of the power it has had in the operation of the criminal justice system. While U.S. officials help Russian groups lobby for legislative reforms in various ways, adoption of such reforms remains in the sovereign domain of the host country.
In the legal education system as well, resistance to institutional reform has thwarted U.S. assistance efforts. While some legal education officials we spoke with advocate more modern and practical teaching methods, legal education remains rigidly theoretical and outmoded by western standards.
USAID officials in Russia told us that Russian law professors and other university officials are often the most conservative members of the legal community and the slowest to reform. A USAID-sponsored assessment of legal education in Ukraine found little likelihood of reform in the short term due to entrenched interests among the school administration and faculty who were resisting change. Georgia also suffers from deep-seated barriers to legal education reform, such as systemic corruption in admissions and grading, according to the 1999 USAID-sponsored evaluation. Furthermore, little consensus could be reached among legal professionals to overcome the cultural, regional, and professional barriers to forming effective national associations, according to U.S. officials and contractors we spoke with. For example, according to one law school dean we interviewed, efforts to establish a national law school association in Russia were met with resistance from state legal educational institutions in Moscow, which insisted on forming an alternative local association.
Policymakers have not reached political consensus on how or whether to address the legal impediments to the development of sustainable nongovernmental organizations. Addressing these impediments would include passing laws that would make it easier for these organizations to raise domestic funds and thus gain independence from foreign donors. For example, in Ukraine, according to a 1999 USAID report and Ukrainian officials we interviewed, the most important issues for nongovernmental organization development that need to be addressed by new legislation are granting nongovernmental organizations special tax status to enable them to raise funds for their activities and providing tax incentives for private organizations or individuals to donate funds. Moreover, administrative acts by government agencies in Ukraine allow the government to restrict the scope of nongovernmental organizations’ activities, and some of these organizations, particularly those involved in citizen advocacy efforts, face numerous obstacles from tax authorities and other administrative agencies. In Russia, according to the USAID report, taxes are collected without distinguishing between nonprofit and profit-making enterprises, and legislation that would provide significant tax incentives is unlikely to be passed in the near future because of the government’s critical need to raise revenues.
Historically slow economic growth in the new independent states has meant limited government budgets and low wages for legal professionals, and thus limited resources available to fund new initiatives. While Russia has enjoyed a recent improvement in its public finances, stemming largely from increases in the prices of energy exports, public funds in the new independent states have been constrained. Continuation or expansion of legal programs initially financed by the United States and other donors has not been provided for in government budgets, as illustrated by the following examples.
In Ukraine, according to officials of the Supreme Court, the government could afford to fund operations of the court’s judicial training center for only 6 months in the year 2000.
In the Russian Commercial Court, administrators explained to us that although the donated computer network funded by USAID was very helpful, the court did not have the funds to extend it to judges outside the court’s headquarters building in Moscow.
The system of jury trials in Russia could not be broadened beyond the 9 initial regions, according to a senior judiciary official, because it was considered too expensive to administer in the other 80 regions.
According to a senior police official we spoke to in Ukraine, police forces often lack funds for equipment, such as vehicles, computers, and communications equipment, needed to implement some of the law enforcement techniques that were presented in the U.S.-sponsored training.
In addition, the government’s ability or commitment to fund innovative training and other improvements for the judiciary also appeared weak in Georgia, where the government has not been able to pay judges their promised salaries in a timely manner. Nongovernmental organizations we visited said that it was difficult to raise funds from domestic sources to continue the advocacy, educational, and legal services programs that had initially been financed by the United States and other donors. For example, they indicated that while lawyers and other legal professionals valued the educational materials and opportunities offered through U.S. assistance, they generally could not afford to pay for the courses and materials privately.
U.S. agencies implementing the rule of law assistance program have not always managed their projects with an explicit focus on achieving sustainable results. Our review of project documentation and our discussions with senior U.S. government officials indicate that limited efforts were made to (1) develop and implement strategies to achieve sustainable results and (2) monitor project results over time to ensure that sustainable impact was being achieved. These are important steps in designing and implementing development assistance projects, according to guidance developed by USAID.
According to USAID guidance for planning assistance projects, project descriptions should define the strategies and processes necessary to achieve specific results, both in terms of immediate outputs and longer-term outcomes. We found that, in general, USAID projects were designed with strategies for achieving sustainability, including assistance activities intended to develop new and existing indigenous institutions to adopt the concepts and practices USAID was promoting. However, at the Departments of State, Justice, and the Treasury, the rule of law projects we reviewed often did not establish specific strategies for achieving sustainable development results. In particular, the law enforcement-related training efforts we reviewed were generally focused on achieving short-term objectives, such as conducting training courses or providing equipment and educational materials; they did not include an explicit approach for meeting longer-term objectives, such as promoting sustainable institutional changes and reform of national law enforcement practices. According to senior U.S. embassy officials in Russia and Ukraine, these projects rarely included follow-up activities to help ensure that the concepts taught were being institutionalized or otherwise having long-term impact. For example, according to the U.S. Resident Legal Advisor in Russia, U.S. agencies’ training efforts were intended to introduce new law enforcement techniques, but no effort was made to reform the law enforcement training curriculum so that the techniques would continue to be taught after the U.S. trainers left the country.
Federal Bureau of Investigation officials we spoke to indicated that their training courses in the new independent states rarely took a “train the trainer” approach aimed at providing training that is likely to be replicated by indigenous law enforcement staff. One senior Justice official described the training as “lobbying” to convince key law enforcement officers of the importance or utility of the techniques being taught, in hopes that they would someday be adopted.
USAID guidance also calls for establishing a system for monitoring and evaluating performance and for reporting and using performance information. Developing and monitoring performance indicators is important for making programmatic decisions and learning from past experience, according to USAID. However, we did not find clear evidence that U.S. agencies systematically monitor and evaluate the impact and sustainability of the projects they have implemented under the rule of law assistance program.
We found that the Departments of State, Justice, and the Treasury have not routinely assessed the results of their rule of law projects. In particular, according to U.S. agency and embassy officials we spoke to, there was usually little monitoring or evaluation of the law enforcement training courses after they were conducted to determine their impact. U.S. law enforcement agencies that have implemented training programs report to State on each training course but do not assess the extent to which the techniques and concepts they taught have had a broader impact on law enforcement in the countries where they conduct training. To date, State has funded only one independent evaluation of the law enforcement training activities. According to Justice, it evaluates the course curriculum at the International Law Enforcement Academy on a regular basis to help ensure that it is relevant to its participants and of high quality. In addition, Justice conducts some indirect measurement of long-term effectiveness by discussing the usefulness of training with selected participants months or years after they have completed the course. However, these evaluations do not systematically assess the longer-term impact and sustainability of the training, and they do not cover a large portion of the training that Justice conducts.
Although USAID has a more extensive process for assessing its programs, we found that the results of its rule of law projects in the new independent states of the former Soviet Union were not always apparent. The results of most USAID projects we reviewed were reported in terms of project outputs rather than impact and sustainability. For 6 of the 11 major projects we reviewed in Russia and Ukraine, available project documentation indicated that project implementers reported project results almost exclusively in terms of outputs. These outputs include the number of USAID-sponsored conferences or training courses held, the number and types of publications produced with project funding, and the amount of computer and other equipment provided to courts. Short-term measures and indicators alone do not enable USAID to monitor and evaluate the sustainability and overall impact of the projects. Project documentation we reviewed, including work plans, progress reports, and post-completion reports, rarely addressed the longer-term impact of the assistance, achieved or expected, or indicated how impact could be measured in the future.
Other measures or indicators that capture the productivity of U.S.-assisted organizations, or the extent to which U.S.-sponsored innovations are adopted in the country, would shed more light on long-term impact and sustainability. Examples of such measures would be the percentage of judges or bailiffs that a government itself has trained annually using new methods introduced through U.S. assistance or the percentage of law schools that sponsor legal clinics or include new practical courses in their curricula. Although USAID has reported broad, national-level indicators for its rule of law programs, without indicators or measures of the results of its individual projects, it is difficult to draw connections between the outputs produced and the national-level outcomes reported. Furthermore, only 2 of the 11 USAID projects we reviewed in Russia and Ukraine have been independently evaluated to assess their impact and sustainability.
State has recently recognized the shortcomings of its training-oriented approach to law enforcement reforms. As a result, it has mandated a new approach requiring implementing agencies to focus more on sustainable projects. Instead of administering discrete training courses, for example, agencies and embassies will be expected to develop longer-term projects. Justice has also developed new guidelines for the planning and evaluation of some of its projects to better ensure that these projects are aimed at achieving concrete and sustainable results. These reform initiatives are still in very early stages of implementation, and it remains to be seen whether future projects will be more explicitly designed and carried out to achieve verifiably sustainable results. One factor that may delay the implementation of these new approaches is a significant backlog in training courses that State has already approved under this program. As of February 2001, about $30 million in funding for fiscal years 1995 through 2000 had been obligated for law enforcement training that has not yet been conducted. U.S. law enforcement agencies, principally the Departments of Justice and the Treasury, plan to continue to use these funds for a number of years to pay for their training activities, even though many of these activities have the same management weaknesses as the earlier ones we reviewed. Unless these funds are reprogrammed for other purposes or the projects are redesigned to reflect the program reforms that State and Justice are putting in place, these activities may have limited impact and sustainability.
The U.S. government’s rule of law assistance program is a key element of the U.S. foreign policy objectives of fostering democratic and open market systems in the new independent states of the former Soviet Union. However, establishing the rule of law is a complex and long-term undertaking. After nearly a decade of effort and more than $200 million worth of assistance, the program has had difficulty fostering the sustainable institutions and traditions necessary to establish the rule of law in this region. Consequently, many of the elements of the Soviet-style legal system are still in place in the new independent states. Though this program was originally envisioned by the U.S. government as a short-term effort, achieving more significant progress is likely to take many more years. Progress is likely to remain elusive unless the new independent states make legal system reform a higher public policy and funding priority and U.S. agencies address the program management weaknesses we have identified in developing strategies for achieving impact and sustainability and in conducting performance monitoring and evaluation.
Although the United States has very limited influence over the political will and domestic resources of these countries, it could better design and implement its assistance projects, both those currently funded and those that it may fund in the future, with a greater emphasis on measuring impact and achieving sustainability.
To help improve the impact and sustainability of the U.S. rule of law assistance program in the new independent states of the former Soviet Union, we recommend that the Secretary of State, the Attorney General, the Secretary of the Treasury, and the USAID Administrator, who together control almost all of the program’s funding, require that each new project funded under this program be designed with (1) specific strategies for achieving defined long-term outcomes that are sustainable beyond U.S. funding and (2) a provision for monitoring and evaluating project results, using verifiable outcome indicators and measures, to determine whether the desired outcomes have been achieved and are likely to be sustainable. Furthermore, to improve the likelihood that project funds currently budgeted but not yet spent achieve sustainable results, the Secretary of State, the Attorney General, and the Secretary of the Treasury should jointly review the pipeline of projects and develop a plan for ensuring that all projects meet the above criteria, including reprogramming unspent assistance funds as necessary.
We received written comments on a draft of this report from USAID and the Departments of State and Justice, which are reprinted in appendixes II-IV. The Department of the Treasury had no comment on the report. State, Justice, and USAID generally agreed with us that the program management improvements we recommended are needed. State indicated that it had already begun to undertake management actions consistent with these recommendations. State also suggested that we encourage the U.S. law enforcement agencies to cooperate in its ongoing efforts to reprogram or reschedule assistance funds that have been budgeted but not yet spent. Justice agreed that improved planning and evaluation of its assistance activities are needed. USAID agreed that improvement is needed in measuring project results and that greater emphasis could be given to reviewing long-term sustainability issues. We have modified our recommendation to emphasize the importance of cooperation among the agencies in resolving the management weaknesses we identified.
USAID and State expressed concern that our assessment set too high a standard for program success. These agencies noted that we did not adequately recognize the complex and long-term nature of this development process. They also noted that the funding for rule of law development has been relatively meager compared with the total amount of assistance provided to the new independent states and considering the magnitude of the challenge. Furthermore, the agencies stated that achievement of a fully functioning rule of law system could not have been expected in the 8 years that the program has been in existence. We agree that establishing the rule of law in the new independent states is a complex and long-term undertaking, and we have made this observation more prominent in the report.
However, we did not use the full development of a rule of law system as the benchmark of success for this program. Instead, we looked for sustainable progress in each of the key elements of the U.S. assistance program as well as in the overall development of the rule of law. We found limited sustainable impact from U.S.-funded projects in the various elements of the program that we reviewed. Furthermore, we found that by the one measure that USAID and State have used to gauge overall rule of law development—the Freedom House rule of law score—the situation in the new independent states is relatively poor and has actually been deteriorating in some states. We do not agree that program funding levels were necessarily a significant factor limiting the impact and sustainability of the program; rather, we believe that better results could have been achieved with a more conducive political and economic environment and with better planning and monitoring efforts.
The agencies also indicated that we did not adequately recognize some significant program activities and achievements. These include the development of a more independent judiciary in Russia and the adoption of a number of reforms in the criminal justice system. USAID also stated that its encouragement and support of legal system reforms have been a valuable accomplishment, though not always resulting in the creation of a sustainable entity to promote reforms into the future. In addition, Justice stated that its training courses have been more successful than we have given them credit for, both by helping to establish valuable working relationships between law enforcement agencies in the United States and the new independent states and by fostering the application of modern law enforcement techniques. Hence, Justice indicated that our assessment was overly pessimistic about the prospects for achieving sustainable results from its programs. State indicated that we failed to acknowledge a major educational exchange component of the program. Where appropriate, we included additional information or amplified existing information on program results and activities. In most cases, however, our analysis showed that there was insufficient evidence to draw a link between the outcomes the agencies cited and U.S. assistance efforts.
USAID and Justice indicated that we did not adequately acknowledge the monitoring and evaluation systems that they currently employ in this program. USAID indicated that while it agrees that better project-level results measurement is needed, it currently employs a system of program monitoring that allows it to manage the program effectively. Justice pointed to the training curriculum evaluation that it undertakes to help ensure that its training programs are relevant and useful. We reviewed the information that both agencies provided and have included additional information about their systems in our report. However, we believe that none of the agencies employed a monitoring and evaluation process to systematically assess the direct impact of its rule of law projects in the new independent states of the former Soviet Union and measure progress toward the projects’ long-term objectives and desired outcomes.
State and USAID expressed concern that we did not rank the three factors that have limited the impact and sustainability of the program in order of importance. They believe that program management weaknesses are the least important factor and the lack of political consensus is the most important.
Furthermore, USAID stated that any limitations in the effectiveness of the rule of law assistance program should not be attributed to its monitoring and evaluation shortcomings. We agree that the political and economic conditions in this region have created a difficult environment for U.S. assistance efforts, and we have revised the report to emphasize this point. However, we believe that improved management practices could enhance the impact and sustainability of the program, and we discuss program management weaknesses in detail in the report because the U.S. government has more control over this factor than over the other two. Furthermore, insofar as project results are not routinely monitored and evaluated, the agencies' ability to manage for results is impaired. As arranged with your office, we plan no further distribution of this report for 30 days from its date unless you publicly announce its contents earlier. At that time, we will send copies to interested congressional committees and to the Honorable Colin Powell, Secretary of State; the Honorable Paul O'Neill, Secretary of the Treasury; the Honorable John Ashcroft, Attorney General; the Honorable Donald Pressley, Acting Administrator, U.S. Agency for International Development; and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix V. To (1) assess the impact and sustainability of the U.S. government's rule of law program and (2) identify factors that constrained impact and sustainability, we analyzed project documentation, interviewed knowledgeable officials, and reviewed assistance activities in the field. We obtained and analyzed information on the results of the U.S. rule of law assistance efforts funded between 1992 and 2000 in the new independent states of the former Soviet Union. However, we focused our review on four specific countries: Armenia, Georgia, Russia, and Ukraine. We selected these countries because they received the bulk of U.S. assistance, because the U.S. Agency for International Development (USAID) had designated rule of law development as a strategic objective in these countries, and because significantly more relevant information was readily available about the assistance activities in these countries than about the other eight new independent states. Furthermore, based on our discussions with USAID and State staff and our review of relevant documentation, we concluded that the U.S. rule of law assistance efforts in these countries were typical of the assistance provided throughout the region. Thus, we believe that our findings about the impact and sustainability of the U.S. assistance program are applicable to the entire region. To obtain detailed information on the impact and sustainability of specific rule of law assistance efforts, we examined projects funded in Russia and Ukraine since 1995, including 11 major USAID-managed projects and a variety of assistance activities managed by State. We selected these two countries based on congressional interest and because they have received about half of the assistance provided under this program. We selected these projects because they were the most likely to have been substantially completed and thus to have a track record that would allow us to assess whether they have begun to achieve significant results.
We did not include projects initiated in 1999 or thereafter. Specifically, we conducted the following work. In Washington, D.C., we interviewed headquarters officials at the departments and agencies implementing rule of law projects in these new independent states, including the Departments of State, Justice, and the Treasury and the U.S. Agency for International Development. We also met with individuals with expertise in criminal justice system reforms. For Russia and Ukraine, we reviewed Mission Performance Plans; USAID country planning documents; Department of Justice country work plans; and other reporting documents, funding agreements, contracts, and project evaluations. We obtained program funding information for fiscal years 1999 and 2000 from USAID and the Departments of State, Justice, and the Treasury, which we combined and analyzed with similar information we had obtained for earlier fiscal years in the course of previous work. We conducted fieldwork in Russia and Ukraine in August and October 2000. In each of these countries, we met with the Deputy Chief of Mission; senior U.S. officials representing agencies with rule of law programs; and numerous program staff, including contractors responsible for implementing the projects. We interviewed host country officials at the supreme, constitutional, general jurisdiction, and commercial courts; justice and interior ministries; law enforcement organizations; and, in Russia, the Judicial Department. We visited training schools for judges and prosecutors, law schools, and several demonstration projects. We also met with numerous representatives of nongovernmental organizations and other groups representing a broad spectrum of civil society in Moscow, St. Petersburg, Petrozavodsk, and Yekaterinburg in Russia and in Kiev, Lviv, and Kharkiv in Ukraine. Though we did not travel to the 10 other new independent states of the former Soviet Union or review specific projects in these states in depth, we obtained and reviewed all available evaluations of these projects to determine whether they have met their major objectives and to identify the factors affecting their success or failure. We also reviewed our prior reports on rule of law assistance and reports on foreign assistance to Russia and Ukraine. Rule of law is a component of democracy building, and although rule of law activities are closely related to other democracy-building activities, we did not evaluate other projects under the democracy program. We performed our work from July through December 2000 in accordance with generally accepted government auditing standards. The following are GAO's comments on the Department of State's letter dated March 16, 2001. 1. State indicated that it is working with law enforcement agencies to ensure that the pipeline of law enforcement training funds is used to achieve the maximum impact and sustainability. State suggested that we recommend that the U.S. law enforcement agencies cooperate with State in its ongoing efforts to reschedule or reprogram undelivered assistance. Our discussions with State officials indicate that increased and continued attention and cooperation among the agencies will be needed before this issue is fully resolved. As suggested by State, we have highlighted the need for this interagency cooperation in our recommendation to the agencies. 2. State pointed out that our report failed to address the long-term exchange and partnership activities of the U.S. Information Agency and its successor, State's Bureau of Educational and Cultural Affairs.
We inadvertently omitted the financial data State provided on these exchanges from our initial calculation of program funding, but we did include the exchanges in the scope of our review, insofar as time and resources allowed and as results were observable. We have revised the financial data to include the data on exchanges and have also included specific mention of these exchanges in our discussion of the legal education element of the Rule of Law Assistance Program. 3. State noted that the community of nongovernmental organizations in the region was not as dependent on Western funding as our report suggested, as evidenced by the large number of such organizations that receive no U.S. funding. The observations in our report did not pertain to the development of nongovernmental organizations overall. We noted questionable sustainability among those nongovernmental organizations in the rule of law field that have received a significant amount of U.S. funding under this program. The following is GAO's comment on the Department of Justice's letter dated March 23, 2001. Justice disagreed with our characterization of the extent to which law enforcement techniques taught in U.S.-sponsored training courses were being applied by training recipients. Justice stated that the data we cited supported the conclusion that its training has had significant impact and that greater application is likely to ensue as the efficacy of these techniques is validated through their use. Justice also questioned whether some additional data were available on the use of training techniques. We revised the report to include Justice's interpretation of the available data, but we also indicated that, due to data limitations, we could not validate or dispute this interpretation. No further data were available for us to elaborate on the extent of the application of the U.S.-taught techniques. The following are GAO's comments on USAID's letter dated March 23, 2001. 1. USAID disagreed with our analytical approach to assessing sustainability and the emphasis we placed on sustainability in evaluating program success. USAID pointed out that certain organizations can have significant impact on rule of law development even though they may not be sustainable over the long term. We believe that our approach to assessing the sustainability of the program is sound. In addition to reviewing the sustainability of the program's component activities, we also reviewed the overall sustainability of rule of law development as reflected in the Freedom House scores. Both approaches raise concerns about sustainability. Furthermore, we assessed both the impact and sustainability of the projects we reviewed and have cited examples in the report where organizations supported by USAID have had some impact regardless of whether they were sustainable. However, given the long-term nature of rule of law development and the many competing demands for limited assistance funds, we believe that sustainability of program results is critical to program success and was an appropriate emphasis for our analysis. 2. USAID indicated that we did not adequately acknowledge significant program results in the area of commercial law. In general, as we had discussed with USAID, due to time and resource constraints we did not assess the impact of USAID assistance in the area of commercial law. However, insofar as available evaluations provided information on accomplishments in this area, we included this information in our report. 3.
USAID criticized the report's use of references and quotes from evaluations as inappropriately taken out of context. We reviewed each reference to an evaluation and do not believe that we distorted the meaning of the information cited, as USAID suggested we had. However, where appropriate, we have revised the language or used additional or alternative references in our report to avoid potential misinterpretation. In addition to those named above, E. Jeanette Espinola, Mary E. Moutsos, Maria Z. Oliver, Rona H. Mendelsohn, and Jeffery Goebel made key contributions to this report.
For fiscal years 1992 through 2000, the U.S. government provided assistance to help the 12 new independent states of the former Soviet Union develop the sustainable institutions, traditions, and legal foundations that ensure a strong rule of law. This report (1) assesses the extent to which the program has had an impact on the development of the rule of law and whether the program results are sustainable and (2) analyzes the factors that may have affected the program's impact and sustainability. GAO found that the U.S. government's rule of law assistance program has had limited impact so far, and results may not be sustainable in many cases. The impact and sustainability of the U.S. rule of law assistance program have been constrained by several factors, including limited political consensus on reforms, a shortage of domestic resources for many of the more expensive innovations, and weaknesses in the design and management of assistance programs by U.S. agencies.